US – OpenAI's ChatGPT has seemingly limitless communicative powers, and now a group of researchers has asked the US government to restrict them.
Launched in November 2022, the chatbot has been used to write covering letters, dating app messages and poetry, and to pass an MBA exam.
But now researchers from OpenAI, Stanford University and Georgetown University are concerned that as generative language models become more accessible, they could be used to spread disinformation. The researchers have published a report outlining potential threats and calling on the US government to pursue a range of mitigations, including access controls, hardware export restrictions and media literacy campaigns.
‘We don’t want to wait until these models are deployed for influence operations at scale before we start to consider mitigations,’ says Josh A Goldstein, one of the lead authors of the report.
In Anti-provocation Platforms we explored how the internet is increasingly seen as a civic space. For it to uphold values of civility, those who create tools for use online must be as diligent as these researchers in protecting our shared online spaces.
Strategic opportunity
With popular social media platforms becoming ever more unpredictable, consumers are burnt out by online negativity. They want to know that there are online spaces they can trust. Be unwavering in your commitment to keeping your online spaces safe and focused on their intended purpose.