Fake news: OpenAI's 'deepfakes for text', GPT-2, may be too dangerous to be released
OpenAI -- a research company backed by Elon Musk -- has created an artificial intelligence system called GPT-2 that is capable of writing fake news. The system can also generate works of fiction, and it has been described as so dangerous that it may not be publicly released.
The stories written by GPT-2 have been called "deepfakes for text", and they can be generated by feeding the system just a few words. The writing is far more sophisticated than most AI-generated text, and the news stories it produces are convincing enough to raise serious concerns about the potential for misuse.
The aim of San Francisco-based OpenAI was to create a language algorithm that could be used to answer questions, translate text and so on. But, as MIT Technology Review reports, the company's policy director Jack Clark said the team "quickly discovered it's possible to generate malicious-esque content quite easily".
The group usually makes all of its projects and research available to the public but, in the case of GPT-2, it will release only a much smaller version of the model.
The system was tested by the Guardian, and when it was fed the first line of Nineteen Eighty-Four, it produced a plausible continuation of the text. Similarly, when primed with a news story, it was able to generate fake quotes from politicians (a rough sketch of this kind of prompting follows Clark's comments below). This highlights just why GPT-2 is not going to be unleashed on the public, at least not for the time being. Clark says:
We need to perform experimentation to find out what they can and can't do. If you can't anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.
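To make that prompting mechanism concrete, here is a minimal sketch using the smaller GPT-2 checkpoint that OpenAI has made public, accessed through the third-party Hugging Face transformers library -- an illustrative assumption, not OpenAI's own tooling:

```python
# A rough, illustrative sketch of the prompting described above, using
# the smaller GPT-2 model that OpenAI has released publicly. Assumes
# the third-party Hugging Face "transformers" library is installed;
# this is not OpenAI's own tooling.
from transformers import pipeline

# Load the publicly released small GPT-2 checkpoint.
generator = pipeline("text-generation", model="gpt2")

# Prime the model with a few words -- here, the opening line of
# Nineteen Eighty-Four, as in the Guardian's test.
prompt = "It was a bright cold day in April, and the clocks were striking thirteen."

# Sample a continuation; do_sample=True gives varied rather than greedy output.
result = generator(prompt, max_length=100, do_sample=True, top_k=50)

print(result[0]["generated_text"])
```

Each run samples a different continuation. The full-size model that Clark describes was withheld precisely because its continuations are so much more convincing than output like this.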
Image credit: charles taylor / Shutterstock