Researchers from Google, Duke University, and several media organizations have unveiled a comprehensive study detailing the rapid rise of AI-generated misinformation. The analysis, published in a preprint, introduces a vast dataset of misinformation fact-checked by websites like Snopes, dating back to 1995.
The researchers caution that the problem of AI-generated misinformation may be more severe than reported. Producing AI-generated disinformation takes far less effort than fact-checking it, leading to an endless stream of false and sometimes dangerous AI-generated content going viral on social media.
Massive Misinformation and Fact-Checking Challenges
The study's dataset, encompassing 135,838 fact checks, reveals that AI-generated images have become almost as prevalent as traditional forms of content manipulation. This surge in AI-generated misinformation coincides with the release of new AI image-generation tools by tech giants such as OpenAI, Microsoft, and Google itself.
What began with awkward responses from ChatGPT has quickly evolved into high-quality deepfakes. The integration of state-of-the-art generative AI image features into Adobe Photoshop and Lightroom is likely to accelerate this surge of hard-to-identify fakes. Google's recently launched AI Overviews in Search, meanwhile, serve blatantly false answers from the leading search engine. You can check out many of these "AI fails" in the subreddit aifailedme.
[Embedded Reddit post by u/WinBuzzer in r/aifailedme: "Google AI Overview recommends drinking large amounts of urine for kidney stones"]
Fact-checkers from Snopes, PolitiFact, and other websites have flagged a substantial increase in AI-generated disinformation. The researchers note that while AI-generated images were a minor issue until early last year, they have since become nearly as common as text-based misinformation. The dataset also shows that the majority of claims were recorded after 2016, following the introduction of ClaimReview, a tagging system that lets fact-checkers flag disinformation to platforms like Google, Facebook, and Bing.
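For readers unfamiliar with ClaimReview, it is structured markup based on the schema.org vocabulary that fact-checkers embed in their articles so platforms can surface verdicts automatically. The sketch below shows what such a record roughly looks like; the URLs, publisher name, and claim text are hypothetical placeholders, not entries from the study's dataset.

```python
import json

# Minimal sketch of a ClaimReview record using the schema.org vocabulary.
# All concrete values (URLs, publisher, claim text) are hypothetical.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-check/12345",   # hypothetical fact-check page
    "datePublished": "2024-05-07",
    "claimReviewed": "A viral image shows a celebrity at the 2024 Met Gala",
    "author": {
        "@type": "Organization",
        "name": "Example Fact Checkers"              # hypothetical publisher
    },
    "itemReviewed": {
        "@type": "Claim",
        # where the claim appeared (hypothetical)
        "appearance": {"@type": "CreativeWork", "url": "https://example.com/viral-post"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False"                     # human-readable verdict
    },
}

# Platforms typically consume this as JSON-LD embedded in the article's HTML.
print(json.dumps(claim_review, indent=2))
```

Because every record carries a machine-readable verdict and date, aggregating thousands of them into a dataset like the one in this study becomes straightforward.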
The Impact of AI Hype: Hoaxes on the Rise
The rise in AI-generated misinformation has been paralleled by a wave of AI hype, which may have influenced where fact-checking websites direct their attention. However, the study also shows that fact-checking of AI images has slowed in recent months, while traditional text and image manipulations have seen a renewed increase.
The research also highlights the prevalence of video hoaxes, which now make up roughly 60 percent of all fact-checked claims that include media. This trend underscores the growing challenge of combating misinformation across various forms of media.
Real-World Consequences and Ethical Concerns
AI-generated misinformation has had tangible effects, from fake nude images of celebrities like Taylor Swift to misleading photos of public events. For instance, fake photos of Katy Perry attending the Met Gala fooled observers on social media and even the star's own parents. These incidents illustrate the ease with which AI can create convincing yet false content.
[Embedded tweet by Blush and Banter (@blushandbanters), May 7, 2024: "Kate Perry in a safe and stunning #MetGala" (sic) pic.twitter.com/ybpKvxmoIt]
Sasha Luccioni, an AI ethics researcher at Hugging Face, emphasizes the difficulty in keeping track of the numerous examples of AI misinformation. The rise of AI-generated content has posed significant challenges for social media companies and search engines like Google, which have had to contend with fake celebrity images prominently featured in search results.