
World of Warcraft Players Troll Content Bot, Highlighting Dangers of AI-Generated Articles

World of Warcraft players trolled an AI content bot by submitting intentionally misleading or nonsensical posts, which the bot then dutifully turned into articles.


AI content, we are constantly told, is the future of online information. However, this past week has shown that the chatbots are not quite ready to take over just yet. Gaming site zleague.gg has been using a bot to farm content from Reddit, which its “editorial” team then runs through AI to churn out articles. Thanks to World of Warcraft players, the scheme has been exposed and zleague.gg has been given a healthy dose of its own medicine.

The folks on the subreddit r/wow – the premier World of Warcraft community on Reddit – noticed that articles based on posts from the subreddit were appearing on zleague.gg. It is unclear which AI the outlet is using, but the articles were of poor quality and read unnaturally.

For example, the site had an article titled “Should I create all my WoW characters on the same realm? Player responds.” If you spend any time on Reddit, you will know that is about as classic as a Reddit post title gets.

It seems zleague.gg has been manually checking gaming subreddits, feeding an AI the title and content of posts, and having it generate an article from them. While AI-generated articles can be hard to detect, zleague.gg did not make even a minimal effort to mask its ploy.
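To give a sense of how low the barrier is, here is a minimal sketch of such a pipeline, assuming an OpenAI-style chat API; the model, prompts, and tooling zleague.gg actually uses are unknown.

```python
# Hypothetical sketch of a Reddit-post-to-article pipeline.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY
# environment variable; this is not zleague.gg's actual setup.
from openai import OpenAI

client = OpenAI()

def post_to_article(title: str, body: str) -> str:
    """Turn a Reddit post's title and body into an 'article' draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You write short news articles for a gaming site."},
            {"role": "user",
             "content": f"Write an article about this Reddit post.\n"
                        f"Title: {title}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(post_to_article(
        "Should I create all my WoW characters on the same realm?",
        "I keep hopping realms and losing track of my alts...",
    ))
```

A script along these lines can turn any scraped post into a passable-looking article in seconds, which is precisely what makes the practice so easy to abuse.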

WoW Redditors Hit Back with Trolling

If you feed content to a chatbot such as Bing Chat, ChatGPT, or Google Bard and ask it to generate an article, it can do so easily. However, the output is prone to inaccuracies, robotic sentence structure, and odd word choices. It is, of course, possible to cover those tracks by editing the AI's output, but zleague.gg was not even doing that.

And for a while it was working. The website was churning out thousands of articles per day, a volume that alone should have been enough for readers to realize the site was using AI. It is worth noting there is nothing inherently wrong with sourcing news from Reddit. That was not really zleague.gg's plan, though. The outlet was simply attempting to push out as much content as possible to get promoted in Google Search results.

Realizing the play, the WoW crowd on Reddit decided to hit back and troll the rogue gaming news site. Reddit user u/mataric made a post saying the community is not against AI content as a rule, but is against low-effort content that uses AI to pass off other people's work as original. Following this post, other members started shitposting to see if zleague would bite.

As you might expect, it did bite, and the outlet started “writing” articles based on the shitposts. It even produced an article based on the very post calling it out for reproducing Reddit content. Oh, the irony.

AI Content and the Growing Concern

There is no doubt that this is all quite funny, but there is also something serious happening. Zleague has shown how easy it is to build a news website, use AI to populate it quickly, and get it ranked in Google Search results. It also shows how readily AI can be used to peddle fake news.

Originality.AI – a service that checks whether content is AI-generated – recently reported on instances of Google blocking sites it suspects of using AI content from its AdSense service. However, there is still no 100% reliable way to distinguish AI-generated content from human-written content.

It is certainly interesting that Google is willing to take a strict approach. However, in the email to a content creator that Originality.AI cites, Google refers to “automatically generated content”. It is the sort of vague term that will likely lead to plenty of issues down the line. With no sure way to detect AI content, it is possible Google will end up flagging legitimate outlets for using AI.

At the same time, Google risks looking hypocritical. This week, the company confirmed development of its Genesis AI model. Google Genesis is a generative AI tool, meaning it can create new content from existing data. It uses natural language processing and machine learning techniques to analyze and synthesize information from sources such as websites, social media, and databases, and can then generate news articles that are relevant, accurate, and engaging.

Is Google clamping down on AI content with one hand while promoting it with the other, via a tool that could completely transform and automate journalism? I wrote about this concern and the dangers of a content exodus when Bing Chat launched in February. One way or another, online content is changing, and sorting the good from the bad is the big challenge we all face.

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
