Amateur mushroom enthusiasts are being urged to exercise caution when purchasing foraging books on Amazon, as a growing number of these guides are suspected to have been penned by AI chatbots. The emergence of these AI-produced guides has raised concerns among experts, as they may contain misleading and potentially fatal advice.
Warnings from Reputable Organizations
The New York Mycological Society recently took to social media to sound the alarm about the dangers of relying on such guidebooks. Their post emphasized the critical importance of purchasing books from known and reputable authors. Misidentification, they warned, can have dire consequences. This sentiment was further echoed by the Guardian, which reported on suspicious titles such as “Wild Mushroom Cookbook: form [sic] forest to gourmet plate” and “The Supreme Mushrooms Books Field Guide of the South-West”. Both of these titles, among others, were flagged as likely being written by AI.
Evidence of AI Authorship
Originality.ai, a firm that specializes in detecting AI-generated content, examined samples from these books for the Guardian. Their findings were alarming: each sample received a 100% AI detection score, indicating a high likelihood that these books were crafted by chatbots. The prose in these books, which included phrases like “The sweet smell of freshly cooked mushrooms wafted through the air,” further raised suspicions.
The problem is exacerbated by the fact that there is no surefire way to determine whether content is AI-generated, especially as large language models become more sophisticated and human-like. Even a service like Originality.ai only provides a likelihood that content was written by AI. That said, Originality uses powerful tools to draw its conclusions, and a score of 100% is damning even though a certain percentage of false positives is possible.
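For a sense of why a detection score is best read as a likelihood rather than proof, the short Python sketch below applies Bayes’ rule to purely illustrative numbers. The detection rate, false-positive rate, and the share of suspect listings that are genuinely AI-written are all assumptions made for the example, not figures published by Originality.ai.

```python
# Illustrative only: how a detector "positive" translates into a probability
# that a book really is AI-written, given assumed error rates.

def posterior_ai(prior_ai: float, true_positive_rate: float, false_positive_rate: float) -> float:
    """Bayes' rule: P(AI-written | flagged) from an assumed prior and detector error rates."""
    p_flagged = true_positive_rate * prior_ai + false_positive_rate * (1 - prior_ai)
    return (true_positive_rate * prior_ai) / p_flagged

# Assumed values for illustration -- not real Originality.ai statistics.
prior_ai = 0.30            # suppose 30% of suspect listings are genuinely AI-written
true_positive_rate = 0.95  # detector flags 95% of AI-written text
false_positive_rate = 0.02 # detector wrongly flags 2% of human-written text

print(f"P(AI-written | flagged) = {posterior_ai(prior_ai, true_positive_rate, false_positive_rate):.2%}")
# With these assumed numbers the posterior is roughly 95%: strong evidence,
# but not absolute certainty, which is why false positives still matter.
```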
Potential Dangers Highlighted
The risks of misidentifying mushrooms were tragically underscored by a recent incident in Australia, where three people died after consuming what authorities believe were death cap mushrooms. These mushrooms, notoriously difficult to distinguish from edible varieties, have been responsible for fatalities worldwide. Leon Frey, a field mycologist and foraging guide based in the UK, pointed out serious flaws in the AI-written guides, such as encouraging readers to identify mushrooms by “smell and taste”, a practice that can be extremely dangerous.
Amazon’s Response and the Broader Issue
In response to these concerns, Amazon stated its commitment to ensuring a safe shopping and reading experience for its customers and said it is investigating the matter. However, as AI technology continues to advance, the issue of AI-written books and the potential dangers they pose is expected to grow. Jonathan Gillham, the founder of Originality.ai, also expressed concerns about AI-generated travel guidebooks, which could mislead readers into visiting unsafe locations.
There is also the risk that inaccurate AI content will eventually undermine the credibility of genuine authors. If consumers cannot tell which guides are accurate, they may simply decide not to buy one at all.
AI content in itself is not inherently bad; the real issues are disclosure and accuracy. If an author passes off AI-written text and research as their own, that is problematic, not least because AI chatbots are known to provide inaccurate information. Worse, they can present that inaccurate information in a logical, confident way that makes it seem legitimate.
For example, I have stopped using Bing Chat as any sort of proper search/research tool. Microsoft’s chatbot is simply too liberal with the truth, including providing false quotes, making up names, and generally getting minor – but important – details wrong.