Google’s Bard chatbot is supposed to learn from human feedback, but the contractors who review its answers say they are pressured to work fast and not worry about quality. Bard often produces wrong information because reviewers don’t have enough time to check the facts, one of them told The Register.
Bard is a language model that can produce content based on the user’s input and feedback. Google says that Bard uses artificial intelligence and natural language processing to create personalized and immersive experiences for users.
To make Bard more accurate, Google hires workers to evaluate the bot’s responses and give feedback. This feedback is supposed to help Bard improve its future answers. But Ed Stackhouse, a contractor who works for Appen, a data services provider hired by Google, says workers don’t have enough time to do their job properly.
Reviewers of Bard content must read a prompt and Bard’s response, search online for the relevant information, and then write notes that effectively fact-check the chatbot.
“You can be given just two minutes for something that would actually take 15 minutes to verify,” Stackhouse says. If that is the case, improving Bard may not be Google’s main priority, and the AI will improve more slowly and continue to be inaccurate.
Stackhouse gives the example of Bard describing a business: “You would have to check that a business was started at such and such date, that it manufactured such and such product, that the CEO is such and such,” he said. With so many facts to verify, the checkers often run out of time.
Can We Really Rely on Chatbots?
Let’s be clear: this problem is not unique to Bard. Or, more specifically, inaccuracies are not exclusive to Google’s AI. As a frequent user of Microsoft’s Bing Chat, I am becoming increasingly frustrated by the inaccuracies it surfaces. Worse, Bing will often double down when it is wrong, backing up its misinformation with even more.
Google can somewhat hide behind the fact that Bard is experimental and is not specifically a search tool. Bing Chat is also experimental, but it is fully designed to be a search tool. At the moment, it does not function well enough to be a proper search alternative, not least because inaccuracies are so rife that you never know whether what Bing tells you is true.
It is unclear whether Microsoft uses outside contractors in the way Google does to verify Bing Chat’s information. Either way, Stackhouse says companies are underestimating the fact-checking effort, and that AI will keep spreading misinformation unless changes are made.