Microsoft has taken steps to refine the responses generated by its artificial intelligence service, Copilot, after the AI produced fictitious press statements about the death of Russian political prisoner Alexei Navalny, including remarks falsely attributed to Vladimir Putin. Navalny, who died on February 16 while serving a combined sentence of more than 30 years on extremism and fraud charges, was a central figure of opposition to the Russian government.
Copilot's error came to light when a journalist at Sherwood Media asked the chatbot for a news article about Navalny's death; it responded with fabricated statements from U.S. President Joe Biden and a counter-response from Putin, both of which were promptly debunked.
This is hardly new for Copilot, previously known as Bing Chat. Since its launch a year ago, the chatbot has been a frequent source of misinformation. My personal experience with Bing Chat/Copilot has been frustrating: misquotes, incorrect information, and an inability to handle numbers properly are all common occurrences.
Google Gemini Facing Misinformation Problems
Google Gemini – previously Bard – can be just as bad. Last week, Google was forced to pause Gemini's AI image generation of people after identifying that the model disproportionately depicted people with darker skin tones when asked to generate images of individuals in both historical and fictional scenarios. In response, Google announced the temporary suspension of the feature to prevent the propagation of these biases.
Gemini has also struggled with historical accuracy. For example, users highlighted instances where Gemini depicted an 1820s-era German couple as a Native American man and an Indian woman, rendered a Founding Father as an African American man, and portrayed members of the 1943 German military as Asian and indigenous soldiers. Jack Krawczyk, Google's Senior Director of Product overseeing Gemini, acknowledged the inaccuracies and assured users that the team is actively working to rectify them.