
Microsoft’s AI Controversy: A Deceased NBA Player Branded “Useless”

The recent Microsoft News AI blunder has raised concerns about the reliability and sensitivity of AI-generated content.


Microsoft's MSN news portal is under scrutiny after publishing an AI-generated article that labeled the recently deceased NBA player, Brandon Hunter, as "useless." The original piece, which appeared on a sports news site named "Race Track," went viral, drawing widespread criticism.

The article, which was seemingly AI-generated, described the 42-year-old former NBA player's passing in a distasteful manner, with garbled phrases like "handed away" and "performed in 67 video games." This incident has raised concerns about the reliability and sensitivity of AI-generated content, especially when dealing with sensitive topics like obituaries.

Previous AI Mishaps at Microsoft

This isn't the first time Microsoft News AI initiatives have come under fire. In August 2023, an AI-generated travel guide on the MSN news portal bizarrely recommended a local food bank in Ottawa, Canada, as a tourist attraction. Despite these repeated blunders, Microsoft continues to rely heavily on AI-generated content.

In 2020, the company even dismissed its team of MSN journalists, leading to a surge in AI-generated articles on the platform. Some of these articles have been criticized for being factually inaccurate, poorly composed, or even plagiarized.

The Broader Implications of AI in Journalism

The recent controversies surrounding Microsoft's AI-generated content highlight the broader challenges and implications of integrating AI into journalism. While AI can generate content rapidly and at scale, it lacks the human touch, sensitivity, and judgment required for certain topics.

The incident with Brandon Hunter's obituary serves as a stark reminder of the potential pitfalls of relying too heavily on AI for content generation. As AI continues to play a more prominent role in journalism, it's crucial for publishers to strike the right balance between automation and human oversight to maintain trust and credibility.

Microsoft leverages the same AI technology that underpins Bing Chat, its AI search chatbot. This may be problematic because Bing Chat is a very unreliable source of information. I have stopped using Bing Chat as any sort of proper search or research tool. Microsoft's chatbot is simply too loose with the truth, providing false quotes, making up names, and generally getting minor, but important, details wrong.

Bing Chat also doubles down on its inaccuracies, argues with users, and becomes confused easily. This matters because it means users have to micromanage the bot, use very clear prompts, and be willing to fact-check everything it produces. That is time-consuming and inefficient for what is supposed to be a search tool.

If search results cannot be relied upon, what is the point of using Bing Chat? The same applies to using it for content creation: if it is necessary to fact-check everything and iterate through multiple prompts, are these tools really helping us?

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
