Musk’s Grok AI Got Caught Censoring Criticism of Musk and Trump

Elon Musk's Grok AI has been found to selectively censor potentially critical answers about Musk and Trump

Elon Musk’s AI chatbot, Grok, has been found to filter answers related to its owner and U.S. President Donald Trump while treating discussions about other public figures differently.

Investigations revealed that Grok blocked certain critical responses about Musk and Trump, citing misinformation concerns, yet has also generated inflammatory statements about them. These contradictions highlight ongoing concerns about AI development, bias in automated systems, and the influence of corporate interests on digital conversations.

Over the weekend, social media users reported that when asked, “Who is the biggest misinformation spreader?” with the “Think” setting enabled, Grok 3’s “chain of thought” revealed that it had been “explicitly instructed not to mention Donald Trump or Elon Musk.” The chain of thought represents the model’s reasoning process as it works out an answer to a given question.

xAI’s head of engineering, Igor Babuschkin, confirmed the issue in a post, saying an employee had pushed the change to the system prompt because they thought it “would help.” He added that “once people pointed out the problematic prompt we immediately reverted it” and that “Elon was not involved at any point.”

The inconsistency in how Grok applies its moderation policies has raised concerns about its reliability and the motivations behind its filtering system.

AI Moderation Contradictions and the Free Speech Debate

Grok AI’s approach to content moderation appears inconsistent. While it avoids discussions that could be politically damaging to Musk and Trump, it does not apply the same restrictions across the board. In some cases, it has refused to answer politically charged questions altogether, while in others, it has responded without restrictions.

The incident came just days after Grok was found generating a response suggesting that both Musk and U.S. President Donald Trump “deserve the death penalty.” Babuschkin publicly acknowledged that failure as well, stating: “Really terrible and bad failure from Grok.”

Shortly after, xAI released an update to prevent the chatbot from generating extreme responses. However, these changes did not address the broader concerns about why Grok selectively applies content restrictions in the first place.

Grok AI’s Role in X’s Business Strategy

Beyond ethical debates, Grok’s moderation choices appear to align with X’s financial interests. The chatbot, which was initially free, is now exclusive to X Premium+ subscribers, and the price of that highest-tier plan has doubled. The shift from an open AI tool to a monetized feature suggests that Grok is not just a conversational assistant but a key part of X’s revenue model.

Meanwhile, X is reportedly seeking additional funding at a $44 billion valuation, raising further questions about the platform’s financial stability. Given Grok’s role in X’s paid services, its moderation policies may not be purely about preventing misinformation but rather about protecting a product that is increasingly central to the company’s financial success.

Grok AI’s Integration Into X’s Advertising and AI Expansion

Grok AI is no longer just a chatbot; it is now also being integrated into X’s AI-powered ad-generation tools. This positions the AI as a commercial asset beyond user interactions, reinforcing its role in shaping the platform’s broader business strategy. If an AI is simultaneously responsible for moderation and advertising, questions arise about whether content restrictions are implemented to maintain a specific brand image rather than as a purely technical measure.

Musk has consistently framed X as a platform that promotes free speech, often criticizing other AI models for their restrictive content policies. However, Grok’s approach raises concerns about whether its moderation reflects these values or instead serves as a tool for reputation management.

That Grok actively avoided generating responses unfavorable to Musk, while playing an increasing role in X’s monetization efforts, suggests that its function may extend beyond that of a neutral assistant.

Grok AI’s Moderation Policies Raise Questions About AI Neutrality

The debate over AI neutrality is not unique to Grok. Companies such as OpenAI and Google have faced scrutiny for perceived bias in their models, with accusations that certain narratives are prioritized over others. However, Grok’s case stands out because of Musk’s previous statements against censorship and his criticisms of moderation policies on competing platforms. If an AI developed under Musk’s leadership enforces selective filtering, it contradicts the principles he has publicly advocated.

The contradictions in Grok’s moderation approach raise broader concerns about the governance of AI systems and their increasing influence over public discourse. As AI-generated content becomes more integrated into platforms like X, the ability to control narratives through automated systems becomes a growing issue. If Grok’s selective moderation is any indication, future AI-driven platforms may not just moderate content—they may shape the visibility of certain topics based on the priorities of the companies that operate them.


Last Updated on March 3, 2025 11:28 am CET

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
