As I reported yesterday, Elon Musk's xAI launched Grok-2 and Grok-2 Mini, multimodal AI models that bring text and image generation to X. As you might expect from a product overseen by Musk, moderation appears lax: Grok-2 is already sparking controversy by creating offensive and inappropriate imagery, calling its ethical and regulatory compliance into question.
Controversial Creations and Moderation Issues
Grok's chatbot can generate images from text prompts, but it has produced troubling pictures, including depictions of public figures in objectionable situations. Examples include images of Taylor Swift in lingerie and Kamala Harris holding a gun. The Verge's testing surfaced further problematic images, such as Barack Obama appearing to use cocaine and Donald Trump in a Nazi uniform, highlighting Grok's erratic content moderation.
Other AI image generators, such as OpenAI's tools, enforce stricter moderation guidelines that block imagery of real people, Nazi symbols, and harmful stereotypes. OpenAI also watermarks its outputs to mark them as AI-generated, an approach Grok does not follow. Although bypassing OpenAI's restrictions is occasionally possible, the company typically closes such loopholes swiftly.
Elon Musk's Approach and Regulatory Risks
Elon Musk describes Grok as “the most fun AI in the world” and favors minimal content moderation, in line with his broader approach to running X. This leniency is contentious in today's regulatory climate, however. The European Commission is examining potential breaches of the Digital Services Act by X, and the UK's Ofcom is preparing to enforce the Online Safety Act, which includes provisions covering AI-generated content.
Grok's launch comes amid heightened regulatory scrutiny. In the US, lawmakers are pushing for legislation on AI-generated content, driven by instances like explicit deepfakes of Taylor Swift. Images generated by Grok, such as those of Kamala Harris and Alexandria Ocasio-Cortez in lingerie, provoke worries about digital sexual harassment and possible real-life repercussions.
The reaction to Grok underscores the need for rigorous content moderation in AI applications. As US and European regulators ramp up oversight, X's approach to AI image generation will likely face closer examination. Addressing the ethical and legal complexities of Grok's outputs remains a pressing concern for the platform.