OpenAI’s ChatGPT o3-mini Reasoning Model Now Reveals More Details About Its Thinking

OpenAI has expanded o3-mini’s reasoning display in ChatGPT, revealing more of its decision-making while keeping key details proprietary.

OpenAI is making its o3-mini model more transparent in ChatGPT, introducing an enhanced reasoning display that reveals more about how the AI reaches conclusions.

The move follows growing competitive pressure, particularly from DeepSeek’s R1 model, which fully exposes its decision-making steps. Unlike DeepSeek, OpenAI is stopping short of full transparency, opting instead to present structured summaries rather than detailed reasoning chains.

The change applies to both free and paid ChatGPT users, with those using high-reasoning mode—a setting that prioritizes logical verification—experiencing a clearer breakdown of how the model arrives at its answers. While this update improves usability, it also signals OpenAI’s attempt to balance AI explainability with concerns over competitive advantage.

OpenAI’s decision comes at a time when greater AI transparency is becoming a major industry trend. Hugging Face recently introduced Open DeepResearch, a free, open-source alternative to OpenAI’s premium Deep Research assistant.

Meanwhile, Google expanded its Deep Research capabilities within Gemini AI, bringing the research-focused tool to the Gemini app on Android as well. These developments highlight a shift toward AI models that emphasize transparency and accessibility.

OpenAI’s Explanation for the Change

Noam Brown, an OpenAI researcher working on multi-agent reasoning, clarified on X that the updated chain-of-thought (CoT) output does not expose every step of the process.

Rather than showing the full unfiltered reasoning process, OpenAI is applying a post-processing step to simplify explanations and remove potentially unsafe content.

“To improve clarity and safety, we’ve added an additional post-processing step where the model reviews the raw chain of thought, removing any unsafe content, and then simplifies any complex ideas,” an OpenAI spokesperson shared with TechCrunch.

This method provides clearer AI-generated reasoning while allowing OpenAI to maintain control over its proprietary models. It also ensures that explanations remain accessible to non-English speakers, with translated reasoning chains available in multiple languages.
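The quoted description amounts to a filter-then-summarize pipeline over the raw chain of thought. The sketch below illustrates that general shape only; the keyword blocklist and the truncation-based "simplifier" are toy stand-ins invented for illustration, not OpenAI's actual moderation or summarization models.

```python
# Hypothetical sketch of a chain-of-thought post-processing step:
# review the raw reasoning, drop unsafe lines, then simplify what remains.
# UNSAFE_KEYWORDS and the truncation "simplifier" are illustrative stubs.

UNSAFE_KEYWORDS = {"exploit", "weapon"}  # toy blocklist, not a real safety model


def remove_unsafe(steps):
    """Drop any reasoning step containing a blocked keyword."""
    return [s for s in steps if not any(k in s.lower() for k in UNSAFE_KEYWORDS)]


def simplify(steps, max_len=60):
    """Toy 'simplifier': shorten each surviving step to a brief summary."""
    return [s if len(s) <= max_len else s[:max_len].rstrip() + "..." for s in steps]


def postprocess_cot(raw_cot: str) -> str:
    """Turn a raw chain of thought into a filtered, simplified summary."""
    steps = [line.strip() for line in raw_cot.splitlines() if line.strip()]
    return "\n".join(simplify(remove_unsafe(steps)))
```

In a real system, both stages would be model-driven rather than rule-based, but the ordering matters either way: filtering before simplification ensures unsafe content never reaches the summary the user sees.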

OpenAI o3-mini benchmark performance on Competition Math (Source: OpenAI)

Competitive Pressures Are Forcing More Transparency

OpenAI’s decision to tweak o3-mini’s reasoning display is driven in part by the rapid evolution of AI research tools.

Competitors like DeepSeek are taking a more open approach, fully revealing their model’s logic step by step. Some researchers argue this improves trust, particularly in fields such as academic research, law, and medicine, where AI-generated outputs must be verifiable.

The rise of open-source AI models is also a factor. Hugging Face’s Open DeepResearch offers a free alternative to OpenAI’s Deep Research tool, which is available as part of the $200-per-month ChatGPT Pro plan, emphasizing transparency and community-driven development.

Even within OpenAI’s ecosystem, demand for greater transparency is growing. In a recent Reddit AMA, OpenAI’s Chief Product Officer Kevin Weil addressed this topic directly: “We’re working on showing a bunch more than we show today — [showing the model thought process] will be very, very soon. TBD on all — showing all chain of thought leads to competitive distillation, but we also know people (at least power users) want it, so we’ll find the right way to balance it.”

OpenAI CEO Sam Altman addressed the company’s stance on AI transparency and open-source practices. When asked about adopting approaches similar to DeepSeek’s, which showcase AI reasoning steps, Altman responded, “Yeah we are gonna show a much more helpful and detailed version of this, soon. Credit to R1 for updating us.”

Regarding OpenAI’s historically proprietary approach, Altman acknowledged the need for change, stating, “I personally think we have been on the wrong side of history here and need to figure out a different open source strategy.” He added that while not everyone at OpenAI shares this view, discussions are underway to align more closely with open-source principles.

Kevin Weil also hinted at OpenAI’s broader AI development priorities, saying “More agents: very very sooooooon. I think you’ll be happy.”

A Reddit user also pressed Weil on OpenAI’s training methodology:

“o3 (reasoning models) generate new answers. u then train a GPT-4o model on those new answers. Is this distillation? Is this what’s being done to train GPT-5?”

In response, Weil explained that OpenAI uses iterative training on previous models.

OpenAI o3 benchmark AIME 2024 + GPQA Diamond (Source: OpenAI)

The Trade-Off Between Transparency and Proprietary Protection

OpenAI’s hesitation to fully reveal AI reasoning stems from a concern known as competitive distillation. If OpenAI were to expose every reasoning step, competitors could analyze its models and replicate their strengths, accelerating AI development in rival companies.

By presenting only a structured summary of its reasoning process, OpenAI retains control over its intellectual property while offering improved explainability.

This balancing act reflects broader industry tensions between transparency and competitive secrecy. While companies like DeepSeek prioritize full visibility, OpenAI’s approach aligns more closely with businesses that integrate AI into proprietary products.

Many enterprise clients, for instance, require both explainability and model security to protect sensitive data and maintain compliance with privacy regulations.

OpenAI o1 Pro mode benchmarks official
Benchmarks of OpenAI’s o1 Pro mode in ChatGPT (Image: OpenAI)

How This Affects ChatGPT Users

For everyday ChatGPT users, the update means responses from o3-mini will include more structured explanations of how the model arrived at an answer. However, this will not fundamentally change how the model operates—reasoning processes remain behind the scenes, with only select parts visible.

The biggest beneficiaries are likely to be researchers, analysts, and technical users who rely on AI models for in-depth analysis. More detailed reasoning chains may make it easier to identify logical inconsistencies, verify AI-generated content, and troubleshoot model behavior in cases where ChatGPT provides unexpected answers.

At the same time, OpenAI’s update does not fully satisfy calls for greater AI explainability, particularly in professional and academic settings. Critics argue that partial transparency may still obscure potential biases and errors, limiting the model’s reliability in situations that require full accountability.

Beyond competitive dynamics, OpenAI’s decision to modify o3-mini’s reasoning display also reflects growing regulatory scrutiny around AI transparency. Policymakers in both the European Union and the United States are considering stricter requirements for explainability in AI models, particularly those used in high-stakes applications like healthcare, finance, and legal analysis.

The EU’s AI Act, which is expected to set a global standard for AI governance, includes provisions that would require companies to disclose how AI models generate decisions.

OpenAI and its competitors could face legal mandates to increase transparency beyond what is currently offered. Meanwhile, U.S. regulators are exploring similar measures, particularly regarding AI accountability in sectors where automated decision-making impacts consumer rights and public safety.

For now, OpenAI’s o3-mini update signals a shift toward greater visibility—but with clear limitations. As the AI industry evolves, the question of how much transparency is enough remains unanswered.


Last Updated on March 3, 2025 11:32 am CET

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
