Google has made its most advanced AI model, Gemini 2.5 Pro (Experimental), available to all users of the free Gemini web app—quietly replacing the older version and removing its previous paywall.
The update began rolling out on March 29 with no formal blog post or press release. Instead, users noticed the change inside the Gemini web app itself, where for some accounts responses were now attributed to “Gemini 2.5 Pro (exp)” by default. Free users can also select Gemini 2.5 Pro manually via the model selector.
The company later confirmed the rollout on X, writing: “The team is sprinting, TPUs are running hot, and we want to get our most intelligent model into more people’s hands asap.” That urgency points to a shift in strategy—Google is no longer restricting its top-tier AI behind a paywall, but pushing it to everyone, for free.
Gemini 2.5 Pro is taking off 🚀🚀🚀
The team is sprinting, TPUs are running hot, and we want to get our most intelligent model into more people’s hands asap.
Which is why we decided to roll out Gemini 2.5 Pro (experimental) to all Gemini users, beginning today.
Try it at no… https://t.co/eqCJwwVhXJ
— Google Gemini App (@GeminiApp) March 29, 2025
From Premium Access to Public Release in Under a Week
Just five days earlier, on March 25, Gemini 2.5 Pro had been made available exclusively to paying subscribers of Gemini Advanced and users of Google AI Studio. That version was initially limited to the $19.99/month Google One AI Premium plan. By the end of the week, the experimental version of the same model had become the default for everyone using the Gemini app, including those on the free tier.
This abrupt expansion suggests multiple motivations: scaling adoption, accelerating feedback cycles, and positioning Gemini as a true competitor to models from OpenAI, Anthropic, and xAI. It also reflects confidence in the model’s performance and real-world readiness, even in an experimental form.
A Reasoning-Focused Model With New Tradeoffs
Gemini 2.5 Pro’s biggest shift lies in how it thinks. Unlike traditional generative models that rely on single-pass predictions, this model performs multi-step logical verification to strengthen its reasoning.
A context window of one million tokens enables Gemini to process entire books, legal contracts, or codebases in one go. On the MRCR 128K benchmark, which tests comprehension across long-form content, Gemini achieved 91.5% accuracy and retained 83.1% at the full context length, significantly ahead of GPT-4.5’s 36.3%.
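For developers, that long-context behavior is reachable through Google AI Studio’s API. The sketch below shows one plausible way to feed a large document to the model using the google-generativeai Python SDK; the model ID “gemini-2.5-pro-exp-03-25”, the file name, and the prompt are illustrative assumptions, not details confirmed by the article.

```python
# Minimal sketch: sending a long document to Gemini 2.5 Pro (Experimental)
# through the google-generativeai SDK. Model ID and file path are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

# Read an entire contract (or book, or codebase dump) into a single prompt;
# a one-million-token window leaves room for very large inputs.
with open("contract.txt", "r", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    ["Summarize the key obligations in this contract:", document]
)
print(response.text)
```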
Google’s model also ranks highly in science reasoning, scoring 84% on the GPQA Diamond benchmark and leading the LMArena leaderboard by a margin of nearly 40 points over the next-best model.

On math-focused tests like AIME 2024, Gemini reached 92.0% accuracy, beating GPT-4.5 (36.7%) and DeepSeek R1 (79.8%). For multimodal tasks involving both text and images, it scored 81.7% on the MMMU benchmark—again ahead of Claude 3.7 Sonnet and GPT-4.5.
However, performance varies across categories. Gemini scored 52.9% on SimpleQA, a factual recall test, trailing GPT-4.5’s 62.5%. In autonomous software engineering scenarios (agentic coding), Claude 3.7 Sonnet still leads with 70.3%, while Gemini trails at 63.8%.

Developer Tools and Multimodal Features
While OpenAI’s o3-Mini High leads in live code generation (74.1% on LiveCodeBench), Gemini 2.5 Pro holds its own at 70.4%. It performs even better in code editing tasks. On the Aider Polyglot benchmark—designed to test multilingual code modification—Gemini scores 74.0%, edging out Claude and DeepSeek’s latest models.
Gemini’s native support for multimodal inputs means it can process images, video, code, and text in the same query. That capability extends to tools like Gemini Live, which allows screen and camera-based assistance. Google has also integrated Gemini into Workspace tools like Gmail, Docs, and Drive—providing smart summaries, email search enhancements, and document navigation aids.
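That same multimodal handling is exposed via the API. As a rough sketch, an image and a code file can be combined in one request with the google-generativeai SDK; the model ID, file names, and prompt here are assumptions used only for illustration.

```python
# Minimal sketch: one multimodal query mixing an image, code, and text.
# Model ID and file names are illustrative assumptions.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

screenshot = Image.open("error_screenshot.png")          # e.g. a stack-trace screenshot
with open("handler.py", encoding="utf-8") as f:          # the code that produced it
    snippet = f.read()

response = model.generate_content(
    [
        "This screenshot shows an exception raised by the code below. "
        "Explain the likely cause and suggest a fix.",
        screenshot,
        snippet,
    ]
)
print(response.text)
```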
Android Takeover and User Response
Gemini’s expansion goes beyond just improved models. It’s also replacing Google Assistant as the default voice AI on Android devices. The change has drawn mixed reactions. Some users have praised Gemini’s performance in tasks like coding and research.
Others have criticized the shift, citing missing Assistant features and less intuitive voice interactions.
Meanwhile, Google has made Gemini’s AI capabilities more accessible inside Workspace tools, including Gmail and Docs. Gemini is also incorporating search history for personalized responses—hinting at future overlaps between AI and traditional search. Whether this strategy pays off will depend on how users respond.