Google AI Mode Launches in US, Gemini 2.5 Powers New Features

Google Search's AI Mode, now powered by Gemini 2.5, launches across the U.S., bringing Deep Search, live camera interaction, and agentic shopping announced at I/O 2025, as publisher concerns continue.

Google is significantly advancing its search capabilities with the full U.S. rollout of an enhanced “AI Mode,” now powered by the company’s sophisticated Gemini 2.5 model. Announced at its I/O 2025 conference on May 20, this revamped AI Mode is more than a simple chatbot.

It’s evolving into an intelligent assistant designed to make Search more interactive and capable, introducing features like “Deep Search” for comprehensive research, “Search Live” for real-time camera-based interaction, and “agentic capabilities” allowing the AI to perform tasks such as booking tickets or assisting with online shopping.

This strategic push aims to transform how users interact with Search, enabling them to tackle complex queries and complete tasks directly within the platform.

The upgraded AI Mode is becoming available to U.S. users via a new tab in Search and the Google app, with no Labs sign-up required. New AI-driven shopping tools, for example, will allow users to virtually “try on” clothes by uploading their own photos and even automate purchases through an “agentic checkout” feature.

Google executive Elizabeth Reid stated that AI Mode will be the initial platform for introducing Gemini’s “frontier capabilities,” and that as the company gathers user feedback, many of these advanced features will be integrated into the core Search experience.

However, these innovations emerge amidst ongoing dialogue with the publishing industry. Publishers have previously expressed significant concerns regarding the impact of AI-generated summaries on website traffic and revenue.

Some reports, like one from Bloomberg, indicated earlier this year that certain websites saw traffic decline by as much as 70% following the introduction of AI summaries. 

AI Mode: From Experimental Lab to Public Powerhouse

Previously an experimental feature within Google’s Labs program since early March 2025, AI Mode is now being deployed more broadly. A key element of this enhancement is the integration of a custom version of Gemini 2.5, which Google describes as its most intelligent model, now powering both AI Mode and the existing AI Overviews in the U.S.

Google explains that this AI-driven system utilizes a “query fan-out technique,” which involves breaking down complex questions into sub-topics and issuing multiple queries simultaneously to provide deeper web exploration.
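
In rough terms, the fan-out pattern amounts to decomposing a question and running the resulting sub-queries in parallel before a synthesis step combines the findings. The sketch below is a minimal Python illustration of that idea only; the decompose_question() and run_search() helpers are hypothetical stand-ins, since Google’s actual pipeline is not public.

from concurrent.futures import ThreadPoolExecutor

def decompose_question(question: str) -> list[str]:
    # Placeholder: a production system would use a language model to split
    # the question into focused sub-topics; here we return fixed stubs.
    return [f"{question} overview", f"{question} recent developments"]

def run_search(query: str) -> list[str]:
    # Placeholder for a real web-search call; returns a fake snippet.
    return [f"snippet for: {query}"]

def fan_out_search(question: str) -> list[str]:
    sub_queries = decompose_question(question)
    # Issue the sub-queries concurrently, then flatten the results so a
    # later synthesis step could combine them into a single answer.
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(run_search, sub_queries))
    return [snippet for results in result_lists for snippet in results]

print(fan_out_search("Gemini 2.5 in Google Search"))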

The company also noted that its earlier AI Overviews feature has seen considerable user adoption, reportedly driving an increase in Google usage for relevant queries by over 10% in key markets like the U.S. and India.

The path to this more sophisticated AI Mode included incremental updates, such as the expansion of visual input capabilities in April 2025. This update allowed users to upload or take photos to receive AI-generated responses about images by combining the Gemini model with Google Lens technology.

Deeper Research and Live Interactions Redefine Search

Among the new functionalities slated to debut first in Google’s Search Labs is “Deep Search.” Google elaborates that this feature elevates the query fan-out technique, enabling it to issue hundreds of searches and synthesize information to produce expert-level, fully-cited reports in minutes, aiming to significantly reduce research time for users tackling complex subjects.

Further enhancing user interaction, “Search Live” will incorporate capabilities from Google’s Project Astra directly into Search. This will allow users to engage in real-time conversations with Search about what they see through their camera by tapping a “Live” icon in AI Mode or Google Lens, effectively turning Search into an interactive learning partner. This live camera-sharing functionality is also planned for the Gemini app on iOS, extending its earlier availability on Android.

AI Turns Agent: Task Completion and Personalized Commerce

Google is also embedding “agentic capabilities,” stemming from its Project Mariner initiative, into AI Mode. “Agentic” refers to AI systems that can understand goals and take sequences of actions to achieve them. This will empower the AI to complete tasks for users, such as finding and comparing event tickets. Google announced it will begin with event tickets, restaurant reservations, and local appointments, partnering with companies like Ticketmaster and StubHub.
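
As a rough illustration of that goal-to-actions idea only (not Google’s Project Mariner implementation, which is not public), an agentic system can be pictured as a plan-and-act loop. The Python sketch below uses hypothetical plan_next_step() and execute_step() helpers as stand-ins for a real planner and real browser or API actions.

def plan_next_step(goal: str, history: list[str]) -> str | None:
    # Placeholder planner: a real agent would ask a model what to do next
    # given the goal and what has already been done.
    steps = ["search ticket listings", "compare prices and seats", "present options"]
    return steps[len(history)] if len(history) < len(steps) else None

def execute_step(step: str) -> str:
    # Placeholder for a real browser or API action.
    return f"done: {step}"

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    # Keep planning and acting until the planner decides the goal is met.
    while (step := plan_next_step(goal, history)) is not None:
        history.append(execute_step(step))
    return history

print(run_agent("find and compare event tickets"))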

The shopping experience within AI Mode is also undergoing a significant transformation. Beyond product discovery, users will be able to virtually try on apparel by uploading a single image of themselves, a feature detailed on the Google Blog.

Google claims its AI model for this “understands the human body and nuances of clothing.” An “agentic checkout” feature will also enable users to instruct AI Mode to purchase an item using Google Pay when it meets a pre-set price. To offer more tailored results, AI Mode will soon provide personalized suggestions based on past searches and an optional connection to other Google apps, starting with Gmail. Furthermore, for queries involving complex datasets, such as sports statistics or financial information, AI Mode will be capable of analyzing the data and generating custom interactive charts and graphs.
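
The price-triggered purchase behind “agentic checkout” can be pictured as a simple threshold check on a tracked item. The sketch below is an assumption-laden illustration with hypothetical get_current_price() and checkout() helpers, not Google’s actual Google Pay integration.

def get_current_price(item_id: str) -> float:
    # Placeholder price lookup; a real system would track the merchant listing.
    return 74.99

def checkout(item_id: str) -> None:
    # Placeholder for the actual purchase step (Google's flow uses Google Pay).
    print(f"purchasing {item_id}")

def maybe_buy(item_id: str, target_price: float) -> bool:
    # Buy only once the tracked item reaches the user's pre-set price.
    price = get_current_price(item_id)
    if price <= target_price:
        checkout(item_id)
        return True
    return False

maybe_buy("example-item-123", target_price=80.00)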

While Google promotes these features as enhancing user convenience, the company continues to face scrutiny. Publishers have previously raised concerns about content scraping practices and the integration of ads within AI-generated summaries, fearing negative impacts on direct website traffic and ad revenue.

The Chegg lawsuit, alleging unauthorized use of educational material for AI summaries, underscores these tensions. Google has consistently stated that its AI features are intended to complement traditional search results and has emphasized user control over personal data for new personalization features.

The competitive AI search landscape also remains active, with services like Perplexity AI and OpenAI’s ChatGPT offering alternative AI-driven information retrieval methods.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
