Google has launched ‘Search Live,’ a significant new capability within its experimental AI Mode that allows users to have a free-flowing, back-and-forth voice conversation with its search engine. The feature, rolling out to U.S. users enrolled in the AI Mode experiment, transforms the traditionally text-based search experience into an interactive dialogue, marking a direct challenge to the conversational voice features popularized by rivals like OpenAI’s ChatGPT.
Accessible via a new “Live” icon in the Google app, the feature is designed for on-the-go or multitasking scenarios where typing is impractical. According to the company’s official announcement, a user could ask for tips on packing a suitcase and then ask a follow-up question without re-initiating the search. The system can continue the conversation even if the user switches to another app, and past conversations can be revisited in the AI Mode history.
This move is a critical step in Google’s broader strategy to evolve Search from a list of blue links into a comprehensive “answer engine.” It is powered by a custom version of Google’s Gemini AI model, leveraging advanced voice capabilities to deliver not just links, but synthesized, audible answers.
A More Stable, Capable AI Foundation
The launch of Search Live arrives just a day after Google solidified its artificial intelligence strategy by moving its powerful Gemini 2.5 Pro and 2.5 Flash models into “general availability” for production use. This signals a strategic shift from the rapid, sometimes criticized, experimental sprints of the past toward a more stable and predictable platform for developers. The company has established a clear three-tiered family of models—Pro, Flash, and the new cost-effective Flash-Lite—to give developers options that balance performance, speed, and cost.
This new phase of stability follows earlier criticism from AI governance experts. The rushed rollout of a previous Gemini model prompted one expert from the Center for Democracy and Technology to call it part of “a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market.” To address such concerns, the entire Gemini 2.5 family is built with a controllable “hybrid reasoning” feature, which allows the models to perform deeper, multi-step reasoning before responding. As part of this maturation, Google also updated the pricing for its popular Gemini 2.5 Flash model, simplifying what had been a confusing preview structure.
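In practice, Google exposes this reasoning control to developers as a “thinking budget” in the Gemini API. The sketch below uses the google-genai Python SDK as publicly documented; the specific budget value is illustrative, and support for disabling thinking entirely varies across the 2.5 family.

```python
# Minimal sketch: capping Gemini 2.5 Flash's internal reasoning via a
# "thinking budget" (google-genai SDK; requires a valid API key).
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Plan a three-step approach to packing a carry-on suitcase.",
    config=types.GenerateContentConfig(
        # A budget of 0 disables thinking on Flash; larger values allow
        # deeper, multi-step reasoning at higher latency and cost.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
print(response.text)
```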
An Existential Threat to the Open Web?
While Google promotes the convenience of its new AI features, the move pours fuel on an already heated conflict with news and content publishers. The core fear is that as Google provides more direct answers, users will have less reason to click through to publisher websites, cannibalizing the traffic that is the lifeblood of online media. Recent data suggests these fears are well-founded: 2024 analytics from BrightEdge indicate that while Google’s AI Overviews have boosted content impressions by 49%, click-throughs on the links within them have fallen by 30%.
The real-world impact is already being felt. WSJ reports that since 2022, traffic from organic search to HuffPost’s desktop and mobile websites fell by just over half and by nearly that much at the Washington Post. This has prompted significant financial pushback. A recent study commissioned by German media rights group Corint Media alleges that Google owes the country’s publishers approximately €1.3 billion annually for using their content.
The group’s co-CEO, Markus Runde, stated, “We consider our calculation to be conservative. The actual value that Google derives from journalistic content is likely to be even higher.” Compounding the issue is the limited control publishers have. Testimony in the U.S. v. Google antitrust case revealed that the standard opt-out tool for Google’s general AI training does not prevent the company from using web content to build its Search features. When a Department of Justice lawyer asked if the search division could train on data that publishers had opted out of, Google DeepMind VP Eli Collins gave an answer that sent ripples through the industry: “Correct — for use in search.”
Placing Ads in the Conversation
Google’s push into AI-driven search is not just a technical evolution but a commercial necessity. The company has already confirmed its plan to integrate advertisements directly into its new AI search products. Ads are set to appear “where relevant” within AI Mode’s conversational outputs and as “Sponsored” content in AI Overviews. Dan Taylor, Google’s vice president of global ads, described the long, exploratory queries in AI Mode as an “expansive opportunity to introduce advertisers—and put them in front of consumers in places where they’re open to discovering new things.”
However, this strategy is not without risk. While new conversational ad formats will emerge, they may prove less effective for advertisers: because users have little incentive to click through to a website, clickthrough rates will likely fall, and fewer ad-serving opportunities could mean increased competition for the available slots.
Some advertisers share this concern; to capture attention and pull users away from the conversation, they will need to get creative. For publishers, the introduction of ads into a format that already reduces traffic is a double blow: a MonetizeMore analysis suggests that revenue per thousand impressions could fall by 30-50% for queries that trigger an AI response.
From Blue Links to a Multimodal Companion
Search Live is the latest and most interactive step in a clear strategic progression that was previewed at Google I/O in May. The journey began with text-based AI Overviews and recently expanded with a separate “Audio Overviews” experiment that turns search results into a short, podcast-style summary. The launch of Search Live fulfills a key promise made when the upgraded AI Mode was rolled out more broadly in the U.S. in May.
However, for all its conversational prowess, the technology currently has limitations, as it is entirely cloud-based. Because the AI has no access to a user’s calendar, email, or messages, it cannot yet act as a true personal assistant by performing on-device tasks like sending a text or making a call. In her announcement, Google’s Liza Ma explained that Search Live uses AI Mode’s “query fan-out technique” to break down complex questions and search the web more broadly, enabling new opportunities for exploration.
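Google has not published the internals of that technique, but the description maps onto a familiar pattern: decompose a broad question into narrower sub-queries, search them concurrently, and merge the results for synthesis. The following is a minimal sketch of that pattern only; every name in it (decompose, search_web, fan_out) is invented for illustration and does not reflect Google’s implementation.

```python
# Illustrative sketch of a "query fan-out" pattern. All names are
# hypothetical; Google has not published how AI Mode implements it.
import asyncio


def decompose(query: str) -> list[str]:
    # In AI Mode the model itself derives the sub-queries; they are
    # hard-coded here purely for illustration.
    return [
        f"{query} basics",
        f"{query} common mistakes",
        f"{query} expert tips",
    ]


async def search_web(subquery: str) -> list[str]:
    # Stand-in for a real search backend call.
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"result for: {subquery}"]


async def fan_out(query: str) -> list[str]:
    # Issue all sub-searches concurrently, then flatten the results
    # into a single pool for the model to synthesize an answer from.
    batches = await asyncio.gather(*(search_web(q) for q in decompose(query)))
    return [hit for batch in batches for hit in batch]


if __name__ == "__main__":
    print(asyncio.run(fan_out("packing a suitcase")))
```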
Looking ahead, Google plans to bring even more “Live” capabilities to AI Mode, including camera integration that will let users ask questions about what they are seeing in real time.
Google is rapidly building a powerful, multimodal AI companion that aims to answer any question through text, voice, or visual input. This creates a formidable and convenient user experience, but it does so by intensifying the existential conflict with the publishers whose content has long fueled the open web. The looming battles over monetization and fair compensation for data will likely define the economics of online information for the next decade.