Google is substantially expanding access to its Gemini Nano on-device artificial intelligence model, making it available to third-party Android app developers through a new suite of ML Kit GenAI APIs. This development, detailed in recently published Google developer documentation and slated for a formal unveiling at the Google I/O 2025 conference, aims to embed sophisticated AI functionalities directly into a wider range of Android applications. The move promises users more intelligent and responsive apps capable of tasks like text summarization and image description, all while processing data locally to enhance privacy and enable offline use.
The new APIs are designed to allow developers to harness the power of Gemini Nano for common tasks through a simplified interface, as stated in Google’s official materials. This initiative is built upon AICore, an Android system service that facilitates the on-device execution of foundational AI models. According to Google, this architecture not only enhances app functionality but also bolsters user privacy by keeping data processing local. This represents a significant step up from the earlier, more restricted experimental AI Edge SDK, which offered only text-based Gemini Nano access, and only on Pixel 9 series devices.
Enhanced AI Capabilities Coming to Android Apps
The ML Kit GenAI APIs, currently in a beta phase, will empower developers to integrate several key AI-driven features. Applications will be able to summarize articles or chat conversations into concise bulleted lists, a feature initially supporting English, Japanese, and Korean. A proofreading function aims to refine short content by improving grammar and correcting spelling errors across seven languages, including English, German, and Spanish.
Furthermore, a “Rewrite” capability will allow apps to rephrase short messages in various tones or styles, such as “Elaborate,” “Emojify,” or “Professional,” available in the same seven languages as the proofreading tool. The APIs also introduce support for image input, enabling an image description feature that can generate short textual descriptions of visuals, initially in English. This expansion of on-device AI features positions Google competitively with offerings like Apple Intelligence and Samsung’s Galaxy AI.
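To illustrate how an app might invoke one of these features, the sketch below requests an on-device summary through the Summarization API. The class, builder, and option names mirror Google's published beta documentation (and `await()` assumes the kotlinx-coroutines-guava adapter for ML Kit's futures), but since the API is in beta, exact names and signatures may change:

```kotlin
import android.content.Context
import com.google.mlkit.genai.summarization.Summarization
import com.google.mlkit.genai.summarization.SummarizationRequest
import com.google.mlkit.genai.summarization.SummarizerOptions
import kotlinx.coroutines.guava.await

// Summarize an article into bullet points with Gemini Nano, fully on-device.
suspend fun summarizeArticle(context: Context, articleText: String): String {
    val options = SummarizerOptions.builder(context)
        .setInputType(SummarizerOptions.InputType.ARTICLE)
        .setOutputType(SummarizerOptions.OutputType.THREE_BULLETS)
        .setLanguage(SummarizerOptions.Language.ENGLISH)
        .build()
    val summarizer = Summarization.getClient(options)

    val request = SummarizationRequest.builder(articleText).build()
    // runInference can stream partial results via a callback; here we simply
    // await the completed result.
    return summarizer.runInference(request).await().summary
}
```

Because AICore hosts the model as a shared system service, the app never bundles model weights itself; it only declares the feature and issues requests like the one above.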
Developer Considerations and On-Device Architecture
Developers leveraging these new tools should be aware of their beta status. Google’s documentation explicitly states the API “is not subject to any SLA or deprecation policy,” and that “changes may be made to this API that break backward compatibility.” The underlying AICore system service manages the distribution and execution of GenAI models like Gemini Nano, allowing multiple apps to share a single model instance.
This local processing approach offers benefits such as offline functionality and no per-call server costs for developers. According to Android Developers documentation, “Gemini Nano allows you to deliver rich generative AI experiences without needing a network connection or sending data to the cloud. On-device AI is a great solution for use-cases where low latency, low cost, and privacy safeguards are your primary concerns.”
AICore also enforces a per-app inference quota; exceeding it results in an ErrorCode.BUSY response. A significant current limitation is that GenAI API inference is permitted only while the app is the active foreground application; background usage is not supported. Initial device compatibility, while planned for expansion, is currently focused on high-end smartphones. The list includes Google’s Pixel 9 series, Samsung’s Galaxy S25 line, the Xiaomi 15 series, and other flagship devices from manufacturers such as Honor, Motorola, and OnePlus, highlighting the substantial on-device computational power required.
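These quota and foreground constraints suggest wrapping inference in a small defensive helper. In the sketch below, `GenAiException`, `ErrorCode.BUSY`, and `runInference` follow the shape of the beta documentation, but the backoff policy and helper itself are illustrative assumptions, not Google guidance:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.guava.await

// Illustrative retry wrapper for the per-app AICore inference quota.
// Assumes a GenAiException carrying an errorCode, per the beta docs.
suspend fun runWithBusyRetry(
    summarizer: Summarizer,
    request: SummarizationRequest,
    maxAttempts: Int = 3
): String {
    repeat(maxAttempts) { attempt ->
        try {
            // Must run while the app is in the foreground; background
            // inference is not supported in the current beta.
            return summarizer.runInference(request).await().summary
        } catch (e: GenAiException) {
            if (e.errorCode != ErrorCode.BUSY) throw e
            // Quota exceeded: back off briefly, then retry.
            delay(250L * (attempt + 1))
        }
    }
    error("Gemini Nano still busy after $maxAttempts attempts")
}
```

A pattern like this keeps quota pressure from surfacing to users as hard failures, while non-quota errors still propagate normally.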
Evolution of Gemini Nano On-Device
The broader rollout for third-party developers builds upon Google’s earlier efforts to integrate Gemini Nano into its own ecosystem. For instance, in May 2024, Google announced the integration of its compact AI model into the Chrome desktop client, with the goal of powering features like “Help me write” directly in the browser and enhancing the Chrome DevTools console with AI-driven debugging.
The Gemini family of models—Nano, Pro, and Ultra—was first introduced by Google in December 2023. Gemini Nano was specifically engineered for efficient on-device operations, with an early Google Blog post from May 2024 noting that Gemini Nano with Multimodality would enable Pixel devices to process text, sights, sounds, and spoken language directly on the device.
The Google I/O 2025 session description for the expansion through the ML Kit GenAI APIs reads: “Learn to build on-device gen AI with Gemini Nano, with a priority on user privacy and offline functionality. We’ll talk about how to think through on-device use-cases for your app, and introduce a new set of generative AI APIs that harness the power of Gemini Nano.” The ML Kit GenAI APIs mark a pivotal step in making these powerful, privacy-centric AI capabilities more widely accessible across the Android platform.