
MOUNTAIN VIEW, Calif. — The theme was simple at Google I/O 2025 on Tuesday: AI, AI, and — in case you missed it — more AI.

Artificial intelligence (AI) is seeping into every pore of Google’s products. During a two-hour product marathon, Google parent Alphabet Inc. opened a fire hose of news around Gemini and its multimodal AI system, Project Astra. Case in point: A new tab in Google Search, called AI Mode, lets users in the U.S. post longer queries (up to 10 at a time) and follow-up questions. A custom image-generation model trained specifically for fashion offers visual try-ons through Search.


The smorgasbord of news was at times a bit overwhelming, as Google execs ping-ponged from one cool consumer app to another with no particular order or overarching narrative. But by the time the dust had settled at the outdoor Shoreline Amphitheatre, industry experts had concluded that Google had gotten an AI makeover and that Gemini had been recast as an AI operating system.

“I learned that today is the start of Gemini season,” Alphabet CEO Sundar Pichai joked during his keynote speech at the Shoreline Amphitheatre here Tuesday morning. “Not really sure what the big deal is. Every day is Gemini season here at Google.”

Gemini Ultra (U.S. only), Google said, promises the “highest level of access” to Google’s AI-powered apps and services, for about $250 a month. It includes Google’s Veo 3 video-generating AI model (now available), a new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode, which hasn’t launched.

Gemini 2.5 Pro Deep Think, an enhanced reasoning mode, was teased. Google didn’t provide details, but the thinking is that Deep Think could match, if not exceed, OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a problem.

Google’s goal is to make Gemini a “world model” that, DeepMind CEO Demis Hassabis said, can make “plans and imagine new experiences by simulating aspects of the world just like the brain does.”

“A universal AI system will perform everyday tasks for us,” Hassabis said on stage. “It will take care of mundane admin and surface delightful new recommendations, making us more productive and enriching our lives.”

Among the fusillade of news:

— Google entered the glasses fray: Under its Android XR project, it said it is partnering with Xreal on the first spectacles to run an augmented-reality version of its operating system. Project Astra, developed within Google DeepMind to showcase near real-time, multimodal AI capabilities, will appear in glasses via partners Samsung and Warby Parker. Google, however, did not share a launch date.

— Google showed something called “Project Mariner,” which adds agentic AI functionality to the Gemini app for tasks like searching for an apartment.

— Google Beam, an AI-first video communication platform, uses multiple cameras to capture different angles and render 2-D video streams into 3-D visuals. Live speech translation in Google Meet supports English-to-Spanish conversation (and vice versa); a demo highlighted a vacation-rental scenario.

— Veo 3 and the forthcoming Imagen 4 AI image generator will be used to supercharge Flow, Google’s AI-powered video tool geared for filmmakers.

— Google Cloud said enhancements to its Agent2Agent (A2A) interoperability protocol allow agents across systems and platforms to communicate. The protocol is gaining adoption through partners such as Microsoft and PayPal, according to Google; a minimal sketch of the idea follows this list.

— Google AI Studio can help users code just by typing text directions via Jules, a new asynchronous coding agent now in public beta.
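For readers curious what agent-to-agent interoperability looks like in practice, here is a minimal, hypothetical Python sketch. Google’s published A2A materials describe agents advertising their capabilities via a machine-readable “agent card” and exchanging tasks as JSON-RPC messages over HTTP; the field and method names below are simplified for illustration and may not match the spec exactly.

```python
import json

# Illustrative sketch of the Agent2Agent (A2A) idea: one agent publishes an
# "agent card" describing what it can do; a client agent sends it a task as a
# JSON-RPC message over HTTP. Names here are simplified, not spec-exact.

agent_card = {
    "name": "expense-agent",
    "description": "Files and reconciles expense reports",
    "url": "https://agents.example.com/expense",  # hypothetical endpoint
    "skills": [{"id": "file_expense", "description": "File an expense report"}],
}

task_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",  # A2A exposes task-oriented methods along these lines
    "params": {
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "File my $42 taxi receipt."}],
        }
    },
}

# A real client would POST task_request to the endpoint in the agent card;
# here we just show the wire format.
print(json.dumps(task_request, indent=2))
```

The point of the protocol is that neither side needs to know how the other is built; the agent card and the message format are the whole contract.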

Google’s new asynchronous coding agent, Jules, set to compete with OpenAI’s Codex and Microsoft’s Copilot, could upend the entire tech space, says David Bader, distinguished professor and director of the Institute for Data Science at New Jersey Institute of Technology.

“With Google’s Jules now publicly available in beta, OpenAI’s recently launched Codex, and Microsoft’s enhanced GitHub Copilot, we’re witnessing a fundamental shift from passive code suggestion to autonomous task execution,” Bader said in an email. “These tools aren’t just completing code anymore — they’re planning multi-step solutions, spawning virtual environments, and managing entire development workflows with minimal supervision.”
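Bader’s distinction between passive code suggestion and autonomous task execution boils down to a control loop. The toy Python sketch below illustrates that plan-execute-verify pattern; it is a hypothetical illustration, not how Jules, Codex or Copilot is actually built, and every function name in it is invented for the example.

```python
import subprocess
import tempfile
from pathlib import Path

# Toy plan -> execute -> verify loop. Autocomplete stops at suggesting code;
# an agent decomposes the task, carries out each step, and checks its own work
# in an isolated environment before reporting back.

def plan(task: str) -> list[str]:
    """Stand-in for a model call that decomposes a task into steps."""
    return [f"write a script for: {task}", "run the checks"]

def execute(step: str, workdir: Path) -> None:
    """Stand-in for a model call that edits files to carry out one step."""
    (workdir / "solution.py").write_text("def add(a, b):\n    return a + b\n")

def verify(workdir: Path) -> bool:
    """Run checks in a sandboxed directory, like an agent's virtual env."""
    result = subprocess.run(
        ["python", "-c", "from solution import add; assert add(2, 3) == 5"],
        cwd=workdir, capture_output=True,
    )
    return result.returncode == 0

with tempfile.TemporaryDirectory() as tmp:
    workdir = Path(tmp)
    for step in plan("add two numbers"):
        execute(step, workdir)
    print("task verified" if verify(workdir) else "retry with a new plan")
```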

The two-day summit here anchored what was arguably the biggest week in AI news this year, with announcements from Google, Microsoft Corp., Dell Inc., IBM Corp.’s Red Hat Inc. and SAP, among others.

The obvious parallel is with Microsoft, which on Monday unveiled tools and platforms intended to propel an open AI ecosystem it calls the “agentic web,” built on the Model Context Protocol (MCP). Microsoft and Google hosted their flagship developer conferences simultaneously, each courting developers to build AI apps and use cases on its platform.
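For a sense of what building on MCP involves, here is a minimal sketch of an MCP tool server, assuming the FastMCP interface from the official `mcp` Python SDK; the `word_count` tool is a made-up example.

```python
# Minimal MCP tool server sketch using the FastMCP interface from the
# official `mcp` Python SDK (pip install mcp). The tool is invented for
# illustration; MCP's job is to describe it in a standard way so any
# MCP-capable agent or app can discover and call it.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Serves the tool over stdio, the transport most MCP hosts use locally.
    mcp.run()
```

Because the protocol, not the vendor, defines how tools are described and invoked, the same server can in principle back agents on either company’s platform.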

“To this point Google is not just watching AI from a treadmill…but instead in our opinion has been one of the most innovative and helped close the gap on AI over the past year after starting way behind Microsoft and OpenAI,” Wedbush Securities analyst Dan Ives said in a note to investors Tuesday before Google I/O kicked off. “We expect Sundar to address the future of Google’s AI strategy in his keynote later today and unveil innovations on AI coming to the Google and Gemini platforms for developers.”

Next up: Apple Inc. and its WWDC developer conference next month in Cupertino, Calif., where it will open its AI models to third-party apps for the first time, according to a Bloomberg report.
