Google on Wednesday released its new flagship artificial intelligence (AI) model, an experimental version of Gemini 2.0 that it is positioning as a rival to DeepSeek and the latest models from OpenAI.

Google says the Gemini 2.0 Pro Experimental model “has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we’ve released so far.”

The model features a 2 million token context window that “enables it to comprehensively analyze and understand vast amounts of information,” according to Google. That large window lets the model take in and reference extensive material in a single prompt, improving its responses by grounding them in additional context.

On Wednesday, Google also announced the general release of its “reasoning” model Gemini 2.0 Flash Thinking and the introduction of Gemini 2.0 Flash-Lite, a cost-efficient model that Google claims outperforms Gemini 1.5 Flash on most benchmarks. The models, designed to support developers and businesses, are available in Google AI Studio and Vertex AI.
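
For developers, access to these models goes through the Gemini API. The snippet below is a minimal sketch using the google-generativeai Python SDK; the model identifier shown ("gemini-2.0-pro-exp-02-05") is an assumption for illustration and may differ from the name Google ultimately exposes in AI Studio or Vertex AI.

import google.generativeai as genai

# Authenticate with an API key generated in Google AI Studio.
genai.configure(api_key="YOUR_API_KEY")

# Model name is assumed; check Google AI Studio for the current identifier.
model = genai.GenerativeModel("gemini-2.0-pro-exp-02-05")

# Send a text prompt; on release these models accept multimodal input with text output.
response = model.generate_content("Summarize the differences between Gemini 2.0 Pro and Flash-Lite.")
print(response.text)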

“All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months,” Koray Kavukcuoglu, chief technology officer of Google DeepMind, wrote in a blog post. Neither DeepSeek’s R1 model nor OpenAI’s o3-mini model can accept multimodal inputs.

Gemini 2.0 Pro comes nearly a year after the company’s Gemini series of AI large language models (LLMs) stumbled out of the gate with image generation snafus.

Since then, every major AI player has produced a newer, more-advanced generative AI model in a dizzying game of leapfrog that has led to LLM creators “commoditizing themselves” into lower and free tiers of their products, according to industry experts.

“The GenAI LLM market risks becoming oversaturated with announcements of the latest reasoning models, at a pace that most end users can’t keep up with,” Mitch Ashley, vice president and practice lead for DevOps and application development at The Futurum Group, said in an email. “We can’t expect every user to become knowledgeable – much less an expert – in each of the seemingly endless streams of new LLMs. Google’s done a good job integrating Gemini models into the user experience. The more seamless the user experience, the faster we will see adoption by broader market segments.”

Ashley added, “Soon, it will be the agents and the automations they perform that will be sticky with customers. Rebuilding a bevy of agents to switch models or AI companies will be more expensive than it’s worth. That is, until AI agents can modify other agents and convert them for you.”
