
Nearly two years after it shocked the tech world with the debut of GPT-4, OpenAI on Thursday released its long-gestating follow-up artificial intelligence (AI) model.
OpenAI’s GPT-4.5, code-named Orion, may lack the impact of its predecessor, but delivers a “more natural” experience, company executives said.
“What sets the model apart is its ability to engage in warm, intuitive, naturally flowing conversations, and we think it has a stronger understanding of what users mean when they ask for something,” Mia Glaese, vice president of research at OpenAI, told the New York Times.
Though OpenAI says Orion’s heftier size gives it “a deeper world knowledge” and “higher emotional intelligence”, the model fell short on several benchmarks against newer “reasoning” models from Chinese AI startup DeepSeek and from Anthropic.
OpenAI also acknowledged in a white paper that it does not consider GPT-4.5 a frontier model. At launch, GPT-4.5 supports file and image uploads and ChatGPT’s canvas tool, but does not support ChatGPT’s two-way voice mode.
Still, Orion represents OpenAI’s largest model to date, trained with more computing power and data than any of the company’s previous releases. ChatGPT Pro subscribers, who pay $200 monthly, already have access to GPT-4.5 in ChatGPT as part of a research preview. Developers on paid tiers of OpenAI’s API can also use GPT-4.5.
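For developers, access runs through OpenAI’s standard chat completions API. The snippet below is a minimal sketch of what such a call might look like; it assumes the research preview is exposed under the model identifier "gpt-4.5-preview" and that an API key is set in the environment, rather than reflecting OpenAI’s official documentation.

```python
# Minimal sketch of calling GPT-4.5 through OpenAI's chat completions API.
# Assumes the OpenAI Python SDK is installed (pip install openai), that the
# OPENAI_API_KEY environment variable is set, and that the research-preview
# model is exposed as "gpt-4.5-preview" (an assumption for illustration).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed identifier for the GPT-4.5 preview
    messages=[
        {"role": "user", "content": "In one sentence, what sets GPT-4.5 apart?"},
    ],
)

print(response.choices[0].message.content)
```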
As significant as its role as a sequel to GPT-4, the Orion large language model (LLM) is considered a bridge to a new era of chatbots from OpenAI. That next generation will be capable of chain-of-thought reasoning, in which the bot spends more time working through a question before answering, much as a person would.
Last fall, OpenAI unveiled its o1 model, capable of reasoning through math, science, and coding tasks, as part of a wider effort to expand what AI can handle and to give the company an edge over rivals DeepSeek, Alphabet Inc.’s Google and Meta Platforms Inc.
Reasoning models could eventually usher in an age in which computer programmers routinely code with AI. But, as with other AI models, they are prone to making mistakes, or hallucinating.