OpenAI ChatGPT

OpenAI is ramping up capabilities in its ChatGPT chatbot with the introduction of GPT-4o, its latest model, which expands access to GPT-4 to everyone and adds voice into the mix as the company looks to make its flagship generative AI model more attractive to users and developers and to stay ahead in a ferociously competitive market.

The company ended weeks of speculation by unveiling GPT-4o – the “o” stands for “omni,” reflecting the model’s new voice capabilities, which join its existing text and vision features – at a livestreamed event from OpenAI’s headquarters in San Francisco on Monday.

“This is the first time that we are really making a huge step forward when it comes to the ease of use,” OpenAI CTO Mira Murati said during the event. “This is incredibly important, because we’re looking at the future of interaction between ourselves and machines. We think that GPT-4o is really shifting that paradigm in the future of collaboration.”

ChatGPT is seeing its capabilities around text, images and audio vastly improve. Where the chatbot previously had some voice features that could be used with text and images, GPT-4o gives ChatGPT a feel more like that of a digital assistant. In demonstrations, OpenAI researchers Mark Chen and Barrett Zoph showed a much more natural interaction with ChatGPT under the new model: asking it questions, having it tell a bedtime story – and having it change voices while doing so – interrupting it in mid-sentence, showing it reading facial expressions, and using it as a translator to facilitate a discussion in both Italian and English.

It also can view an image and answer questions related to it. It can view a photo of a menu in one language and translate it into another when asked, or give information and recommendations about the food, OpenAI wrote in a blog post.

The researchers also demonstrated GPT-4o writing code and solving math problems, putting it in close competition with Microsoft’s GitHub Copilot for the developer community.

A Fast-Growing Market

Such competition is only heating up, with big-name vendors like Google, Amazon and Meta also rushing to grow their generative AI capabilities, as are smaller pure-play companies like Anthropic. Microsoft has invested more than $10 billion in OpenAI and uses its technology across its portfolio while also developing its own capabilities.

Given the rapid growth of the market, the expanding competitive field isn’t surprising. Analysts with market research firm Statista are expecting the generative AI market to grow from $66.62 billion this year to almost $207 billion by 2030.

OpenAI’s new model is faster than its predecessors, responding to audio inputs in as little as 232 milliseconds and averaging 320 milliseconds, which the company said is “similar” to human response time in conversations.

“It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API,” OpenAI wrote. “GPT-4o is especially better at vision and audio understanding compared to existing models.”

The company noted that with GPT-4, users could use Voice Mode to talk with ChatGPT with a latency of 5.4 seconds. Voice Mode essentially chains three separate models: a simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and puts out text, and a third simple model converts that text back to audio.

This pipeline limits what GPT-4 can do. It loses information because the main model can’t observe tone, multiple speakers, or background noise, for example.
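The three-stage cascade described above can be sketched roughly as follows. This is an illustrative mock-up, not OpenAI’s actual code: the function names and return values are placeholders that simply show where information is lost and where latency accumulates.

```python
# Hypothetical sketch of the three-model Voice Mode cascade.
# All functions below are illustrative stand-ins, not OpenAI APIs.

def transcribe(audio: bytes) -> str:
    """Stage 1: a simple speech-to-text model. Tone, speaker identity,
    and background noise present in the audio are discarded here."""
    return "tell me a bedtime story"  # simulated transcript

def text_model(prompt: str) -> str:
    """Stage 2: GPT-3.5 or GPT-4, which only ever sees plain text."""
    return f"Once upon a time... (reply to: {prompt})"  # simulated reply

def synthesize(text: str) -> bytes:
    """Stage 3: a simple text-to-speech model."""
    return text.encode("utf-8")  # stand-in for generated audio

def voice_mode(audio: bytes) -> bytes:
    # Each hop adds latency, and any non-textual signal in the input
    # is already gone after stage 1 -- the limitation OpenAI describes.
    return synthesize(text_model(transcribe(audio)))
```

Because the middle model never sees the raw audio, no amount of tuning at stage 2 can recover tone or speaker cues – which is the motivation for training a single end-to-end model instead.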

“With GPT-4o, we trained a single new model end-to-end across text, vision and audio, meaning that all inputs and outputs are processed by the same neural network,” OpenAI wrote. “Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.”

The plan is to launch an alpha version of a new Voice Mode with more capabilities soon, according to the company.

OpenAI also improved ChatGPT’s performance in 50 languages.

GPT-4 for Everybody

With GPT-4o, GPT-4 becomes available to all users, including those using OpenAI’s free service. Murati noted that GPT-4o, in the company’s API, is twice as fast as GPT-4 and comes in at half the price.

“We know that these models get more and more complex, but we want the experience of interaction to actually become more natural, easy, and for you not to focus on the UI at all, but just focus on the collaboration with ChatGPT,” she said when introducing a new ChatGPT UI.

OpenAI is beginning to roll out GPT-4o to ChatGPT Plus and Team users, with availability for Enterprise users coming soon. ChatGPT Free users also will begin to get access, though with limits on use. ChatGPT Plus users will have five times the message limit of Free users, with Team and Enterprise users having even higher limits.

However, Murati said features will be released iteratively as the company continues testing for safety, particularly for the new audio capabilities.

“Developers can … now access GPT-4o in the API as a text and vision model,” OpenAI wrote. “We plan to launch support for GPT-4o’s new audio and video capabilities to a small group of trusted partners in the API in the coming weeks.”
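For developers, a text-and-vision request to GPT-4o through the Chat Completions API might look like the sketch below, echoing the menu-translation example from earlier in the article. The image URL is a placeholder, and actually sending the request requires the `openai` package and an API key; here we only build the request payload.

```python
# Sketch of a text-and-vision Chat Completions request for GPT-4o.
# The image URL is a placeholder; replace it with a real menu photo.
payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Translate this menu into English."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/menu.jpg"}},
            ],
        }
    ],
}

# With the official SDK and an OPENAI_API_KEY set, this would be sent as:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**payload)
#   print(response.choices[0].message.content)
```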

Along with GPT-4o and the enhanced UI, OpenAI also released a ChatGPT desktop version for macOS systems.