By 2030 our lives could be very different. The way we work and learn is set to be revolutionized by artificial intelligence (AI), with the technology predicted to add $15.7 trillion to the global economy.
The new AI taking the world by storm is OpenAI’s language processing tool, ChatGPT, a generative AI with constantly evolving features. It’s quickly become the hottest tool in the technical community. When combined with other technologies, ChatGPT has the potential to be extremely powerful. Thomas Kunnumpurath, vice president of systems engineering for America at Solace, highlights how the combination of EDA (Event-Driven Architecture) and ChatGPT can help organizations stay on top of business and ahead of the competition.
ChatGPT is becoming more prominent in our everyday lives. It can help to quickly draft a blog or an email and even provide inspirational ideas for your next article. It's no surprise that it rapidly became the fastest-growing consumer application ever: just two months after its launch, ChatGPT reached 100 million monthly active users in January, according to a UBS report.
EDA Takes ChatGPT to New Heights
For all their worth, ChatGPT and AI have limitations and constraints holding them back from reaching their full potential. But with support from EDA, they can excel. EDA links the software that "publishes" events with the systems that register interest in the data by "subscribing" to topics. Applications built this way react to events as they happen rather than polling for updates. With this additional layer, requests can be buffered and serviced asynchronously, helping to reduce energy consumption, increase responsiveness and open new e-commerce opportunities for B2B and B2C businesses.
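The publish/subscribe relationship described above can be sketched in a few lines. This is a toy in-memory broker for illustration only, not Solace's product or any real EDA platform; the topic name and event payload are invented:

```python
from collections import defaultdict

class EventBroker:
    """Minimal in-memory broker: publishers send events to topics,
    and every subscriber to a topic receives each event on it."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to every interested subscriber.
        for callback in self._subscribers[topic]:
            callback(event)

# Two services register interest; one publish reaches both.
broker = EventBroker()
received = []
broker.subscribe("orders/created", lambda e: received.append(("billing", e)))
broker.subscribe("orders/created", lambda e: received.append(("shipping", e)))
broker.publish("orders/created", {"order_id": 42})
```

The key point is the decoupling: the publisher never names its consumers, so new subscribers can be added without changing the publishing code.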
ChatGPT Achieves the Next Level With EDA – 5 Reasons Why
- Gain real-time responses with automated answers
Today ChatGPT operates in a “request/reply” way. Ask and ye shall receive, you might say. So now imagine if ChatGPT could proactively send you something it knows you’d be interested in.
For example, say you use ChatGPT to summarize and note action items from a Zoom meeting with a dozen participants. Instead of each participant raising a query, EDA would allow ChatGPT to send the notes to all attendees at the same time, including those who missed the meeting. Everyone would be automatically and instantly up-to-date on meeting outcomes, and ChatGPT would carry significantly less load, since it proactively sends one message to a dozen recipients instead of satisfying a dozen separate request/reply interactions over time, thereby improving service levels for users.
Any group activity that needs the same ChatGPT-generated suggestions can benefit from this capability; teams working jointly on a codebase, for instance. Rather than ChatGPT suggesting changes and improvements to each developer individually in their IDE, each IDE would "subscribe" to suggestions, and the underlying EDA technology would push them out to all subscribed developers when they open the codebase.
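The economics of this fan-out pattern can be shown with a small sketch. Here `summarize` is a hypothetical stand-in for a single call to a summarization model, and the attendee names are invented; the point is that one model call serves every subscriber:

```python
def summarize(transcript):
    # Hypothetical stand-in for one call to a summarization model.
    return f"Summary of {len(transcript.split())} words"

class MeetingNotesTopic:
    """One topic; every subscribed attendee gets each published message."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, attendee):
        self.subscribers.append(attendee)

    def publish(self, notes):
        # Fan out: no additional model calls, just delivery.
        return {attendee: notes for attendee in self.subscribers}

topic = MeetingNotesTopic()
for attendee in ["ana", "bo", "chen"]:
    topic.subscribe(attendee)

notes = summarize("we agreed to ship the beta on friday")  # one model call
delivered = topic.publish(notes)  # three deliveries, zero extra calls
```

With request/reply, three attendees would have meant three model invocations; with publish/subscribe, the expensive step runs once.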
- An energy efficient future for ChatGPT
ChatGPT is very resource-intensive, and therefore expensive, from a processing/CPU perspective, and requires special chips called graphics processing units (GPUs). And it uses quite a lot of them. The extensive GPU fleet (now estimated at upwards of 28,936 GPUs) required to train the ChatGPT model and process user queries incurs significant costs, estimated at $0.11 to $0.36 per query.
And let’s not overlook the environmental costs of the model. The high power consumption of GPUs contributes to energy waste, with data scientists estimating ChatGPT’s daily carbon footprint at 23.04 kgCO2e, comparable to other large language models such as BLOOM.
However, the report explains “the estimate of ChatGPT’s daily carbon footprint could be too high if OpenAI’s engineers have found some smart ways to handle all the requests more efficiently.” So, there is clearly room for improvement on carbon output.
- Avoid ChatGPT outages by staggering requests
ChatGPT needs to handle a high volume of incoming requests from users. The popularity, rapid growth, and unpredictability of ChatGPT mean it is frequently overwhelmed as it struggles to keep up with demand that can be extremely volatile and, as we call it, ‘bursty’. Recent ChatGPT outages indicate how saturated the system is becoming as it struggles to rapidly scale up to meet its ever-increasing traffic and compete with new rivals such as Google Bard. So where does EDA come in?
In the event of a ChatGPT overload, implementing EDA can buffer requests and service them asynchronously across multiple event-driven microservices as the ChatGPT service becomes available.
With decoupled services, if one service fails, it does not cause the others to fail.
The event broker, a key component of event-driven architecture, is a stateful intermediary that acts as a buffer, storing events and delivering them when the service comes back online. Because of this, new service instances can be added quickly without causing downtime for the whole system, improving both availability and scalability.
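The buffering behavior described above can be sketched as follows. This is a simplified illustration under the assumption of a single queue and in-order delivery, not an implementation of any real broker:

```python
from collections import deque

class BufferingBroker:
    """Stateful intermediary: queues requests while the downstream
    service is unavailable, then drains the backlog when it recovers."""

    def __init__(self):
        self.queue = deque()
        self.service_up = False
        self.processed = []  # stand-in for the downstream service's work

    def publish(self, request):
        # Requests are always accepted; nothing is dropped during an outage.
        self.queue.append(request)
        self._drain()

    def set_service_up(self, up):
        self.service_up = up
        self._drain()

    def _drain(self):
        # Deliver buffered requests in order while the service is available.
        while self.service_up and self.queue:
            self.processed.append(self.queue.popleft())

broker = BufferingBroker()
broker.publish("req-1")       # service down: buffered, not lost
broker.publish("req-2")
broker.set_service_up(True)   # service recovers: backlog drained in order
```

Because publishers only ever talk to the broker, a downstream outage is invisible to them: their requests wait in the buffer instead of failing.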
- Get to know your customers better – as ChatGPT finds a home in business operations
AI plays a critical role in the e-commerce marketplace. In fact, the e-commerce AI market is projected to reach $45.72 billion by 2032. So, it’s no surprise leading e-commerce players are trying to figure out how to integrate ChatGPT into their business operations. Shopify, for instance, has developed a shopping assistant with ChatGPT, capable of recommending products to users by analyzing their search engine queries.
EDA has the potential to enhance the shopping experience even further and help B2C and B2B businesses learn more about their customers. By tracking key events at high volume from e-commerce platforms, businesses can understand patterns in customer behavior, such as which items are the most profitable in certain regions and which factors influence purchasing decisions. This information can then be sent to a datastore for the ChatGPT machine learning model to predict customer behavior and make personalized product recommendations.
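A minimal sketch of that event-tracking step might look like the following. The event fields and values are invented for illustration; in production the events would arrive over an event broker rather than sit in a Python list, and the aggregates would land in a datastore feeding the model:

```python
from collections import Counter

# Hypothetical stream of e-commerce events.
events = [
    {"type": "purchase", "region": "EMEA", "item": "shoes", "revenue": 80},
    {"type": "purchase", "region": "EMEA", "item": "shoes", "revenue": 80},
    {"type": "purchase", "region": "APAC", "item": "hat", "revenue": 20},
    {"type": "view", "region": "APAC", "item": "shoes", "revenue": 0},
]

def revenue_by_region_item(stream):
    """Aggregate purchase revenue per (region, item) pair."""
    totals = Counter()
    for e in stream:
        if e["type"] == "purchase":
            totals[(e["region"], e["item"])] += e["revenue"]
    return totals

totals = revenue_by_region_item(events)
top = totals.most_common(1)[0]  # most profitable (region, item) pair
```

Aggregates like these are exactly the "patterns in customer behavior" a recommendation model can be trained against.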
- Improve ChatGPT response time with an event mesh
Implementing an event mesh is the answer for ChatGPT, and the applications built on it, as they serve a global user base. Capable of managing demand and efficiently distributing data, it’s a match made in heaven.
Give users an improved experience and save on compute and network resources by adding an event mesh. An event mesh is made of an army of event brokers forming an architecture layer, allowing events happening in one application to be transferred to any other application. So, instead of having application logic filter your ChatGPT results for relevant applications, data can be sent on demand directly to relevant subscribers.
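A toy model of an event mesh can make the idea concrete. These linked brokers and topic names are invented for illustration and omit everything a real mesh handles (routing tables, security, persistence); the point is that a subscription on one broker is reachable from a publish on another:

```python
from collections import defaultdict

class MeshBroker:
    """One node in a simplified event mesh: delivers events to local
    subscribers, then forwards each event to peer brokers exactly once."""

    def __init__(self, name):
        self.name = name
        self.peers = []
        self.subs = defaultdict(list)  # topic -> local callbacks

    def link(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, event, _seen=None):
        seen = _seen if _seen is not None else set()
        if self.name in seen:
            return  # already visited this broker; avoid loops
        seen.add(self.name)
        for cb in self.subs[topic]:
            cb(event)
        for peer in self.peers:
            peer.publish(topic, event, seen)

us = MeshBroker("us")
eu = MeshBroker("eu")
us.link(eu)

got = []
eu.subscribe("chat/replies", got.append)
us.publish("chat/replies", "answer")  # published in the US, delivered in the EU
```

The publisher needs no knowledge of where subscribers live; the mesh routes the event across brokers to wherever interest was registered.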
EDA Opens the Door for ChatGPT to Reach its Full Potential
Though still in its infancy, ChatGPT will only get bigger and more powerful, with an ever-growing user base and a constant stream of new features. With EDA helping to curb its energy consumption, reduce outages, and streamline the delivery of answers, the potential it could bring to B2B and B2C businesses is like no other.