
As powerful as generative artificial intelligence (AI) has become, organizations still need to find ways to invoke it within the context of either an existing business process or an entirely new one. To address that issue, Avaamo today launched a low-code framework dubbed Avaamo LLaMB that promises to make it simpler for either professional or citizen developers to build applications for specific use cases.

Avaamo has already built generative AI applications for use cases involving call centers, human resources, IT, procurement and patient care. Avaamo LLaMB makes it possible to either build additional applications from scratch or more easily customize the ones the company provides. Organizations already using Avaamo applications include Volkswagen, Penske, UCHealth, Ericsson and Duke Health.

The framework makes use of a multimodal architecture to invoke various types of models, including large language model (LLM) services provided by Microsoft and Amazon Web Services (AWS).
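Avaamo has not published LLaMB's programming interface, so purely as an illustration of the general pattern described above, a minimal Python sketch of routing requests to different hosted LLM providers behind a common interface might look like this (all class and function names are hypothetical, not Avaamo's API):

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface over different hosted LLM services (hypothetical)."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class AzureOpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A call to a Microsoft-hosted model would go here.
        return f"[azure] response to: {prompt}"


class BedrockProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A call to an AWS-hosted model would go here.
        return f"[aws] response to: {prompt}"


class ModelRouter:
    """Picks whichever provider has been registered for a given use case."""

    def __init__(self) -> None:
        self._providers: dict[str, LLMProvider] = {}

    def register(self, use_case: str, provider: LLMProvider) -> None:
        self._providers[use_case] = provider

    def complete(self, use_case: str, prompt: str) -> str:
        return self._providers[use_case].complete(prompt)


router = ModelRouter()
router.register("hr", AzureOpenAIProvider())
router.register("it_helpdesk", BedrockProvider())
print(router.complete("hr", "Summarize our parental leave policy."))
```

The point of such an abstraction is that the application code names a use case rather than a specific vendor, which is the kind of flexibility a low-code framework layered over multiple model providers would need.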

Rather than having to create and store libraries of prompts, Avaamo LLaMB makes it possible to create applications that, depending on the use case, safely invoke the AI model best suited to the task, because no data is ever allowed to be retained by the provider of the AI model, says Avaamo CEO Ram Menon.

There is also a data moderator tool that scans for anomalies indicative of low-quality responses from AI models and then provides suggestions for how to generate more accurate outputs. That’s critical because generative AI platforms are always going to be prone to hallucinations, notes Menon. “LLMs are designed to provide probabilistic answers,” he says.
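Avaamo has not disclosed how its data moderator works; as a rough sketch of the kind of check it describes, and assuming a retrieval-style setup in which an answer should be grounded in retrieved source passages, a few simple heuristics might look like this:

```python
def flag_low_quality(response: str, source_passages: list[str]) -> list[str]:
    """Illustrative heuristics for flagging responses that may need review.

    This is a sketch of the general idea only, not Avaamo's implementation.
    """
    issues = []
    if not response.strip():
        issues.append("empty response")
    elif len(response.split()) < 5:
        issues.append("suspiciously short answer")
    # Naive grounding check: does the answer share any content words
    # with the retrieved source passages?
    answer_words = {w.lower() for w in response.split() if len(w) > 4}
    source_words = {w.lower() for p in source_passages for w in p.split()}
    if answer_words and not answer_words & source_words:
        issues.append("answer shares no key terms with source material")
    return issues


checks = flag_low_quality(
    "Employees receive 12 weeks of paid parental leave.",
    ["The parental leave policy grants 12 weeks of paid leave."],
)
print(checks or "no issues flagged")
```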

Finally, Avaamo LLaMB makes it possible to invoke more than 1,000 connectors to the existing system-of-record applications that organizations rely on to manage multiple processes, he adds.
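The connector catalog itself is not public, but the general pattern of wiring an AI application into a system of record can be sketched as follows; the interface and the toy ticketing system below are hypothetical stand-ins, not Avaamo's connectors:

```python
from typing import Protocol


class RecordSystemConnector(Protocol):
    """Hypothetical shape of a connector to a system of record."""

    def fetch(self, query: str) -> list[dict]: ...
    def update(self, record_id: str, fields: dict) -> None: ...


class TicketingConnector:
    """Toy in-memory stand-in for an IT ticketing system."""

    def __init__(self) -> None:
        self._tickets = {"T-100": {"status": "open", "summary": "VPN outage"}}

    def fetch(self, query: str) -> list[dict]:
        return [t for t in self._tickets.values()
                if query.lower() in t["summary"].lower()]

    def update(self, record_id: str, fields: dict) -> None:
        self._tickets[record_id].update(fields)


# An AI application could call a connector like this after a model
# decides a ticket should be closed.
connector: RecordSystemConnector = TicketingConnector()
print(connector.fetch("vpn"))
connector.update("T-100", {"status": "resolved"})
```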

Collectively, rather than providing a chemistry set in the form of AI models and application programming interfaces (APIs), Avaamo LLaMB makes it possible to harness AI models in a way that a business can operationalize, says Menon.

It may be a while before organizations fully harness AI to drive the next wave of automation. Most are still evaluating how best to apply AI across a wide range of processes that require high degrees of accuracy. It’s one thing to employ a general-purpose LLM such as ChatGPT to draft an email or summarize a report. It can be quite another thing to rely on generative AI to drive a process that requires 100% accuracy.

Each organization will need to determine how and when to apply various types of AI models, but Avaamo is betting that a higher level of low-code abstraction will be required to ultimately weave AI models together. Organizations are going to have intelligent AI agents for specific tasks that will be combined as needed to drive a workflow in ways that are much more sophisticated than legacy robotic process automation (RPA) platforms, notes Menon.

Regardless of approach, there’s clearly a need to master AI quickly to remain competitive. The challenge, as always, is harnessing those capabilities in a way an organization can practically consume.
