Edge AI applications are all the rage. The problem is, everyone seems to have got the meaning slightly wrong.

When combined with applications, AI becomes much more complex and, as a result, often misunderstood. Some broadly describe applications that use AI as “AI applications” and put them in a bucket of their own. Others argue that AI is only a bolted-on element, and define the apps more specifically as “AI-powered applications”.

A pet peeve of Guy Currier, CTO of Visible Impact, a Futurum Group unit that handles the research firm’s B2B marketing, is this very familiar catchphrase, “AI applications”.

“There is no such thing as an ‘AI application’,” he remarked during a talk on the subject at the Edge Field Day event back in September. “There are applications that contain AI services, and what you are developing when you’re developing AI is a service that goes into the application.”

“The usual example I give is what everyone thinks, which is a chat application. [A chat application] uses AI to take prompts and provide responses,” he added.

This tiny nuance blurs understanding and frequently derails the approach many enterprises take to AI adoption.

“We tend to think about it pretty simplistically – that we are getting the talent and the developers and the integrators and so forth, to either create or adopt. But actually, that’s really only the case for the foundational models – and that’s where I think the first error in thinking starts to happen,” he notes.

Instead, thinking of AI development in terms of lifecycles, Currier argues, leads to better understanding.


There are, broadly speaking, three distinct lifecycles, each warranting its own adoption strategy. Not every AI development project requires a ludicrous amount of infrastructure, stacks of expensive GPUs, and a village of talent to take off, Currier reminds. Only the development of the initial foundational models demands that level of resources, and most of the time, that step is optional, he says.

“Whether they’re LLMs like GPT and Llama, or computer vision or image-oriented ones like SAM or Dall-E, that is the start of your own adoption and development of AI, and normally someone else is doing it. Absolutely you create your own foundational models but that is a major investment of time, treasure and talent,” he says.

Instead, organizations can skip to the second phase, model tuning, in which a foundational model developed by another company is adopted and incorporated into their applications. This brings into focus the real work: taming the model to make it do all the things one wants.

“This is where a whole lot of focus is right now,” says Currier. “Every SaaS vendor out there has incorporated some kind of foundational model and then tuned it based on whatever service it is they’re providing.”

Curiously, the process of tuning a model is akin to application development in its requirements for infrastructure, tooling and development methodologies. The big difference is the output, which in this context is an AI service.

“It’s a similar development cycle and that means that you’re going to develop, put it into a pipeline, release, test, and have that same loop back into sandboxing or further development of it. You can use any kind of development model you like – Waterfall or Agile; and you can use DevOps and all that sort of thing.”

The third element in the pipeline that Currier touched on in his talk, and quite the buzz right now, is RAG, or retrieval-augmented generation. A neat prompt-augmenting technique that has become hugely popular among AI companies, RAG allows users to encode internal datasets and what is known as “tribal knowledge” into a database, directing the application to focus only on those data points when responding to a question. This allows the model to answer more faithfully and more contextually.
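The RAG pattern Currier describes, encode internal knowledge, retrieve the most relevant pieces, and steer the model with them, can be sketched in a few lines. This is a toy illustration: the bag-of-words similarity and the sample "tribal knowledge" documents below are stand-ins (production systems use learned dense embeddings, a vector database, and a real model behind the prompt):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a term-frequency vector over lowercase words.
    Real RAG systems use learned dense embeddings instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical internal "tribal knowledge" the model was never trained on.
documents = [
    "Deploy edge nodes only after the compliance review signs off.",
    "The staging cluster resets every Sunday at 02:00 UTC.",
    "Expense reports are approved by the regional finance lead.",
]

def retrieve(question, docs, k=1):
    """Rank stored documents by similarity to the question, return top k."""
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(question, docs):
    """Augment the prompt so the model answers only from retrieved context."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("When does the staging cluster reset?", documents)
print(prompt)
```

The augmented prompt, not the model's trained weights, is what carries the internal knowledge into the response, which is why RAG answers can stay grounded in proprietary data.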

Thomson Reuters taps into this by adopting RAG and interweaving it with a human-centric approach that gives the conglomerate’s legal solutions the seal of trust.

“The lawyers can ask for briefs or letters or any legal documents to be drafted using Reuters’ proprietary case data,” Currier says.

No matter where the AI journey begins for organizations, looking at these three lifecycles as standalone paradigms deserving their own set of resources and adoption strategies is vital to the success of any AI endeavor.

“That first step into AI may simply be on the left but ultimately the goal we’re trying to get to is a model that works, that’s tuned to the specifics of the application and the business scenario, and that can be responsive and contextual – and all three of those need to be addressed,” concludes Currier.
