An offshoot of the Linux Foundation and more than a dozen tech vendors – including Intel, Red Hat and VMware – are launching a project to accelerate open source innovation for the data and infrastructure needed to run rapidly evolving AI workloads.
The Open Platform for Enterprise AI (OPEA) is being run under the auspices of the LF AI & Data Foundation, which was founded in 2018 to support open source AI and data technologies in a vendor-neutral environment and to create a supporting and sustainable ecosystem.
OPEA’s platform includes a framework of composable building blocks for generative AI systems – such as large language models (LLMs), data stores and prompt engines – as well as assessment capabilities for grading generative AI systems on performance, trustworthiness and enterprise-grade readiness.
It also will put a focus on retrieval-augmented generation (RAG), a method that lets LLMs – which form the foundation of generative AI tools like OpenAI’s ChatGPT chatbot – pull in external data, including proprietary corporate data, at query time rather than through retraining, an important point for enterprises as they adopt the emerging technology.
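In practice, the retrieval happens when a question arrives: relevant passages are fetched from a data store and prepended to the prompt, and the model’s weights are never modified. Here is a minimal, dependency-free Python sketch of that flow; the toy corpus, the keyword-overlap retriever and the prompt format are illustrative stand-ins for a real embedding model, vector store and model endpoint.

```python
# Minimal, dependency-free sketch of a RAG flow. The corpus, retriever and
# prompt format are illustrative stand-ins for a real embedding model,
# vector store and LLM endpoint.

CORPUS = [
    "Q1 revenue grew 12% year over year, driven by the enterprise segment.",
    "The on-call rotation for the data platform team changes every Monday.",
    "Gaudi accelerators are deployed in the Frankfurt and Oregon regions.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank passages by naive word overlap; a production system would use
    embeddings and a vector database instead."""
    q_words = set(question.lower().split())
    ranked = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(question: str) -> str:
    """Prepend retrieved context so the model answers from current data,
    not from whatever it memorized during pre-training."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The assembled prompt would be sent to any chat-completion endpoint;
# no retraining or fine-tuning is involved.
print(build_prompt("Where are the Gaudi accelerators deployed?"))
```

The proprietary data enters only through the prompt, which is why RAG is attractive to enterprises that do not want their records baked into a vendor’s model.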
RAG a Central Part of Plan
RAG is a key talking point for IT vendors as they bring new capabilities to market, and it was a focus for Intel CEO Pat Gelsinger during his keynote address at the recent Intel Vision 2024 event. Gelsinger called RAG the “perfect bridge between new LLMs and the big data repositories today,” enabling organizations to augment LLMs with their own real-time data. Enterprises don’t have to rely entirely on pre-trained models or entrust their sensitive data to LLMs from vendors like OpenAI or Google.
OPEA will offer “open-source, standardized, modular and heterogeneous RAG pipelines for enterprises with a focus on open model development, hardened and optimized support of various compilers and toolchains,” Ibrahim Haddad, executive director of LF AI & Data, said in a statement.
The Linux Foundation said the new project comes at a time when generative AI projects – particularly those using RAG – are growing in popularity, and as rapid innovation has led to a fragmentation of tools, techniques and offerings. OPEA will work with IT organizations to standardize components, such as frameworks, architecture blueprints and reference solutions, that demonstrate how these offerings perform, interoperate and enhance what enterprises can do with generative AI.
“The power of RAG is undeniable, and its integration into gen AI creates a ballast of truth that enables businesses to confidently tap into their data and use it to grow their business,” said Michael Gilfix, chief product and engineering officer at KX, a vector database company and one of the organizations signing on to the OPEA project.
OPEA’s architectural blueprints cover the RAG component stack structure and workflows, including LLMs, data stores and processing, guardrails and prompt engines.
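One way to picture that blueprint is as a pipeline of swappable stages that pass shared state from one component to the next. The sketch below illustrates only the composability idea; the stage names and the dict-passing convention are hypothetical, not OPEA’s actual interfaces.

```python
from typing import Callable

# Hypothetical illustration of a composable RAG stack in the spirit of the
# OPEA blueprint; the stage names and interfaces here are invented, not
# OPEA's real API.

Stage = Callable[[dict], dict]

def data_store(state: dict) -> dict:
    """Retrieval stage: attach context from a document or vector store."""
    state["context"] = "<retrieved passages would go here>"
    return state

def prompt_engine(state: dict) -> dict:
    """Prompt stage: merge the question and context into a final prompt."""
    state["prompt"] = f"Context: {state['context']}\nQuestion: {state['question']}"
    return state

def guardrails(state: dict) -> dict:
    """Policy stage: block or redact disallowed content before the model call."""
    if "password" in state["prompt"].lower():
        raise ValueError("prompt rejected by guardrail policy")
    return state

def llm(state: dict) -> dict:
    """Generation stage: call any model endpoint; stubbed here."""
    state["answer"] = f"(model output for: {state['prompt'][:40]}...)"
    return state

def run_pipeline(stages: list[Stage], question: str) -> str:
    state: dict = {"question": question}
    for stage in stages:  # each stage is independently swappable
        state = stage(state)
    return state["answer"]

print(run_pipeline([data_store, prompt_engine, guardrails, llm], "What changed in Q1?"))
```

Because every stage shares the same signature, a deployment could swap the guardrail policy or the model endpoint without touching the rest of the pipeline – which is the point of a composable blueprint.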
Open Source and Enterprise AI
“While lots of vendors and analysts refer to RAG in terms of augmenting an LLM with unstructured data, we believe that – for RAG to be truly effective – it must also augment the LLM with real-time data from your enterprise systems, on demand,” Ronen Schwartz, CEO of data platform provider K2View, wrote in a blog post.
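A hedged sketch of that distinction: instead of searching a pre-built document index, the application queries a live system of record at request time and injects the result into the prompt. SQLite and the schema below stand in for a real operational enterprise database; the helper is invented for illustration.

```python
import sqlite3

# Hypothetical sketch of RAG over structured, real-time enterprise data:
# the application queries a live system of record on demand instead of
# searching a pre-built document index. SQLite and this schema stand in
# for a real operational database.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, status TEXT, updated TEXT)")
conn.execute("INSERT INTO orders VALUES ('ACME', 'shipped', '2024-04-16 09:12')")

def live_context(customer: str) -> str:
    """Fetch current state at request time, so the answer reflects the
    system of record rather than whatever was last indexed."""
    row = conn.execute(
        "SELECT status, updated FROM orders WHERE customer = ?", (customer,)
    ).fetchone()
    return f"Order status for {customer}: {row[0]} (as of {row[1]})"

# The fresh record is injected into the prompt exactly like retrieved documents.
prompt = f"{live_context('ACME')}\n\nQuestion: Where is ACME's order?"
print(prompt)
```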
Kiernan McQuilkin, product marketing manager for market intelligence and search platform firm AlphaSense, wrote last month that RAG is an important component as the open source community embraces generative AI.
“Open source … hasn’t caught up fully to closed frontier models in published performance benchmarks,” McQuilkin wrote in a blog post. “However, it’s clearly leap-frogged closed models on the set of tradeoffs that any developer has to make when bringing a product into the real world.”
He added that open source software and consumer-grade generative AI tools that are trained on the internet may have larger datasets to train from, “but this does not make them more intelligent or reliable – especially if RAG is not involved. For many tasks in your enterprise or investment context, your unique data assets and understanding of the problem offer a leg up on closed models.”
Intel: More is Coming
In its own statement, Intel wrote that the “composable framework is only the first part of what OPEA brings to developers. It sits beside an assessment framework that Intel has already made available in the OPEA GitHub [repository]. This allows for an agreed upon grading and evaluation system of GenAI flows against vectors such as performance, trustworthiness, scalability and resilience, ensuring they are indeed enterprise ready.”
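Intel’s actual assessment framework lives in the OPEA GitHub repository; purely to illustrate the idea of grading a GenAI flow against named vectors, here is a hypothetical, much-simplified harness in which the flow and both metrics are invented stand-ins.

```python
import time

# Hypothetical, much-simplified harness for grading a GenAI flow against
# named vectors. Intel's actual assessment framework is in the OPEA GitHub
# repository; the flow and both metrics below are invented stand-ins.

def flow(question: str) -> str:
    """Stand-in for a deployed RAG flow under evaluation."""
    return "Revenue grew 12% in Q1 per the retrieved context"

def score_performance(question: str, budget_s: float = 2.0) -> float:
    """Performance vector: fraction of a latency budget left unused."""
    start = time.perf_counter()
    flow(question)
    elapsed = time.perf_counter() - start
    return max(0.0, 1.0 - elapsed / budget_s)

def score_trustworthiness(question: str, context: str) -> float:
    """Trustworthiness vector (toy): how much of the answer is grounded
    in the retrieved context, measured by word overlap."""
    answer_words = set(flow(question).lower().split())
    context_words = set(context.lower().split())
    return len(answer_words & context_words) / len(answer_words)

report = {
    "performance": round(score_performance("How did Q1 go?"), 3),
    "trustworthiness": round(
        score_trustworthiness(
            "How did Q1 go?", "revenue grew 12% in q1 per the filed report"
        ),
        3,
    ),
}
print(report)  # an agreed-upon rubric would aggregate scores into a grade
```

A shared, agreed-upon rubric of this kind is what lets different vendors’ GenAI flows be compared on the same enterprise-readiness axes.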
Intel also put a set of reference architectures in the GitHub repository, including for a chatbot powered by Intel’s Xeon 6 CPUs and Gaudi 2 AI accelerators, document summarization and visual question answering for Gaudi 2, and a copilot design for generating code in Visual Studio Code on the accelerator.
Other vendors signing onto OPEA include Cloudera, Domino Data Lab, Hugging Face, MariaDB Foundation and SAS.