Templates that will make it simpler to invoke the open source LangChain toolkit for building artificial intelligence (AI) applications are on the way. Redis is hosting on its cloud service OpenGPTs, a project launched by LangChain, Inc. that enables organizations to build their own open source editions of the models OpenAI has created for platforms such as ChatGPT.
As part of that effort, the two organizations are now building templates that can be reused to invoke models, control data retrieval and manage where data is stored on Redis Cloud, a managed service for accessing an in-memory Redis database designed specifically for real-time applications.
The templates will make it simpler for builders of AI models and applications to invoke the vector store, semantic cache, and conversational memory capabilities that the Redis database provides. There are also integrations with LangSmith, a tool for debugging, logging and monitoring AI applications.
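To make that concrete, the sketch below shows roughly how those three Redis capabilities are wired up through LangChain's Python APIs today. It is a minimal illustration rather than one of the forthcoming templates: it assumes a reachable Redis instance at REDIS_URL, an OpenAI API key in the environment, and import paths that match late-2023 versions of the langchain package.

```python
# Minimal sketch of LangChain's Redis integrations: vector store,
# semantic cache and conversational memory. REDIS_URL and the sample
# texts are placeholders; exact import paths vary by langchain version.
from langchain.cache import RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings
from langchain.globals import set_llm_cache
from langchain.memory import RedisChatMessageHistory
from langchain.vectorstores.redis import Redis

REDIS_URL = "redis://localhost:6379"  # a Redis Cloud endpoint in practice
embeddings = OpenAIEmbeddings()

# 1. Vector store: index documents so a chain can retrieve them by
#    semantic similarity (the retrieval half of a RAG pipeline).
vector_store = Redis.from_texts(
    texts=[
        "Redis supports vector similarity search.",
        "LangChain templates package common AI workflows.",
    ],
    embedding=embeddings,
    redis_url=REDIS_URL,
    index_name="docs",
)
retriever = vector_store.as_retriever()

# 2. Semantic cache: serve a cached LLM response when a new prompt is
#    semantically similar to one already answered.
set_llm_cache(RedisSemanticCache(redis_url=REDIS_URL, embedding=embeddings))

# 3. Conversational memory: persist per-session chat history in Redis.
history = RedisChatMessageHistory(session_id="user-123", url=REDIS_URL)
history.add_user_message("What does the semantic cache do?")
```

In a template, the idea is that these pieces arrive pre-wired and are exposed as configuration rather than hand-written boilerplate.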
IT teams will also be able to extend the provided templates to address specific tasks, says Tim Hall, chief product officer for Redis. “They can be both reused and extended,” he says.
The models that drive generative AI applications need to run in-memory to deliver the level of interactivity those applications require, notes Hall. As such, the Redis database provides a foundation for building AI models using data stored in a cloud service, he adds.
The LangChain toolkit is already widely used to build AI applications. The challenge now is finding ways to automate the routine tasks IT organizations encounter when working with it.
It’s still early days as far as building AI applications is concerned. In addition to mastering tools such as LangChain and platforms such as Redis, organizations need to define a set of best practices for integrating machine learning operations (MLOps), data engineering, DevOps and cybersecurity workflows so that AI applications are built and deployed consistently and safely. In fact, given the nature of those applications, organizations should assume those processes will one day soon need to pass a rigorous audit, including being able to demonstrate the provenance of the data used to train an AI model.
IT teams, of course, have been creating templates to automate workflows for decades. That concept is now being applied to invoking AI models that many organizations will soon be mixing and matching to address various use cases. There will also need to be processes in place that make it simpler to swap one AI model for another when required.
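Because LangChain's chat models share a common interface, that swap can be close to a one-line change. The hypothetical sketch below treats the model as a parameter of an otherwise unchanged chain; the model names and prompt are illustrative, and the import paths again assume a late-2023 langchain release.

```python
# Hypothetical sketch: LangChain chat models share one interface, so a
# template can treat the model as configuration and swap it without
# touching the rest of the chain. Model names here are illustrative.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

def build_chain(model):
    """Assemble the same prompt/parse pipeline around any chat model."""
    prompt = ChatPromptTemplate.from_template("Summarize: {text}")
    return prompt | model | StrOutputParser()

# Swapping models is a one-line change to the chain's configuration.
chain = build_chain(ChatOpenAI(model="gpt-3.5-turbo"))
# chain = build_chain(ChatAnthropic(model="claude-2"))  # drop-in replacement

print(chain.invoke({"text": "Redis is hosting LangChain's OpenGPTs project."}))
```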
Every organization will naturally need to determine to what extent it can automate the building and deployment of AI applications, but the more manual that process is, the more likely it becomes that bottlenecks will emerge. The difference, however, is that the risk of deploying a flawed AI model is much higher than that of deploying a software component with a known vulnerability that might one day be exploited by a cybercriminal. In comparison, an AI model is likely being relied on to automate a process where even a simple mistake can have catastrophic downstream consequences that adversely affect an organization for years to come.