New Relic is integrating more tightly with Nvidia. The system-level observability company, whose tools monitor applications, cloud-based components and infrastructure, is aligning more closely with the graphics processing unit (GPU) giant to provide insight into AI applications built on Nvidia NIM.

Seldom detailed in terms of its mechanics, Nvidia NIM (Nvidia Inference Microservices) is a set of optimized, cloud-native microservices and self-hosted models designed to simplify the deployment of generative AI models: it abstracts away the complexities of AI model development behind industry-standard APIs.
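Those "industry-standard APIs" are, in practice, OpenAI-compatible HTTP endpoints. A minimal sketch of what talking to a NIM microservice looks like from application code — the endpoint URL and model name below are hypothetical placeholders, not details taken from this article:

```python
import json

# Hypothetical local NIM deployment; substitute your own endpoint and model.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-compatible chat-completion payload of the kind a
    NIM endpoint accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("meta/llama3-8b-instruct",
                             "Summarize this incident report in one sentence.")
# POST json.dumps(payload) to NIM_URL with any HTTP client; the response
# follows the familiar OpenAI chat-completions schema.
print(json.dumps(payload, indent=2))
```

Because the request and response shapes are standard, an observability agent can instrument them generically rather than per-model.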

Software developers can now use New Relic AI monitoring (which we can think of as APM for AI) to get visibility across an AI stack for applications built with Nvidia NIM with a simplified and secure setup process. New Relic’s platform centralizes data from more than 60 AI integrations to provide this level of observability. 

Why that matters comes down to a few key factors.

Why We ‘Observe’ AI

We need monitoring controls in AI model deployments to catch performance and quality issues such as bias, toxicity and hallucinations by tracing the entire lifecycle of an AI response. New Relic does this with what it calls "deep trace insights" for every response. We also need to be able to isolate model-related performance, error and cost issues by tracking key metrics across all NIM inference microservices in one place. New Relic does this with its model inventory technology.
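Tracking those key metrics "in one place" ultimately means being able to query them. A hypothetical NRQL sketch of the kind of per-model rollup this enables — the event and attribute names here are assumptions for illustration, not taken from the article, and will vary by agent version:

```sql
-- Hypothetical NRQL: request volume and average latency per model.
-- Event and attribute names are illustrative; check your agent's docs.
SELECT count(*) AS 'requests', average(duration) AS 'avg latency (s)'
FROM LlmChatCompletionSummary
FACET request.model
SINCE 1 hour ago
```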

“In today’s hyper-competitive market, organizations cannot afford to wait years for AI ROI,” said New Relic CEO Ashan Willy. “Observability solves this by providing visibility across the AI stack. We are pioneering AI observability by extending our platform to include AI apps built with Nvidia NIM. Combining Nvidia’s AI technology with our expertise in observability and APM gives enterprises a competitive edge in the AI race.”

No One-Click AI

Although we might see application vendors promising one-click AI functions with all the glossy corporate sizzle of gourmet sausage, the software engineering function will be more interested in understanding the chefs' skills, the kitchen hygiene and the provenance of the meat. As such, New Relic's work with Nvidia sees it analyze critical accelerated computing metrics such as GPU utilization, temperature and performance states in order to understand context and resolve problems faster.

Providing an extra layer of security sauce, in addition to the security functions of Nvidia's own self-hosted models, New Relic allows teams to exclude sensitive data and personally identifiable information (PII) in AI requests and responses from monitoring. New Relic's monitoring provides a view of the AI stack, along with key metrics on throughput, latency and costs, while ensuring data privacy. It also traces request flows across services and models to understand the inner workings of AI apps.
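In concrete terms, several of New Relic's language agents expose a switch to stop recording prompt and response content altogether while keeping timing and token metrics. A hedged sketch of what that looks like in a Python agent config file — the exact setting names are assumptions here and should be verified against the agent documentation for your version:

```ini
; newrelic.ini -- hypothetical sketch, not a verbatim reference config.
[newrelic]
ai_monitoring.enabled = true
; Drop prompt/response text (and any PII it may contain) from recorded
; events, while still capturing latency, error and token-count metrics.
ai_monitoring.record_content.enabled = false
```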

AI Model Selection Pack

This news sees New Relic extend its in-depth monitoring to Nvidia NIM, supporting a range of AI models including Databricks' DBRX, Google's Gemma, Meta's Llama 3, Microsoft's Phi-3, Mistral AI's Mistral Large and Mixtral 8x22B, and Snowflake's Arctic.

This integration follows New Relic's recent addition to Nvidia's AIOps partner ecosystem. Using Nvidia's accelerated computing, New Relic combines observability and AI to streamline IT operations and accelerate innovation through its machine learning and generative AI assistant, New Relic AI. New Relic AI monitoring is available as part of the company's all-in-one observability platform and is offered via its usage-based pricing model.