
Cloud-based real-time analytics company StarTree is set to introduce significant enhancements to its data platform, promising to reshape how enterprises leverage artificial intelligence (AI) and analytics.
The company’s announcement centers on two key innovations: support for the Model Context Protocol (MCP) together with native vector auto-embedding capabilities, and a new deployment option called Bring Your Own Kubernetes (BYOK).
Bridging the Gap Between AI and Real-Time Data
At the core of StarTree’s announcement is a recognition that AI systems—particularly autonomous agents—require fresh, contextual data delivered at unprecedented speed and scale. “The next wave of AI innovation will be driven by real-time context—understanding what’s happening now,” says Kishore Gopalakrishna, co-founder and CEO of StarTree. “StarTree’s real-time analytics foundation perfectly complements where AI is going by delivering fresh insights at scale. What is changing is the shift from apps as the primary means of interaction for consumers to autonomous agents.”
This shift represents a fundamental change in data architecture requirements. While traditional data systems were designed for human users who could tolerate delays and stale data, agentic AI demands sub-second query speeds and an ability to support millions of autonomous agents working simultaneously.
Model Context Protocol Support
MCP, introduced by Anthropic in late 2024, provides a standardized way for AI models to connect with and interact with external data sources and tools. StarTree’s implementation, scheduled for June 2025, will enable AI agents to analyze live, structured enterprise data dynamically.
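MCP is built on JSON-RPC 2.0: an agent discovers the tools a server exposes and invokes them with `tools/call` requests. As a rough illustration of the message shape only, here is a minimal sketch of such a request; the tool name `query_table` and its SQL argument are hypothetical stand-ins, not StarTree’s actual MCP interface.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request of the kind MCP uses for tool invocation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical example: an agent asking an analytics backend for a live aggregate.
req = make_tool_call(
    1,
    "query_table",
    {"sql": "SELECT COUNT(*) FROM orders WHERE ts > now() - INTERVAL '1' MINUTE"},
)
print(req)
```

The point of the standard is that any MCP-aware agent can issue requests like this against any compliant server without custom integration code.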
This capability is particularly valuable for handling extremely high levels of concurrency. As Chad Meley, SVP of Marketing and Developer Relations, explained during a press briefing, “If your data platform is using something like Snowflake or Databricks… [they’re limited to] 10,000 plus queries.” StarTree’s architecture, by contrast, can handle “tens of thousands of queries per second.”
Vector Auto-Embedding
The company’s vector auto-embedding feature, planned for release in fall 2025, will simplify and accelerate the generation and ingestion of vectors for real-time Retrieval Augmented Generation (RAG) use cases. This feature is designed with a pluggable architecture that can work with various embedding providers, such as Bedrock or OpenAI. The system will automatically regenerate embeddings when upstream data changes, eliminating the need for developers to trigger this process manually.
A key advantage of this approach is the removal of one component from the typical AI pipeline. “The way people do this in real time is by adding another component… in front of the database,” explained Chinmay Soman, head of Product. “This native integration means you no longer need that component… which saves cost and maintenance overheads.”
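The behavior described above, a pluggable embedding provider plus automatic re-embedding only when upstream data actually changes, can be sketched in a few lines. This is an illustrative toy, not StarTree’s implementation: `toy_embedder` stands in for a real provider such as Bedrock or OpenAI, and the change detection via content hashing is an assumption about one simple way such a system could work.

```python
import hashlib
from typing import Callable, Dict, List

# A pluggable embedder is just a callable from text to a vector.
Embedder = Callable[[str], List[float]]

def toy_embedder(text: str) -> List[float]:
    """Stand-in for a real embedding provider: hashes text into a tiny vector."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:4]]

class AutoEmbeddingIndex:
    """Keeps embeddings in sync with source rows, re-embedding only changed rows."""

    def __init__(self, embed: Embedder):
        self.embed = embed
        self.hashes: Dict[str, str] = {}
        self.vectors: Dict[str, List[float]] = {}

    def upsert(self, row_id: str, text: str) -> bool:
        h = hashlib.sha256(text.encode()).hexdigest()
        if self.hashes.get(row_id) == h:
            return False  # upstream content unchanged -> skip re-embedding
        self.hashes[row_id] = h
        self.vectors[row_id] = self.embed(text)
        return True

index = AutoEmbeddingIndex(toy_embedder)
print(index.upsert("order-1", "shipped"))    # True: new row, embedded
print(index.upsert("order-1", "shipped"))    # False: no change, skipped
print(index.upsert("order-1", "delivered"))  # True: upstream changed, re-embedded
```

Because the embedder is injected, swapping providers means changing one callable rather than rebuilding the pipeline, which is the gist of the pluggable architecture described above.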
Deployment Flexibility With BYOK
StarTree also announced the general availability of Bring Your Own Kubernetes (BYOK), a deployment option that gives organizations complete control over StarTree’s analytics infrastructure within their own Kubernetes environments, whether in the cloud, on premises, or in hybrid architectures.
This deployment model is designed for security-sensitive customers who require strict data controls. As Soman noted during the briefing, BYOK “comes with its trade-offs,” including customer responsibility for Kubernetes-level upgrades, but offers a good fit for “financial companies, credit card companies… big corporations where they’re very paranoid about security.”
Real-World Applications
StarTree highlighted several potential use cases for its AI-native analytics capabilities:
- Agent-Facing Applications: Enabling AI agents to analyze real-time data to perform tasks like orchestrating food deliveries, monitoring cash flow for merchants, or managing system resources.
- Conversational Querying: Supporting natural language interactions with data, allowing users to ask follow-up questions that build on previous queries without lengthy processing delays.
- Real-Time RAG: Enabling timely RAG for use cases like financial market monitoring and system observability.
StarTree will showcase these innovations at its upcoming Real-Time Analytics Summit 2025, a virtual event taking place on May 14 featuring speakers from companies including Uber, Netflix and AWS.
Stephen Foskett, president of the Tech Field Day business unit at The Futurum Group, said, “I’ve followed StarTree’s development for years, and today’s announcements show they understand where the market is going. Many applications require near real-time integration of data into AI workflows, and this new capability leans into StarTree’s strength. Native vector embedding and BYOK deployment might seem unrelated at first, but together they solve the critical challenge of scaling real-time AI performance. Enterprises now have a way to collapse architectures and accelerate workflows — if they have the GPUs and platform engineering strength to handle it. Otherwise, they can deploy this in StarTree Cloud.”
As enterprises increasingly adopt AI agents for automated decision-making, the demand for systems that can deliver real-time insights at scale will only grow. StarTree’s announcement positions the company to address this emerging market at the intersection of AI and analytics infrastructure.