
Editorial note: This post focuses on how we built agentic analytics. If you want to learn how to use it, check out this post for example workflows.
To build AI our customers could trust, we needed a different approach from most AI systems. Text-to-SQL models look cool in a demo, but they rarely make the jump to production. We needed to build something robust and ready for the enterprise.
Our early agent was primitive: it had access to basic tools like query and summarize. Watching how customers used this agent via our MCP server in apps like Cursor and Claude revealed something important: people wanted to ask broad, open-ended questions about their data, just like they do with general LLMs. “How is our pipeline looking this quarter?” “Why did revenue jump last month?” “How are customers using AI features?”
These questions span multiple datasets, and they’re rarely answered with a single query. So an agent needs to think, plan, and act with a broader set of tools. That insight became the foundation for our agentic framework, built to tackle real analytical challenges while staying as reliable and governed as everything else in Omni.
Today, this allows our AI to go beyond a single query and decide what to do next, whether that’s continuing an analysis, comparing datasets, digging into the data for the right matches, or summarizing what it’s learned. It also opens the door to many more features and workflows.
It took lots of work and rewiring, but the payoff is pretty exciting. Instead of stopping at “here’s your SQL,” our agentic AI can orchestrate a chain of analyses to surface insights that help you make decisions and act. The result gives our AI a bit more rope (or “agency” if you will 😉) to decide how to solve a problem on its own.
Here’s a look at some of the work that got us here:
Breaking past one-and-done AI #
When people ask a business question, the answer is rarely the result of a single query. We saw this firsthand when we started asking our AI tougher questions on our own data:
“What percent of our users are currently using our AI capabilities?”
With the way our data is structured, this requires two queries: one to count total users, and another to count the users specifically using our AI features.
“Why did our revenue increase last month?”
Any question starting with “Why?” will almost certainly send an analyst on a wild goose chase, slicing and dicing data across many queries until they find a definitive answer.
“Pull a list of all accounts with active demos from the last quarter grouped by sales rep, then summarize the key trends for me to share with the exec team.”
These aren’t necessarily asks that require queries of multiple datasets, but they do require multiple interactions with the same dataset.
These are real questions we needed to answer, and none could be solved with a single query. We had to evolve our product to support them.
At this point, we’d already helped some of our customers successfully answer these kinds of questions using an LLM with our MCP server. The key was that the LLM was intelligently coordinating our AI. It could break down the user’s question into parts, run queries to answer each individually, then summarize the results. That insight spurred us to bring the same orchestration layer natively into Omni’s AI.
So we architected our AI around a coordinator, a mechanism that decides which tool to use next based on the question, the results so far, and what’s already been tried. It’s what lets the AI adapt mid-flight, retry when things break, or stop when it has something useful to show.
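To make the pattern concrete, here’s a minimal sketch of a coordinator loop of this shape. Everything here (the Tool interface, the pickNext decision function, the step cap) is hypothetical, not Omni’s actual internals; the point is the decide-act-observe loop.

```typescript
// Hypothetical coordinator loop, not Omni's implementation. An LLM-backed
// pickNext chooses the next tool from the question and the history so far.
type ToolResult = { ok: boolean; output: string };

interface Tool {
  name: string;
  run(input: string): Promise<ToolResult>;
}

interface Step {
  tool: string;
  input: string;
  result: ToolResult;
}

type Decision = { tool: string; input: string } | "done";

async function coordinate(
  question: string,
  tools: Map<string, Tool>,
  pickNext: (question: string, history: Step[]) => Promise<Decision>,
  maxSteps = 10 // hard cap so a confused agent can't loop forever
): Promise<Step[]> {
  const history: Step[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const decision = await pickNext(question, history);
    if (decision === "done") break; // coordinator has enough to answer
    const tool = tools.get(decision.tool);
    if (!tool) continue; // unknown tool; the next decision can recover
    const result = await tool.run(decision.input);
    // Failed steps stay in history so the next decision can retry or adapt.
    history.push({ tool: decision.tool, input: decision.input, result });
  }
  return history;
}
```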

Once we’d built the coordination layer, Omni’s unique advantage became clear: our built-in semantic layer already enforces governance, context, and business logic by design. This is the difference between a generic LLM and an AI-native platform with context built in. Our semantic layer acts as the intelligence backbone, teaching the AI your specific business language: your metrics, definitions, and logic. Instead of guessing, the coordinator makes decisions based on a deep understanding of how your business works.
With this institutional knowledge, the agent already knows what “active customer” means, understands which fields drive revenue, and respects who’s allowed to see what. And since the semantic layer is visible to humans too, users can debug or improve the logic.
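To make that concrete, here’s a toy illustration of the kind of definitions a semantic layer centralizes. The schema and syntax below are invented for this sketch, not Omni’s actual modeling language:

```typescript
// Invented schema for illustration; not Omni's modeling language. The point
// is that business logic lives in one governed place the agent reads from.
const semanticModel = {
  topics: {
    customers: {
      description: "One row per customer account",
      fields: {
        active_customer: {
          type: "boolean",
          description: "Has at least one active seat in the last 30 days",
          sql: "last_seat_activity_at >= CURRENT_DATE - INTERVAL '30 days'",
        },
        revenue: {
          type: "number",
          description: "Annual recurring revenue in USD",
          sql: "contract_value_usd",
        },
      },
      // Row-level governance: the agent inherits the same visibility
      // rules as the human asking the question.
      accessFilters: ["region = {{ user.region }}"],
    },
  },
};
```

Because the agent reads definitions like these instead of guessing from raw tables, “active customer” means the same thing in every query it runs.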
Another upside of this approach is resilience. Even when the underlying data isn’t perfectly curated, the agent can recognize when its assumptions didn’t hold and adjust its approach instead of returning an empty or incorrect result.
Our semantic layer makes agentic querying more traceable and accurate. It’s the reason our AI knows what counts as revenue, how to filter churned users, and what execs care about in a summary.

Building the AI toolshed #
Once we had the coordinator in place, we needed to organize its toolbox to help it operate as effectively as possible.
Topic picker + query tool
Inspects metadata across Topics (curated datasets that serve as the foundation of analysis in Omni) and makes an informed choice before generating SQL.
Why we built it: For questions that span multiple datasets, the Topic picker helps the AI find the right combination of data to answer accurately.
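As a rough sketch of the selection step (hypothetical types and a deliberately simple keyword score; in practice an LLM would rank richer metadata), the picker might compare the question against each Topic’s metadata before any SQL is written:

```typescript
// Hypothetical sketch: score each Topic's metadata against the question,
// then generate SQL only against the best match.
interface TopicMeta {
  name: string;
  description: string;
  fieldNames: string[];
}

function pickTopic(question: string, topics: TopicMeta[]): TopicMeta | undefined {
  const words = question.toLowerCase().split(/\W+/).filter(Boolean);
  let best: { topic: TopicMeta; score: number } | undefined;
  for (const topic of topics) {
    const haystack =
      (topic.description + " " + topic.fieldNames.join(" ")).toLowerCase();
    // Toy relevance score: how many question words appear in the metadata.
    const score = words.filter((w) => haystack.includes(w)).length;
    if (!best || score > best.score) best = { topic, score };
  }
  return best && best.score > 0 ? best.topic : undefined;
}
```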

Field values tool
This tool lets the agent interpret your question, fetch the list of valid values for a field, and adjust its filter to the correct match.
Why we built it: Before this rework, a question like “How many users are in New York?” could return no results because the generated filter didn’t match how the data is actually stored (for example, if the state field holds “NY”).
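Here’s a minimal sketch of that pattern, with hypothetical names (resolveFilterValue, fetchDistinctValues) and a toy matching heuristic standing in for whatever the agent actually uses:

```typescript
// Hypothetical sketch of the field-values pattern: fetch the valid values
// for a field, then map the user's phrasing onto the closest stored value.
async function resolveFilterValue(
  requested: string,
  field: string,
  fetchDistinctValues: (field: string) => Promise<string[]>
): Promise<string | undefined> {
  const values = await fetchDistinctValues(field); // e.g. ["NY", "CA", "TX"]
  const needle = requested.toLowerCase().trim();
  // Toy heuristics; a real agent would let the LLM choose among the
  // fetched values instead.
  const initials = needle.split(/\s+/).map((w) => w[0]).join(""); // "new york" -> "ny"
  return (
    values.find((v) => v.toLowerCase() === needle) ?? // exact match
    values.find((v) => v.toLowerCase() === initials) // abbreviation match
  );
}
```

With this in place, a filter built from “New York” resolves to the stored “NY” value, and the query matches real rows instead of coming back empty.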