
Self-service is BI's holy grail. We’ve been chasing it for years through dashboards, spreadsheets, and now AI. But with each new interface, we face new challenges: data is precise, but AI isn’t a precision instrument.
Data is also contextual. LLMs are smart, but they lack context. Ask 10 companies how they define “revenue” or “customer,” and you’ll get 10 different answers.
AI can make it easier for anyone to access data, but it’s not magic. To make it work, it needs help: human judgment, organizational context, and clear feedback loops. A semantic model is essential because it creates a foundation of organizational context for AI to act on.
In the demo above, I walk you through how anyone can ask questions in natural language and contribute context and metadata back to the semantic model to improve the experience.
The model is the foundation, but the UI enables the feedback loop
A chat interface on its own is powerful, but incomplete. Without a direct link to the BI interface, it’s like a compass with no map. You know the direction, but you can’t verify or refine where you’re going. A number without context isn’t helpful. Even for technical users, a raw SQL query can be hard to parse.
This is why we integrate the UI with the model in Omni. For generated results to be interpreted and trusted, users need to know what the AI filtered or how it calculated. If needed, they can refine – without knowing how to code. Every interaction improves the model and sharpens the system to create a constant loop.
Take the question: “How many users do we have?”
A marketer might know to clarify: Does that include all users or active users? What about employees? Partners? Test accounts? Knowing this lets them refine the question to exclude irrelevant accounts. Because the user experience is tightly coupled with the model, the marketer can validate the definition and, if needed, update it. They’re creating a fast, effective feedback loop that improves their AI (and their reporting to boot).
When you use a disconnected or headless semantic layer, business users get locked out. They can’t contribute context, and the model becomes outdated. Instead of fast refinement, you create a backlog of tickets to update the model. But when your model lives where people do their work, it evolves alongside your business.
Creating a shared vocabulary for humans & AI
A well-designed semantic model helps translate from natural language to business context. In Omni, users can ask in their own words, then our semantic model translates the questions using the definitions and context your team has already shared. It’s teaching AI to speak your language.
People don’t say “show me total_revenue from order_transactions.” They say, “How much money are we making?” When someone submits a ticket to an analyst or selects from a drop-down, that mismatch is straightforward to correct: the context is already shared between the requestor and the data (and when it isn’t, explicit feedback loops ensure a shared understanding).
But AI doesn’t know what it doesn’t know. It will give you an answer anyway – and it’ll be “confident” about that answer. And if users don’t have a way to inspect or validate that answer, they either make bad decisions or lose trust in the data and people stop using it. Neither gets you any closer to the promise of self-service.
The solution? Bring end users – humans – into the loop.
Design for AI like you design for your team
Just as a new team member needs to be told which date field to use or the nuances of what to filter out, AI requires the same thoughtful curation and context.
Our semantic model makes it super easy for anyone to add metadata like “ai_context” and synonyms without writing code.
For example, you can add AI context that says: “When someone asks about sales, revenue, how much we’ve sold, etc., they’re asking about total revenue” or “total revenue also means total sales, revenue, total receipts, etc.”
This makes teaching the AI accessible to every user, because the context lives in the UI rather than in code.
The better context you feed AI, the better your AI.
What the workflow looks like in Omni 👀
Someone asks: “Show me revenue by channel in 2024.”
It’s easy for anyone to see they’re on the right path – the labels are right there.

But, the first question is never the last.
Next, they add: “Only include customers in the US.”
The user shouldn’t need to know whether the database stores a two- or three-letter country code; they should be able to ask the way they think, regardless of how the data is stored. It’s a small detail, but for AI without context, it’s too much to ask. With Omni, anyone can provide that context to train the AI, and it becomes reusable for the rest of the business.
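As a concrete sketch of that translation step (hypothetical names, not Omni's implementation), the context amounts to a mapping from how people say it to how the warehouse stores it:

```python
# Hypothetical sketch of the context behind a country filter: a mapping
# from a user's phrasing to the code the warehouse actually stores.
# The column name and code values are illustrative assumptions.

COUNTRY_SYNONYMS = {
    "us": "USA",
    "usa": "USA",
    "united states": "USA",
    "uk": "GBR",
    "united kingdom": "GBR",
}

def country_filter(user_term: str, column: str = "country_code") -> str:
    """Translate a user's phrasing into a SQL predicate, if the code is known."""
    code = COUNTRY_SYNONYMS.get(user_term.strip().lower())
    if code is None:
        raise ValueError(f"No stored code for country: {user_term!r}")
    return f"{column} = '{code}'"

print(country_filter("US"))  # -> country_code = 'USA'
```

Without a mapping like this, an LLM can only guess whether "the US" means `'US'`, `'USA'`, or `'United States'` in your data; with it, the answer is the same for every user.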

AI is great for quick lookups, but what about deeper questions?
So, they say: “Give me a readout to share with my team and call out anything they should pay attention to.”
This is harder to get right, but it’s closer to what people actually need and how they ask for it. People want AI to help cut through noise – to tell them what matters. With Omni’s AI chatbot, even your least technical users aren’t blocked from asking for what they need. And, they can validate the response or easily follow a prompt from the UI to take the investigation further.

Creating a context flywheel for smarter AI
We’re also making it easier to capture the collective knowledge of your team by creating a flywheel: users query → they discover important context → they feed it back to the AI → and they get better answers.
For example, when a marketing analyst updates “Stage Name” with a list of values (Qualification, Closed Won, etc.), that context can be promoted to the organization’s semantic model (of course, subject to permissions and governance natively supported by the platform).
Or, when a product manager generates a query that would be helpful for AI to reference, she can click to “Save as sample query to Topic.”
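Under the hood, that promotion step might look something like this sketch (purely illustrative structures and names, with a boolean permissions check standing in for the platform's governance):

```python
# Hypothetical sketch of the "context flywheel": a user contributes
# field-level context in the UI, and, permissions allowing, it is
# promoted to the shared semantic model. Not Omni's actual API.

from dataclasses import dataclass, field

@dataclass
class FieldContext:
    values: list[str] = field(default_factory=list)
    sample_queries: list[str] = field(default_factory=list)

shared_model: dict[str, FieldContext] = {}

def promote(field_name: str, ctx: FieldContext, can_edit_model: bool) -> bool:
    """Merge a user's local context into the shared model if permitted."""
    if not can_edit_model:
        return False  # governance gate: the contribution stays local
    merged = shared_model.setdefault(field_name, FieldContext())
    merged.values = sorted(set(merged.values) | set(ctx.values))
    merged.sample_queries.extend(ctx.sample_queries)
    return True

# The marketing analyst's "Stage Name" values from the example above:
promote("stage_name",
        FieldContext(values=["Qualification", "Closed Won"]),
        can_edit_model=True)
```

The point of the gate is that individual contributions enrich the shared model only through governed promotion, so the flywheel spins without sacrificing control.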
Every team knows the nuances of their roles, and access to Omni’s semantic model from the UI allows them to contribute to the collective business logic of the org, without needing to know how to code or model data. This accelerates your path to real self-service and lets AI learn as your business evolves.
Natural language requires thought. If a new employee landed in your BI tool and couldn’t determine which “revenue” field or date filter to use, chances are an LLM can’t either.
Omni helps you build both the foundation and the feedback loop by connecting AI to your business logic right in the UI, so anyone can validate, use, and improve it.
If you’d like to learn more about our semantic model for AI or hear use cases from our team and customers, I’ll be sharing more at Snowflake Summit and Databricks Data+AI Summit next month. And if you want to test it for yourself, we’d love to help you.