January 30, 2026

5 Key Takeaways from Our Panel on Breaking the Metadata Bottleneck for Contextual AI Insights

Why most “talk to your data” initiatives stall — and what it actually takes to break the metadata bottleneck and deliver production-grade, trustworthy AI analytics.

Tobi Beck


Yesterday, Solutions Review hosted a Solutions Spotlight featuring Promethium, with insights from Kevin Petrie (VP of Research, BARC) and Prat Moghe (CEO, Promethium). The conversation focused on why so many “talk to your data” initiatives stall and what it actually takes to get to production-grade results.

Here are the five biggest takeaways.

 

1. Conversational Agents Are Finally Here — but Production-Grade Analytics Still Isn’t

Enterprises have moved quickly from demos to deploying agents, yet analytics remains one of the hardest bars to clear. As BARC research shared during the panel showed, while many organizations now have agents in production, only a fraction are using agentic AI successfully for analytics.

The reason is simple: analytics requires precision, explainability, and trust. “Pretty good” answers aren’t good enough when real business decisions are on the line.

 

2. Context Engineering Is the Real Bottleneck

The panel made it clear that natural-language-to-SQL is no longer the hard part. The real challenge is translating business meaning into data logic at scale:

  • What does “revenue” mean here – gross, net, recognized, booked?
  • Which business unit, region, product line, and time period?
  • What adjustments, exclusions, or policies apply?

Until organizations can operationalize this business context, accuracy plateaus – no matter how good the model is.

 

3. Metadata Isn’t Missing — It’s Scattered and Underutilized

The panel reinforced a key reality: most enterprises already have a lot of metadata, definitions, dashboards, and “tribal knowledge.”

The problem is that it’s spread across:

  • Data platforms and warehouses
  • BI tools and semantic models
  • Catalogs and glossaries
  • Documents, tickets, and query history
  • Governance systems and access policies

Context engineering is largely the work of finding, reconciling, and applying this context dynamically at the moment a question is asked.
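One way to picture that reconciliation step is a toy resolver that consults several metadata sources in trust order at question time. The source names and definitions below are invented; a real system would pull from catalog APIs, BI semantic models, documents, and query logs:

```python
# Hypothetical sketch of context assembly at question time: gather candidate
# definitions for a term from several metadata sources and reconcile them by
# a trust ranking. Sources and definitions are invented for illustration.

SOURCES = [  # ordered most- to least-authoritative
    ("governance_catalog", {"churn": "customers lost / customers at period start"}),
    ("bi_semantic_model", {"churn": "cancelled subscriptions / active subscriptions"}),
    ("query_history", {"churn": "COUNT(status='cancelled') / COUNT(*)",
                       "arr": "SUM(mrr) * 12"}),
]

def resolve(term: str) -> tuple[str, str]:
    """Return (definition, source) from the most trusted source that defines the term."""
    matches = [(src, defs[term]) for src, defs in SOURCES if term in defs]
    if not matches:
        raise LookupError(f"No metadata source defines {term!r}")
    src, definition = matches[0]  # highest-trust source wins; others flag conflicts
    return definition, src
```

Note that "churn" is defined three different ways across the three sources; the reconciliation policy, not the individual source, is what makes the answer consistent.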

 

4. Governance Must Be Embedded, Not Bolted On

As conversational analytics moves toward self-service, governance can’t remain a manual or after-the-fact process.

The panel emphasized extending governance beyond data to include:

  • Models and hallucination risk
  • Agents and tool usage
  • End-to-end traceability and explainability

The more autonomy you give users, the more governance must be automated, contextual, and enforced by design.
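"Enforced by design" might look, in miniature, like a policy gate that every generated query must pass before execution, with an audit trail for explainability. The roles, columns, and policies here are hypothetical:

```python
# Hypothetical sketch of governance by design: an automated policy gate that
# every generated query passes, recording a trace for explainability.
# Roles, columns, and policies are invented for illustration.

POLICIES = {"salary": {"hr_analyst"}, "ssn": set()}  # allowed roles per restricted column

def gate(user_role: str, columns: list[str], audit: list[str]) -> bool:
    """Allow the query only if every column is permitted for this role."""
    for col in columns:
        allowed = POLICIES.get(col)  # None => unrestricted column
        if allowed is not None and user_role not in allowed:
            audit.append(f"DENY {user_role} -> {col}")
            return False
    audit.append(f"ALLOW {user_role} -> {columns}")
    return True

trace: list[str] = []
ok = gate("sales_analyst", ["region", "salary"], trace)
```

The audit list is the explainability piece: whether a query ran or was blocked, there is a machine-readable record of why.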

 

5. Solving Conversational Data Analytics Requires Rewiring the Data Stack — Not Replacing It

A key insight from the discussion was that conversational data analytics isn’t a single-layer problem.

To work in production, enterprises must solve three challenges at once:

  • Agent engineering: enabling personalization, intent-awareness, and user-specific experiences at scale
  • Context engineering: operationalizing business definitions, policies, lineage, and institutional knowledge
  • Data engineering: accessing distributed, live data securely without brittle pipelines or massive data movement

Treating any one of these in isolation leads to failure. Production-grade conversational analytics emerges only when all three are addressed together — and connected across the existing data stack.

 

Final Thoughts

The panel’s message was simple: the winners won’t be the teams with the flashiest agent demos.

They’ll be the teams that invest in context engineering: bridging definitions, policies, lineage, and user intent so AI can deliver trustworthy answers in the real world.

Because scaling agentic analytics isn’t just a model problem: it’s a context problem.

Click below to watch the full conversation. To learn more about how Promethium solves these challenges to enable AI-driven insights at enterprise scale, reach out to our team to schedule a demo.

 

 

Full Transcript of the Webinar

Solutions Spotlight: Breaking the Metadata Bottleneck to Drive Contextual AI Insights

Doug Atkinson, President, Solutions Review: Broadcasting from our New England studio, Solutions Review is proud to showcase Promethium in the Solutions Spotlight, an inside look at enterprise technologies.

I’m Doug Atkinson here at Solutions Review and welcome to the Solutions Spotlight featuring Promethium and focused on breaking the metadata bottleneck to drive contextual AI insights.

Most organizations have experimented with conversational agents that promise insights on demand. But they struggle to achieve production-grade results. The reason is an inability to integrate metadata at scale for real-time contextual answers to business questions. So today we will examine three structural challenges behind that bottleneck: distributed data sets, heterogeneous environments, and rising expectations for true self-service analytics.

To drive the discussion, we have invited Prat Moghe, the Chief Executive Officer of Promethium, and Kevin Petrie, Vice President of Research and Head of Data Management Practice at the enterprise analyst firm BARC. Kevin, Prat, thanks for being with us.

Kevin Petrie, VP of Research, BARC: Great to be here.

Prat Moghe, CEO, Promethium: Thank you, Doug. Great to be here.

Doug Atkinson: Hi, Kevin. I’m glad you’re here. It’s nice to have a CEO and a high-end analyst in the spotlight today. So I’m going to get out of the way, Kevin, and I’m going to hand it over to you. I know you’re going to walk through a little bit of a presentation, and you’re going to have a nice conversation with Prat. So I’ll hand it over to you. When you’re wrapping up, we’ll also be able to get to some Q&A and I’ll catch up with you on the other side.

Kevin Petrie: Doug, thanks very much. Really pleased to have the opportunity to speak with folks today. Metadata, AI, and the semantic layer are not new concepts. We’re hearing a lot more about AI these days. I think the good news in terms of where we are with the AI adoption curve is that there’s growing high-level recognition—long awaited—that clean, well-organized data inputs have all kinds of implications and importance for the outcomes of AI innovation. And so we’re at a really interesting point in the adoption curve. Organizations that are getting this right can really empower all kinds of decision makers to interact with their data and use AI in pretty exciting ways.

I want to start by talking about the promise—a very specific promise in this context—which is that conversational agents taking advantage of GenAI-powered chatbots promise self-service analytics at last. The notion of data democratization and self-service analytics has been around for quite some time, and there’s been a desire to use automation to lower the skill set that’s required, lower the barriers that are required for operational decision makers and strategic decision makers throughout the organization, so they can have at their fingertips the data they need in order to make better decisions.

So the promise here is that using these conversational agents, you can enable self-service exploration for faster decisions. You can translate complex data sets into clear and actionable business insights. We’ll go through some examples. And you can reduce the reliance on specialists—those overwhelmed data analysts within BI teams, those overwhelmed data engineers within data teams—by reducing the level of prep work and governance work that’s required of them because you’re automating a lot of these analytical interactions.

The reality is that we’re not quite there yet. It shouldn’t be a surprise—the notion of an agent was not mainstream even 16 to 18 months ago. Organizations have come a long way towards using agents. Our BARC research is showing that as of the end of 2025, about half—50%—of organizations had agents in production, either limited or full production. We also found that only 27% of them are using agentic AI for analytics specifically. So there is a gap, and that’s despite the fact that most organizations—nearly all—have BI teams in place. So agents are available, agentic tools are available within BI processes and so forth, but they’re not using them.

And the reason is that organizations are struggling with diverse, opaque data sets, with complex hybrid and multi-cloud environments, and with perhaps most importantly, bottlenecked metadata.

Some supporting points here: Data quality has vaulted from number five or number eight—depending on how you measure it—among lists of obstacles to AI success back in early 2024, to number one now. So that’s a problem. Data quality is a big issue here. Most organizations have hybrid environments. Many of them that have one cloud also use another cloud. And so data is highly distributed. In fact, we’re finding that nearly half of AI projects rely on data inputs that are in more than one location—47% to be specific. So that contributes to a lot of this complexity. And the metadata is definitely something that holds rich potential to organize those assets, but it’s so distributed and complex in itself that it can contribute to a bottleneck that prevents success with agentic analytics.

Types of Metadata

So what do we mean by metadata? We’ve got technical metadata, which refers to the schemas, formats, data versions, storage or processing requirements—technical aspects of data sets or data products.

Business metadata takes it up a level. It describes these data assets in terms of the products, whether they’re certified as products to be usable throughout the enterprise. It might have glossary terms. Metadata also might describe ontological relationships between concepts or business entities. It can help classify data and include key performance indicators.

Governance metadata is absolutely critical here. This is the metadata that helps organizations build and enforce policies and specific technical rules to reduce risk across their environments. So this might be the quality scores of specific data products or more rudimentary data assets. It will include policies themselves, rules such as privacy rules, access rights, rules associated with AI/ML models.

Finally, we have our operational metadata, which is lineage—looking at the end-to-end relationships, ideally from a data source to an intermediate tool to an end user, maybe using an AI/ML model along the way. What are those relationships? Operational data also includes usage patterns—the specific type of role of a data consumer and what’s their intent. This is critical. This is something Prat’s going to talk about. What does that user want?

So these are all very rich levels of intelligence about data that can help you organize diverse assets across your environment in order to provide the right business context to analysts.

Fantastic. But it’s hard. There’s a lot of it. And you have this proliferation of users, projects, and data objects—structured, unstructured, and semi-structured. A proliferation of platforms. Most organizations that use Databricks also use Snowflake. They also have stuff on-prem. A proliferation of data centers. A proliferation of clouds. So it’s kind of a mess right now.

The Semantic Layer

A comprehensive semantic layer, while not a new concept, is becoming increasingly important. And it’s absolutely critical for organizations to get this piece right because the semantic layer can unify your technical, your business, your governance, your operational metadata. It can map all these distributed data sources to consistent business entities, to metrics or KPIs, so that business people can understand when they’re getting an output from a data source that they’re talking about just the right customer, the right LLC, that they’re talking about just the right measure of revenue, and just the right measure of revenue per rep—all these metrics that really matter to the business.

The semantic layer also needs to encode those relationships between entities and data sets to trace the lineage end to end. So that if you have a model, a recommendation engine, or in this case the conversational agent that makes a mistake, you want to understand—tracing back—what was the input going into that LLM? What was the RAG process that supported that? And then you can start to fix those and avoid issues in the future.

You also want to ensure that you’re gathering the right logic so that the context is machine-readable. The semantic layer can help translate natural language queries into governed, optimized data requests. And it’s going to ideally serve curated and context-rich outputs directly to conversational agents and to the humans that use them.

And this is much easier said than done, which is why I’m very excited to have Prat here. We’re going to be asking him some questions shortly. Because if you picture what’s happening with a decision maker—such as a business analyst leading analytical efforts for a sales team—they need to understand with context which customer they’re talking about, which region, what the goal is, what are all the different contributors and factors to business performance as they explore that topic.

Use Cases for Conversational Agents

If you get it right, conversational agents can supercharge a lot of AI use cases. This is again some fresh data published by my colleagues Shawn Rogers and Merv Adrian in December about overall use case adoption. And you can see these use cases in terms of price optimization, sales forecasting, inventory management, supply chain management, and optimizing energy consumption. All those benefit hugely from conversational agents because you’ve got a smart human interacting with data in an intelligent, ad hoc way in order to—often in a real-time fashion—make smarter decisions that have better business context.

So with price optimization, you might be recommending dynamic pricing based on demand, cost, elasticity. Inventory management is critical, and supply chain optimization is critical. We’re learning from Davos this week that supply chains, I think, will remain somewhat in flux because there’s a lot of geopolitical uncertainty out there. A lot of countries are still figuring out how to trade with one another. And so that just underscores the need for agile, contextual insights into your business. Energy consumption is becoming more important. We know that AI consumes a lot of power, a lot of data centers are getting built. That’s just one type of customer that I think can benefit from conversational agents that help optimize decisions.

Recommendations for Getting Started

So I’ll conclude here by recommending ways to get started. In all likelihood, you’re with an enterprise that has distributed data sets, diverse data sets. You have a need to order that with metadata, break this bottleneck in order to interact and analyze that.

What I would recommend is:

  • Identify agentic AI use cases based on the upside to the business, the risk, and the data readiness—and don’t swing for the fences. Go for some solid base hits: achievable, low-risk projects that deliver value to start.
  • Prioritize your data sources based on the suitability and the AI readiness, the organization of the metadata.
  • Capitalize on opportunities to integrate metadata across catalogs and tools. We are seeing a rise in confidence among enterprises on their ability to integrate and organize metadata across catalogs.
  • Strongly consider the role of an independent semantic layer in your environment to help make sense of this.
  • Evaluate those semantic layer offerings based on their support for rich ecosystems and their ease of use.

So Prat, we’re glad you’re here with us today. Perhaps you can tell us a little bit about the trends you’re seeing with customers in terms of pain recognition that metadata matters, and then that can be a segue into describing how Promethium approaches this. But what customer pain are you seeing right now?

Prat Moghe: First off, thanks for inviting me, Kevin. It’s great to have this conversation with you and with Solutions Review. Thanks, Doug.

So the topic that we are talking about, Kevin—conversational AI agents—that is one of the top, what I would say is, it’s in the hype cycle right now, very much at the peak of the hype cycle in the enterprise. And enterprises have done a whole bunch of POCs with the promise of AI over the last year, but what we are seeing is that there’s been a disillusionment about that experience.

There was this thinking that you can take an AI model, you can connect it to the enterprise data, and lo and behold, you start actually getting self-service analytics and insights out of that data. The reality is exactly what you said—the enterprise data is not in one place, it’s not clean, context is all over the place. So there are a lot of challenges that are essentially frustrating the business leaders, the Chief Data and AI Officers.

They’re primarily trying to do exactly what you said. The use cases are all driven by clear need. You could be an insurance company and you’re looking at claims, you’re looking at underwriting, you’re looking at market research. And in all of those cases, the business leaders feel like they can get a competitive advantage. They can understand more about where the customers are. They can engage more effectively if they got access to data and they could ask more questions.

So everybody wants to ask ad hoc questions more quickly because they have a clear understanding of what they get through reports and dashboards, and then they want to ask the next question—which is trying to ask the why, the what, the how. What if we did this? What if we could do that? And all of those questions today, they have to wait for data engineering. They have to wait for pipelines. They have to wait for dashboards. And so the business is basically tired of that.

And then the business analysts and the business leaders want to do exactly the things that you talked about, which is everything from supply chain to pricing optimization. All of these problems are horizontal. We are not seeing just any one vertical. We are seeing retailers, we are seeing utilities, financial services, insurance companies—across the board. This is not just a marketing problem or a sales problem or a product problem or a supply chain problem. It goes across. It’s horizontal.

People have looked at different models. So this is not a model problem. They’ve looked at this—this is not a data platform problem. We find people have distributed data. Like you said, Snowflake customers are also Databricks customers. In fact, we found that in the large enterprise, 30% of those customers that have Databricks also have Snowflake. And then they have SaaS applications like Salesforce. They are on-prem, they are in the cloud, they’re multi-cloud. So it’s all over the place. In fact, the average Fortune 1000 customer has over 12 data platforms. People may not understand this. People think that if they’re modernized, that means they’re single platform. It’s actually very diverse.

So there’s a lot of excitement, but there is also frustration, and this question of: why am I not able to solve this problem when I can just do ChatGPT on public data and I get reasonably good answers? Why can’t I access my own data that I control and get answers? This is what every CEO is asking every CDO and AI officer. And as you just pointed out, it is not an easy problem. So at Promethium, that’s the problem we’ve been looking at for the past year, and we’ve engaged with many of those CDOs that went through those POCs and came out with scars. So we are excited that that’s a meaningful problem we think we can address.

Kevin Petrie: Fantastic. So tell us more about Promethium.

Prat Moghe: Thanks, Kevin. So we have a few slides I’d like to take you and the audience through.

Again, we are not talking about unstructured data where people are searching for data and asking for very specific insights—we’re mostly focused on structured and semi-structured data. Answers here have to be numeric. Answers have to give you a trend. Answers have to be precise because decisions are going to be linked to these insights.

And for production, what we’ve seen is there’s a whole bunch of agents in the market today. Every platform has an agent. Whether it’s Cortex and Genie on the data platforms, or SaaS applications like Salesforce, or BI agents like Tableau and ThoughtSpot, or catalogs like Atlan and Alation that support agents—there’s a whole ecosystem. Every platform supports their agents, and obviously each of the models have agents. Yet only a small minority of the broad-based, ad hoc questions that people will ask—about revenue, churn, whatever the key KPIs are—can actually be answered accurately today through these agents.

What we found is, just like you said, it’s about solving three things: getting access to data that’s distributed; getting access to the context (what we call metadata, but we sometimes use the word context); and like you said, it’s about bringing business context, technical context, and governance context—the metadata—all together. And then being able to access anybody in the business regardless of the tool or the channel that they are in. So they could be in their own BI tool, they could be in an application, they could be in an enterprise agent. You want to be able to access that.

AI Insights Fabric

So at Promethium, that’s the problem we’ve been solving. We’ve developed an open agentic platform called AI Insights Fabric. It essentially sits in your enterprise environment without requiring any changes to your stack.

And the idea is questions can come from the business layer into the AI Insights Fabric. The AI Insights Fabric will query your live data that’s in different data platforms without moving the data. And the AI Insights Fabric will use the context from a variety of your context sources—whether it’s a catalog, or documents, or semantic models that could come from your BI tools, or your past dashboards, and other business rules—and use that to basically resolve those questions to accurate answers that are built in the AI Insights Fabric. And then that is fed back to the business. That’s essentially how Promethium works.

And the goal of Promethium is to essentially accelerate your accuracy versus context engineering cost. If you look at all these large enterprises today, Kevin, when they look at a particular domain—let’s say it’s marketing—they’ll look at where the data is, they’ll start to look around where the context is, and they have to basically test it with a bunch of questions. They’ll ask questions. They look at how accurate those answers are, and they go through this exercise of figuring out how much context is needed so that they can answer these questions reliably.

Well, that curve—accuracy to context engineering—is like an S-curve. The more context engineering you put in, the more accurate results you get. But the threshold for this accuracy is really what is acceptable enough for production—for a business user to ask questions and get good answers. And it’s not easy to make them happy. You have to do significant context engineering. And this is not a scalable S-curve in most of these enterprises, because once you train this on, say, marketing, you have to move to the next domain and you have to do this work all over again. And every user has different expectations, their level of impatience, their intent is different.

So the goal with the Promethium framework is to shrink that S-curve down to the left so that very quickly you can get accuracy at the production level. The trick to making this work is firstly getting access to all the data without moving it, but then also on top of it, getting access to all your context and curating the right context.

360 Context Hub

In our case, it’s really a five-level context. We call it a 360 Context Hub that brings context from five levels:

  1. Technical metadata
  2. Relationships
  3. Business concepts that exist in the catalogs
  4. Measures, metrics, and policies defined in the semantic layer
  5. Intent—who the person is and what their preferences are

All of these are encoded into the agent memory so that the next set of users in the same domain can essentially reuse what has been built in the past. So that’s what we do.

Three-Layer Architecture

And essentially it’s a three-layered architecture:

Query engine at the bottom: We use open-source Trino that allows us to get access to a variety of sources, but we’ve gone and enhanced that so that it’s performant, uses pushdown with fine-grained access control.

The 360 Context Hub: As I said, it brings this five-level context in. Sometimes “context graph” is a term you’ve now started to hear from a variety of sources. Context graphs are very hard to build in general because there’s no direction to those graphs, whereas here you’re trying to build for a specific problem, and so we can build a directed graph tied to the accuracy of a specific question.

At the top, the Data Answer Agent: This basically either allows you to sit in an agent and type questions and answers, or this is headless and any agent in your enterprise can basically tap into this AI Insights Fabric to get access to data and context.

So essentially there’s an AI analyst that helps any business analyst get access to this data in context. This is a productivity play. Or we have also an AI BI or an AI readiness kind of a play where any agent can plug into our AI Insights Fabric.

Live Demo

I’m going to end with a very quick live example. This is showing you Mantra, which is basically our AI analyst, where you can start asking questions. In this case, I’m asking the first question—one business people are always very interested in: what data do I have access to? And the system can come back and, for this insurance case, it could say you have access to all this claims data. And once I see it, then I’d be interested in asking more.

So in this case, I’m asking the question: what are my most profitable policies? And again, live, the system is going, looking at all the discovered data, the catalog, it’s going and querying that data, it’s bringing in context from a variety of places. In this case, you will notice that it’s giving you answers based on the word “profit.” Profit doesn’t exist on any of your tables. Profit’s coming from a business layer—either from a catalog or a semantic layer—and that’s being stitched in live. So this is basically how you break the metadata bottleneck.

How do I know this is accurate? Well, I can examine the code. If I’m a business analyst and I’m SQL-savvy, I can actually look at the code. I could look at the lineage. Where is this thing coming from? And if I am somebody who is savvy with my data, I have a very clear understanding that this is good. I could then endorse this, and this gets saved for others to use.

So this is how we are thinking that the way you connect business users to the data is you’ve got to build on that trust: data access, the governance and context in the middle, and then insights on the top. And if you can bring all of these things together in a way that’s open—where you can access this across any data, any context, and any tool—then we think everybody wins in this.

So that’s essentially the summary of how Promethium works. And the exciting thing is we are in production in many of these enterprises. We’ve seen that S-curve shrink. And so we are very excited to team up with CDOs and CIOs and make sure that the business can get access to data. So that’s a quick summary of what Promethium is.

Kevin Petrie: So Prat, this is great stuff. And before we go to live questions, I’d love to understand—one of the things that I think is interesting about Promethium is that you speak more about the intent of users than some of the other vendors or thought leaders in this space, which I think makes perfect sense because ultimately we want to help humans make good decisions.

Maybe you could just talk about the dimensions of figuring out what the intent of a person is and how you assemble that context on a real-time basis, because there are all sorts of dimensions, all sorts of data out there, but the user needs quick action. So what goes into user intent and responding to that? And I should say that Prat and I went to a Celtics game yesterday, and I had a good interaction with your product leadership on this. So we’d love to hear more.

Prat Moghe: It’s exactly what we observed at the Celtics game, right? Like if you think about it, we were up in the rafters. And when you’re in the rafters and you’re in this exclusive area, everybody around you is essentially trained to take care of you. That’s not the same experience. I’ve had seats that were really inexpensive, right? And I’m literally part of the crowd, and I’ve had a great time there as well. But the requirements are different. Like I’m there yelling to say, can I grab some popcorn when the vendor is coming around?

So this whole idea of who you are trying to take care of as the customer—whether it’s a salesperson, whether it’s an executive, whether it’s somebody who’s the business analyst who’s savvy—my point is that we’ve seen power business analysts that are very savvy. They’re skeptical. They look at something and they’re like, “Hey, I know exactly where the data is. I know exactly how to go build these things.” Even then, when you think about it, you’ve got to give them all the details on when something is working, when it’s not working, when you don’t know if the answer is right or not. And I think if you do that, then you win their trust. And then they start saying, okay, what could I do with this thing that I couldn’t do before? And that’s what builds trust.

When you go one level less technical to somebody, they don’t want to know the details. They are going to look at the results and they’re going to say, does this thing make sense or does it not make sense? So as important as giving them the insight and the chart is to be able to give a likely explanation on what you’re seeing, why you’re seeing it—and then inviting them to basically engage. Does this thing make sense, or do you have a better idea?

So I think the key part, Kevin, that we’ve seen is the technology elements exist with agents. So when you look at the agent engineering aspect itself here, we have a discovery agent, we have a bunch of sub-agents that are actually going and doing the work. But the most important thing is to understand, based on the persona—like we have this concept of domain—based on the persona, you can start to see what’s the past work that has been done before.

Query history is a very powerful signal in this noise. It’s not like great work wasn’t done in these enterprises. They’ve been doing great work. A lot of this thing has been done before. So the key is to learn from that past history on what products were built, what were the dashboards, what were the endorsed queries, where does that data lie, what were the things that were joined in the past—all of those at different layers. If you bring that in and you build on top, that’s what builds trust. And then the key is to add this personalization on top, which is to say what would frankly add value to this customer. The last thing you want to do is for somebody who knows the obvious to give them one more copy of that same information.

It’s about understanding who’s asking, the channel they’re sitting in, and the kind of information they’re expecting—those are the things we have to stitch together. And this is why we believe that ultimately, if you look at the layers—data platforms have agents, catalogs have agents, there are independent semantic layer products, and there are BI or consumption layers—each of them has agents.

Today the challenge is, if somebody says “I want to be able to deliver an experience, let’s say I want to solve this conversational insights problem,” we think it’s going to be really hard if you are always in these silos of agents because each agent only understands something well.

Well, if I’m Kevin and I come in to watch Celtics and I’m in the rafters, somebody’s got to take care of Kevin from the time he comes to the parking lot, gets into the lobby, gets him through security into the right place, and then takes care of his drinks, makes sure he has a good time—all of that is an end-to-end thing.

And so to us, the struggle today with metadata context graphs—they’re very nebulous. But if you start with a specific business problem, which is that, hey, I’ve got this power analyst or this business leader and I need to go solve the customer analytics problem for them so that when they want to look for who to call next or what’s the offer I place in front of them, let’s go figure out what’s the context I need to bring in to solve that problem. What’s the data I need to access? And I can measure that by giving you an insight. And then if it’s not a good insight, they’ll say thumbs down. If it’s a good insight, they’ll endorse it. And that’s what builds the signal.

So our philosophy has been: we’re not necessarily out to compete with all the agents that already exist. Our idea is basically to go connect what exists so we can actually give an outcome that you couldn’t do before. And if you do that, it’s an open agentic platform. So we can hydrate back the context into a catalog. We can go and enable more context to existing agents like Cortex and Genie. We can push more results to the BI agents, and so everybody will win. So that’s kind of our way of addressing this problem.

Kevin Petrie: So to take that business user and convert them into a VIP data consumer.

Prat Moghe: Essentially—for every persona, we believe, the goal is ultimately to enable them and unlock what’s possible. They have so much potential and excitement around AI, and they just don’t know what they can do with it. What we are frankly doing is trying to get out of the way and, in a responsible way, give them access to that data and give them the signal. The rest they’ll build. That’s our belief.

And we’ve seen that. We’ve seen power analysts who took three days to build the next analysis—who’s the next customer they should target to raise money, for example—shrink that to about six hours, just because we could get them access to data, get them access to context, and then they could just experiment with questions. So I think that’s the key here.

Q&A Session

Doug Atkinson: All right, excellent job with that. And I want to jump into a little bit of a Q&A. I have a few questions here that we’ve been gathering up. The first one being: how do you mitigate governance risk when using conversational agents?

Kevin Petrie: So, we look a lot at governance. Most organizations have some semblance of a data governance program in place. They all recognize that data quality, as I mentioned, is the number one obstacle to AI success. So the task they have is to strengthen data governance and extend that program—policies, rules, standards—to address two new domains of risk. One is models and one is agents.

And it’s tricky. It requires bringing together people, roles, cross-functional teams and giving them an explicit charter to help envision what the risks are associated with traditional data problems—privacy, bias, quality, and so forth. And then figure out from a model perspective, how do you mitigate the risk of a model hallucinating? How do you respond to that or give guardrails so that the outputs can still provide relevant and valuable and accurate results?

On the agent side, it gets pretty interesting, because you want to make sure that the agents—especially if they’re custom—don’t pursue different intentions than the humans have. There are a lot of different ways to address those risks. I think the more you can standardize on a single platform, and the more you can take advantage of an existing catalog that has these enriched governance capabilities, the better. And make sure that your data engineers, your AI teams, and your developer teams are speaking very closely with one another while they all serve the business to mitigate risk.

So no easy answers, but it needs to be done.

Doug Atkinson: This is not a negotiation of whether or not this is going to happen. This is going to happen. And so the question is: how does this impact the traditional data stack?

Prat Moghe: Many people believe that the modern data stack is dead. It’s done. Because it had this model of traditional platforms and data engineering and pipelines. And so what we are seeing is, particularly with conversational analytics and this idea of AI agents that come on top, they’re certainly going to compress that stack. And the value is going to be generated on top. And it’s going to be about context. It’s going to be about access to different models. It’s going to be about personalization at scale.

So in many cases, you’re already seeing that in terms of traditional players versus the new players. There is a lot of value in the way the traditional data stack was built—in terms of governance, in terms of where the value is. It’s still in the data and the context. So we believe that an open agentic architecture should be evolutionary, and it should basically be able to tap into what has been built. And if you can do it in the right way, you don’t have to throw away the old investments and you can build on top—so that you’re accelerating time to market, you’re managing governance, and at the same time you’re showing value.

But it is certainly going to be a very interesting challenge and tension, Doug, between what existed and what’s coming. And you’re going to see that play out.

Doug Atkinson: Well, I’m glad you brought up context because I think this is where it gets really interesting when you’re talking about AI versus human, neural networks, and so on. How do you handle conflicting definitions with regard to context? The same metric meaning different things across different domains. How do you handle that?

Kevin Petrie: We’re seeing a rise in confidence among organizations in their ability to share metadata across catalogs. Catalogs are meant to contain a glossary in a lot of cases that defines specific business terms. You’ve also got master data management, which helps standardize terminology to apply to different business entities. So it requires kind of a careful orchestration of MDM platforms, maybe a master data registry, a glossary often within a catalog, and then it requires sharing that metadata across platforms.

We see most organizations have, let’s say, two to four catalogs, and they’re actually getting pretty good at sharing metadata across those. So it does require the ability to standardize and federate across platforms. And organizations are definitely tackling this because it’s absolutely critical.

Prat Moghe: I like this question you asked—which is the same term meaning different things based on the group you’re in or the department you’re in. If you’re a salesperson, you’re asking about revenue versus you’re a finance person asking about revenue—they could resolve to very different data that’s meaningful.

And so I think what’s again important is, when you are constructing a conversational agent stack, you’ve got to flow through who the user is and what group or department they’re in—a domain, almost like a mesh concept—and you’ve got to feed in the past work they’ve done, where they got results they were happy with. There’s a lot of signal there, so all this context and metadata that Kevin talked about can be curated down to the meaningful stuff you can actually use.

So when the salesperson said “revenue,” it’s this table number 63 in Snowflake, and I want to go tie it with something else, and then I can feed back results. And all of that has to be done dynamically so that the next salesperson who asks the question—you’re not trying to do this work over and over again. It’s already done in the past.

So you want to build institutional memory based on people, but you want to do it one group at a time rather than boiling the ocean and doing it for the whole enterprise at once. So that’s the top-down approach that we think works and makes it more scalable.
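Prat’s point—the same term resolving differently per domain, and the next person in that domain reusing the work—can be sketched as a domain-scoped resolution cache. The names and mappings below are hypothetical; in a real system the discovery step would consult catalogs, glossaries, and query history:

```python
class TermResolver:
    """Resolve a business term to a dataset, scoped by domain, caching results
    so later users in the same domain reuse past resolutions."""

    def __init__(self):
        self._cache: dict[tuple[str, str], str] = {}  # (domain, term) -> dataset

    def resolve(self, domain: str, term: str) -> str:
        key = (domain, term.lower())
        if key not in self._cache:
            # Expensive path: runs only the first time a (domain, term) pair is seen.
            self._cache[key] = self._discover(domain, term)
        return self._cache[key]

    def _discover(self, domain: str, term: str) -> str:
        # Stand-in for the real discovery work (catalog lookups, query history,
        # glossary definitions). Mappings here are illustrative only.
        known = {
            ("sales", "revenue"): "warehouse.sales_mart.bookings",
            ("finance", "revenue"): "warehouse.finance_mart.recognized_revenue",
        }
        return known.get((domain, term.lower()), f"unresolved:{term}")

resolver = TermResolver()
print(resolver.resolve("sales", "revenue"))    # resolved via discovery
print(resolver.resolve("sales", "revenue"))    # same answer, now from the cache
print(resolver.resolve("finance", "revenue"))  # same term, different domain
```

Keying the cache on (domain, term) rather than term alone is what lets “revenue” mean bookings to sales and recognized revenue to finance without the two groups overwriting each other.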

Doug Atkinson: So I’m going to throw one last question out and then we’ll wrap things up. But I’m going to go with you first, Kevin. And that is: somebody’s watching an event like this and they are trying to get their ducks in a row. They know this is something they have to deal with, and they want to take advantage of all that’s available to them. What piece of advice would you offer for them that they can do in preparation for beginning an engagement and having a conversation with a company like Promethium?

Kevin Petrie: The nice thing about technology is that the technology changes very fast, and that’s fun and exciting, but a lot of the principles of success are somewhat timeless. And so what I’ll say are things that are perhaps obvious but always bear repeating.

Let’s start with business pain. Looking across your organization, who’s in pain right now? What decision makers are in pain because they can’t get the right contextual information to make smarter decisions? And then what aspect of that, what sort of self-contained aspect of that, might you be able to improve, deliver business results—better revenue, or more likely, lower costs—in a three to six month time frame?

If you start to take a commercial conversational analytics agent tool and apply it to your data, you want to make sure that the metadata challenge is surmountable in the near-term time frame. So do something that’s lower risk, maybe lower reward, but demonstrable reward, and kind of go from there. But start by looking for business pain among existing consumers of data.

Prat Moghe: Yeah, the only thing I would add—and this is spot on—is: start by following the business pain. Identify a use case where you can say, these are the top three or five questions I’d like to answer. Then use that as a way to see this thing actually work in production, see if you can onboard more users, and then move to the next domain.

I think the key is don’t think of this as a departmental problem. Think of it as an enterprise problem, but then sequence it with the first one or two departments and use that as a way to build confidence. The key is to think in a way where it’s not just siloed on a single tool or a single set of users, because this is an enterprise-wide problem.

Doug Atkinson: Yeah. And this has been a great event, and we appreciate your time. There is no question that this is moving very quickly. But the biggest mistake you can make is just sitting there being paralyzed and expecting something to come along and grab you. You have to reach out. You have to start to engage. Certainly should be engaging a company like Promethium. And if you want to follow everything that’s going on, by all means follow BARC. These guys are dialed in completely in terms of where everything is headed.

So Prat, Kevin, thanks very much for the time today. Great event. Appreciate it.

Kevin Petrie: Thank you. I appreciate it. Great to learn more about Promethium as well. Doug, appreciate the plug.

Prat Moghe: Thank you everyone.

Doug Atkinson: Thank you guys. Well, there you have it. Another solution in our spotlight. We want to thank Promethium for their participation today, and we appreciate your attendance as well. Until next time, I’m Doug Atkinson here at Solutions Review. Thanks for watching.
