Building Trusted AI for Business Insights: Beyond the Hype

Insights Achieved Podcast

Sep 13 2025 | 00:06:38


Show Notes

In this episode, we confront one of the toughest questions facing executives today: can generative AI really be trusted for business intelligence? While tools like Copilot and Cortex promise quick insights, they often fall short when accuracy, security, and transparency are on the line. We unpack the risks of relying on GenAI for decision making, explore the CEO’s concern about sensitive data, and introduce a five-pillar framework for building AI that leaders can rely on. Finally, we look ahead to the rise of Large Concept Models that combine LLMs with knowledge graphs to deliver insights that are both fast and […]

Episode Transcript

[00:00:00] Speaker A: Welcome back for another deep dive. Today we're looking into, well, the buzz around generative AI for business insights. You know, tools like Microsoft Copilot, Snowflake Cortex. They seem to be everywhere, and they're promising to seriously speed things up, give us faster answers, sharper insights.

[00:00:18] Speaker B: But the big question is, will they really deliver on that promise?

[00:00:21] Speaker A: Right?

[00:00:21] Speaker B: Especially in a business context.

[00:00:23] Speaker A: Right.

[00:00:23] Speaker B: See, while generative AI is frankly brilliant for creating content, writing text, even code, it often kind of stumbles when you need it for serious business intelligence.

[00:00:34] Speaker A: How so?

[00:00:34] Speaker B: Well, it struggles with accuracy, it can misinterpret the context of your business data, and crucially, it usually can't show you how it got an answer.

[00:00:43] Speaker A: Ah, the black box problem.

[00:00:44] Speaker B: Exactly. So when an executive asks that key question, "Can I actually trust this number enough to make a decision?", the answer from generic gen AI is often, well, shaky.

[00:00:54] Speaker A: So our mission today is really to unpack that trust gap, understand why it's there, and maybe figure out how we bridge it.

[00:00:59] Speaker B: Precisely.

[00:01:00] Speaker A: Okay, so let's dig into those limitations you mentioned: hallucinations, context issues, the lack of transparent reasoning. It sounds a bit like what Yann LeCun at Meta talks about, right? That these LLMs, these large language models, have pretty limited logic, no real persistent memory. They can't truly reason or plan.

[00:01:22] Speaker B: That's a great point. And when you apply that limitation directly to enterprise data, you introduce some really significant risks. It's not just theoretical.

[00:01:30] Speaker A: What kind of risks are we talking about?

[00:01:32] Speaker B: Okay, first, security. Think about the access these AI agents might need. Often it's overly broad permissions, which, you know, opens up serious vulnerabilities. Then there's privacy. If your sensitive company data gets fed into, say, a public cloud model, it could completely leave your control. It bypasses all the careful application-level security you've built.

[00:01:51] Speaker A: That's a big one.

[00:01:52] Speaker B: Huge. And then there's the opacity issue we touched on. Leaders can't see the logic. This can lead to, well, a loss of human agency if the AI starts suggesting or even doing things without proper review.

[00:02:03] Speaker A: Right. Acting on flawed logic, potentially.

[00:02:06] Speaker B: And finally, something a bit more subtle: data integrity drift. The AI might generate outputs that look right, maybe seem plausible, but they slowly become misaligned with your actual business definitions or metrics. It erodes trust over time, kind of sneakily.

[00:02:24] Speaker A: That really hits home on the governance and privacy side. It brings us right back to that direct CEO question: if we put our sensitive data into one of these AI tools, who else can see it? Because most standard tools run in these shared multi-tenant setups, right? So that uncertainty is just baked in.

[00:02:41] Speaker B: It is. But look, the solution isn't to just abandon generative AI. That's not feasible or smart.

[00:02:46] Speaker A: So what do we do?

[00:02:47] Speaker B: It's about how you deploy it. You need a single-tenant, kind of fenced-in environment, one that you fully control.

[00:02:53] Speaker A: Like a private instance.

[00:02:54] Speaker B: Exactly. Think of it like giving your company secrets their own dedicated vault instead of, you know, a locker in a public gym. Single tenant means your data never mixes with anyone else's. And that isolation, that control, that security, it directly tackles that executive fear about data exposure. It shifts AI from being this perceived threat to something you can actually start to trust.

[00:03:16] Speaker A: But does putting everything in a single-tenant box limit you? What about leveraging, you know, broader public data for insights? Is there a trade-off?

[00:03:26] Speaker B: That's a fair question. It's really about being smart with your architecture.

[00:03:29] Speaker A: Yeah.

[00:03:29] Speaker B: For your core sensitive business data, the stuff driving key decisions, security and control have to be paramount, full stop. You can still bring in insights from public data, but you do it carefully, integrating them within your trusted environment, not by sending your crown jewels out into the wild.

[00:03:46] Speaker A: Okay, so a controlled environment handles the security piece. But what does trusted AI really mean, beyond just keeping data safe and getting answers fast? What are the pillars of that trust?

[00:03:56] Speaker B: That's the core of it, isn't it? Building genuine trust. We think about it in terms of five key pillars for enterprise AI.

[00:04:03] Speaker A: Okay.

[00:04:03] Speaker B: First, like we said, secure by design. That single-tenant environment, no uncontrolled third-party access. Foundational.

[00:04:10] Speaker A: Got it.

[00:04:10] Speaker B: Second, it has to be explainable. Every insight needs a clear reasoning chain. You need to see the how. No black boxes allowed.

[00:04:19] Speaker A: Makes sense.

[00:04:19] Speaker B: Third, it must be grounded in your business data. Insights need to come from your semantic layer, your knowledge graph, not just generic text patterns pulled from the web. Your context, your specific context. Fourth, always a human in the loop. AI suggests, it augments, it accelerates, but humans make the final call, especially on big decisions.

[00:04:41] Speaker A: So it's a copilot, not the pilot, for now.

[00:04:43] Speaker B: For high-stakes decisions, absolutely.

[00:04:45] Speaker A: Yeah.

[00:04:45] Speaker B: The human brings nuance, accountability, the ability to factor in things the AI might miss.

[00:04:50] Speaker A: Okay, and the fifth pillar?

[00:04:52] Speaker B: Continuously accurate. You need guardrails, things to prevent that data drift we talked about, ensuring the AI stays aligned with your defined business logic and metrics over time.

[00:05:00] Speaker A: Right, so it stays reliable.

[00:05:02] Speaker B: Exactly. Reliable and consistent.

[00:05:03] Speaker A: So it sounds like the practical path is setting up that single source of truth, maybe using things like knowledge graphs to really ground the AI in the specifics of the business. Could you quickly unpack knowledge graph for folks?

[00:05:14] Speaker B: Sure. Think of a knowledge graph as a detailed, structured map of your business. It connects all the important things, customers, products, regions, campaigns, and shows the precise relationships between them.

[00:05:27] Speaker A: So it gives the AI actual understanding?

[00:05:30] Speaker B: Yes. It moves beyond just matching words to understanding the meaning and context within your specific business. This lets us evolve from basic gen AI, which is kind of like a sophisticated parrot, to what we might call large concept models.

[00:05:44] Speaker A: Large concept models?

[00:05:45] Speaker B: Yeah. Imagine combining the language skills of an LLM with that deep, structured understanding from the knowledge graph. Now the AI can explain why things happened, not just what. It can simulate scenarios, recommend actions based on your business logic.

[00:06:00] Speaker A: That sounds like a pretty significant leap.

[00:06:02] Speaker B: It is.

[00:06:02] Speaker A: So, bringing this all together, the companies really getting ahead with AI aren't just playing with cool demos. They're actually building these trusted systems.

[00:06:10] Speaker B: That's right. They're focused on delivering secure, transparent, accountable insights, and doing it fast: minutes, not weeks, so it actually impacts decisions. So at the end of the day, for executives, for data professionals listening, the question isn't really if you'll use AI anymore. That ship has sailed.

[00:06:29] Speaker A: Right?

[00:06:29] Speaker B: The real question, the tougher one, is this: will the AI you do use be trusted enough to actually guide the business when it truly matters?
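
Editor's sketch: the "grounded and explainable" idea discussed in the transcript, a knowledge graph whose every answer carries its reasoning chain, can be illustrated with a few lines of Python. The `KnowledgeGraph` class and all entity names below are invented for illustration; they are not from the episode or any specific product.

```python
# A tiny knowledge graph of (subject, relation, object) triples over
# hypothetical business entities. The point: every lookup returns not
# just facts but a readable lineage, so "how did you get this number?"
# always has an answer. All names here are illustrative assumptions.

from collections import defaultdict

class KnowledgeGraph:
    """Stores (subject, relation, object) triples."""

    def __init__(self):
        self.triples = defaultdict(list)

    def add(self, subject, relation, obj):
        self.triples[subject].append((relation, obj))

    def explain(self, subject):
        """Return every fact about `subject` as a human-readable
        reasoning chain, one step per list entry."""
        return [f"{subject} --{rel}--> {obj}"
                for rel, obj in self.triples[subject]]

kg = KnowledgeGraph()
kg.add("Q3 Revenue", "derived_from", "EMEA Sales")
kg.add("Q3 Revenue", "derived_from", "NA Sales")
kg.add("EMEA Sales", "driven_by", "Spring Campaign")

# Instead of a black-box answer, each insight exposes its lineage:
for step in kg.explain("Q3 Revenue"):
    print(step)
```

In a real deployment, an LLM's answer would be checked against (or generated from) structures like this rather than raw text patterns; the sketch only shows why the graph makes the reasoning inspectable.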
