I’ve seen this story play out dozens of times. An organization launches an AI proof-of-concept with huge excitement. Six months and half a million dollars later, it quietly dies in a PowerPoint deck because the data was untrustworthy, the context was missing, or nobody trusted the output enough to act on it.
Everyone knows “garbage in, disaster out,” yet most organizations still treat data readiness as something that happens after the model is built instead of before.
In my three decades leading data intelligence and governance initiatives, including almost four years at Quest Software, I’ve watched the smartest leaders flip the script. As I recently explored in depth on the Driven by Data podcast, these leaders are building AI for AI: using artificial intelligence to manage, harmonize, govern, and serve data at the speed and scale modern AI models demand.
This isn’t marginal improvement. It’s an order-of-magnitude leap in results.
What AI for AI really means
I often describe an organization’s data landscape as an orchestra. Your data modelers, stewards, and catalog specialists each play their part beautifully, but without a conductor harmonizing their efforts, the performance falls flat.
AI for AI acts as that conductor, dynamically blending data modeling, governance, quality assessment, and cataloging into cohesive, trustworthy data products. Rather than requiring human orchestration for every initiative, AI coordinates these disciplines automatically, pulling together the right metadata, applying appropriate business rules, wrapping governance guardrails around data assets, and serving them up ready for consumption.
This approach doesn’t replace your existing investments—it amplifies them. The process that once required months of manual work now completes in two to three days.
As I said on the podcast: “Four to six months down to two to three days using AI.” It’s remarkable what can be done.
The autonomous data product revolution
“We’re creating our own problem that we’re trying to solve.”
That’s how I described the predicament the industry has created for itself. Teams have critical calculations spread across hundreds of spreadsheets, reports scattered across different systems, and business rules documented only in emails and tribal knowledge. When someone needs that data, they must hunt through sources, reconstruct the business context, verify quality, and check for privacy concerns.
Autonomous data products flip this model. You describe a business need in natural language, and AI generates a logical specification, discovers where relevant data resides, maps logical to physical structures, and wraps business guardrails around the entire package. The result is a complete, governed, trustworthy data product ready for your data marketplace.
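To make those steps concrete, here’s a minimal Python sketch of the pipeline. Every function, class, and field name is my own illustration rather than an actual product API, and the stubs stand in for the catalog, metadata, and governance services a real platform would call.

```python
from dataclasses import dataclass

# Stub service calls -- stand-ins for the catalog, metadata graph, and
# governance engine a real platform would provide.
def generate_logical_spec(request: str) -> dict:
    # 1. Natural-language request -> logical specification
    return {"entities": ["customer", "order"], "measures": ["lifetime_value"]}

def discover_sources(spec: dict) -> list[str]:
    # 2. Discover where relevant data resides
    return ["crm.customers", "erp.orders"]

def map_logical_to_physical(spec: dict, sources: list[str]) -> dict:
    # 3. Map logical entities to physical structures
    return {entity: sources for entity in spec["entities"]}

def apply_guardrails(mapping: dict) -> dict:
    # 4. Wrap business guardrails around the package
    return {"pii_masked": True, "quality_rules": ["customer_id is never null"]}

@dataclass
class DataProduct:
    """A complete, governed, trustworthy bundle ready for the marketplace."""
    name: str
    logical_spec: dict
    physical_mapping: dict
    guardrails: dict
    trust_score: float = 0.0   # matures with usage, as described below

def build_data_product(name: str, request: str) -> DataProduct:
    spec = generate_logical_spec(request)
    sources = discover_sources(spec)
    mapping = map_logical_to_physical(spec, sources)
    return DataProduct(name, spec, mapping, apply_guardrails(mapping))

product = build_data_product("customer-360", "Lifetime value per active customer")
```

The shape is what matters: each stage consumes the previous stage’s output, so the whole chain can run without a human stitching the disciplines together.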
And these data products don’t remain static. As users interact with them, the products mature over time (a minimal sketch of this feedback loop follows the list):
- Quality improves through continuous feedback
- Documentation becomes richer and more accurate
- Trust scores increase based on usage patterns
- Collective intelligence refines what’s available
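Here’s an illustrative sketch of that feedback loop, assuming a simple event model; the weights are arbitrary, and a real platform would tune them continuously against outcomes.

```python
from dataclasses import dataclass

@dataclass
class UsageEvent:
    rating: int           # 1-5 stars from a marketplace consumer
    reused: bool          # was the product consumed in a downstream asset?
    issue_reported: bool  # did the consumer flag a quality problem?

def mature(trust_score: float, events: list[UsageEvent]) -> float:
    """Nudge a product's trust score with each interaction (weights are illustrative)."""
    for e in events:
        trust_score += 0.02 * (e.rating - 3)              # good ratings raise trust, bad ones lower it
        trust_score += 0.01 if e.reused else 0.0          # reuse is social proof
        trust_score -= 0.05 if e.issue_reported else 0.0  # reported issues knock trust back
    return max(0.0, min(1.0, trust_score))                # clamp to [0, 1]
```

The direction matters more than the constants: positive engagement compounds over time, while reported issues pull trust down until they’re resolved.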
Data marketplaces: making trust consumable
Data consumers should be able to browse your data marketplace the way shoppers browse Amazon: searching for datasets, comparing alternatives, reading ratings from colleagues, and checking trust levels before committing to a specific asset.
This approach dramatically increases adoption because users already know how to navigate, contribute feedback, and discover new assets. Every interaction feeds back into trust scoring and recommendation algorithms, so the more the marketplace is used, the more trustworthy it becomes. It’s a flywheel effect, building the trust we need in our data to confidently move ahead.
Trust goes beyond data quality
“AI today can be confidently wrong… it will suggest glue for your pizza.”
Anchovies? Maybe. Glue? Yuck.
But this is what happens when AI is fed bad data.
Organizations traditionally equate data trust with data quality. While quality matters, AI initiatives demand a more comprehensive view.
Modern trust models incorporate multiple facets (a minimal scoring sketch follows the list):
- Data quality metrics (completeness, accuracy, freshness)
- Curation & documentation (business definitions, ownership)
- Source authority (system of record vs. shadow copy)
- Usage popularity and social proof (ratings, reviews, reuse)
- Governance & sensitivity (PII flags, compliance status)
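As a minimal sketch of how those facets might roll up into a single score: the weights below are assumptions for illustration, not a published standard, and each organization would calibrate its own.

```python
# Illustrative composite trust score over the five facets listed above.
# Facet scores are normalized to [0, 1]; the weights are assumptions.
FACET_WEIGHTS = {
    "quality": 0.30,           # completeness, accuracy, freshness
    "curation": 0.20,          # business definitions, ownership
    "source_authority": 0.20,  # system of record vs. shadow copy
    "social_proof": 0.15,      # ratings, reviews, reuse
    "governance": 0.15,        # PII flags, compliance status
}

def trust_score(facets: dict[str, float]) -> float:
    """Weighted average of facet scores; a missing facet counts as zero."""
    return sum(FACET_WEIGHTS[k] * facets.get(k, 0.0) for k in FACET_WEIGHTS)

print(trust_score({"quality": 0.9, "curation": 0.7, "source_authority": 1.0,
                   "social_proof": 0.5, "governance": 0.8}))  # ≈ 0.805
```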
Research from IDC [1] shows that while 78% of organizations claim to fully trust AI, only 40% have invested in making their systems demonstrably trustworthy through governance, explainability, and ethical safeguards. This trust gap represents a critical vulnerability.
As I said on the Driven by Data podcast, “Your AI model is going to suck if one part of that data is missing or you can’t trust it.”
It’s blunt, but it’s true. This is why data quality and governance matter.
The semantic layer imperative
Large language models speak naturally, but they lack understanding of your business context. Without a semantic layer, AI can confidently suggest absurd solutions because it draws from generic training data disconnected from your organizational reality.
Data models, business glossaries, and conceptual frameworks create this semantic layer. When properly integrated, they provide AI with context to filter out irrelevant data, prioritize authoritative sources, apply privacy controls, and align outputs with business objectives.
This semantic foundation enables LLMs to deliver responses grounded in your organizational reality rather than generic internet knowledge. It allows AI to self-correct and self-govern, staying within approved boundaries without constant human oversight.
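As a rough sketch of what this grounding can look like: the snippet below resolves business terms against a toy in-memory glossary before a question ever reaches the LLM. A real semantic layer would sit on a catalog or metadata service; the glossary entries and function names here are assumptions for illustration.

```python
# Minimal sketch of a semantic-layer lookup, assuming a toy in-memory
# glossary. A real implementation would query a catalog or metadata service.
GLOSSARY = {
    "active customer": {
        "definition": "Customer with at least one order in the last 90 days",
        "authoritative_source": "crm.customers",        # system of record
        "restricted_fields": ["ssn", "date_of_birth"],  # privacy controls
    },
}

def ground_prompt(question: str) -> str:
    """Prepend governed business definitions so the LLM answers from
    organizational reality instead of generic training data."""
    context = [
        f"- '{term}': {meta['definition']} "
        f"(source of record: {meta['authoritative_source']}; "
        f"never expose: {', '.join(meta['restricted_fields'])})"
        for term, meta in GLOSSARY.items()
        if term in question.lower()
    ]
    preamble = "Use these governed business definitions:\n" + "\n".join(context)
    return f"{preamble}\n\nQuestion: {question}"

print(ground_prompt("How many active customers did we add last quarter?"))
```

Because the definitions, sources of record, and privacy restrictions travel with the prompt, the model answers inside the boundaries the glossary defines.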
Data management convergence: the platform imperative
You’ve likely accumulated data management tools over years of incremental acquisitions. A modeling tool here. A catalog there. Data quality assessment in another system. Together they create integration nightmares and limit your ability to leverage AI effectively.
According to Gartner® [2], “sixty-three percent of organizations either do not have or are unsure if they have the right data management practices for AI.”
Don’t be afraid of this – the uncertainty creates opportunity. By building modern data foundations now, you position your organization among the minority that can deliver on AI’s promise. Data management platforms address this fragmentation by unifying modeling, cataloging, quality assessment, and governance capabilities in a single environment.
When everything exists on one platform, you can build AI capabilities that span the entire data management lifecycle. Automated data product creation can query logical models, profile physical tables, apply governance policies, and publish to your marketplace without complex integrations.
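Here’s a sketch of what that single-platform experience might look like from code. The client class and its methods are hypothetical, not any vendor’s actual SDK; the point is the shape: one API surface, no glue code between separate tools.

```python
# Hypothetical unified-platform client spanning the data management lifecycle.
class DataPlatform:
    def query_logical_model(self, domain: str) -> dict:
        return {"domain": domain, "entities": ["customer"]}               # stub

    def profile_tables(self, model: dict) -> dict:
        return {"customer": {"row_count": 1_200_000, "null_rate": 0.01}}  # stub

    def apply_policies(self, model: dict, profile: dict) -> dict:
        return {"model": model, "profile": profile, "pii_masked": True}  # stub

    def publish(self, product: dict) -> str:
        return "marketplace://customer-360"                              # stub

platform = DataPlatform()
model = platform.query_logical_model("customer")
profile = platform.profile_tables(model)
print(platform.publish(platform.apply_policies(model, profile)))
```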
The path forward: three critical steps
This is ultimately where I landed on the podcast, and where years of diving deep into the data world have led me: “If you’re not doing something to improve the business, then why are you doing it?”
I tell every executive I meet that your competitors face the same AI challenges you do. They struggle with data quality, fragmented tools, and slow time-to-market. The organizations that solve these problems first will pull ahead dramatically. They’ll deploy AI applications in days while you take months. They’ll scale initiatives confidently while you struggle with trust issues.
Here’s how to improve your business in three critical steps:
- Embrace data products as your fundamental unit of delivery. Stop trying to build comprehensive enterprise data models before enabling any AI use case. Package data product by data product, each one a complete, governed, trustworthy bundle ready for specific business needs.
- Adopt a unified data management platform rather than continuing to integrate point solutions. The fragmented approach might have worked when you had months to assemble resources, but AI demands seamless orchestration across modeling, governance, quality, and cataloging.
- Resist the temptation to build everything yourself. Instead, rely on a converged data management platform that lets you manage, govern, and activate data consistently across hybrid and multi-cloud environments, underpinned by trust scores and AI model certification to ensure reliable, successful AI.
Your next AI initiative doesn’t have to fail.
The question to ask is whether you’ll continue managing data the old way, accepting high AI failure rates as inevitable, or whether you’ll embrace AI for AI approaches that can deliver results by harmonizing your data management capabilities, enabling trust at scale, and delivering the speed AI demands.
With proper foundations – autonomous data products, comprehensive trust models, intuitive marketplaces, and unified platforms – you can confidently move from proof-of-concept to production. The data you need is probably already in your organization. You just need better ways to find it, trust it, and use it. That’s what AI for AI delivers.
I’ve seen the technology work. The approaches are proven in production environments.
Sources
1. IDC eBook, sponsored by SAS, “Data and AI Impact Report,” EUR153787025, October 2025.
2. Gartner Press Release, “Lack of AI-Ready Data Puts AI Projects at Risk,” February 26, 2025. GARTNER® is a trademark of Gartner, Inc. and its affiliates.
