TL;DR: Agentic AI is coming for your data management workflows, whether you’re ready or not. Before you deploy autonomous agents, you need four things working together: active metadata, a semantic layer, quantifiable trust scoring, and policy as code. The foundation determines everything.

Twelve months from now, agentic AI will fundamentally change how your organization manages data—if you are ready for it. And if you're not, you'll fall behind the competitors who are. The data management functions your teams handle manually today – policy execution, data provisioning, quality enforcement, lineage tracking – are on a path toward full automation. Not someday. We're talking months.

That is not speculation. It is the trajectory that intelligent agents, active metadata, and reasoning-capable AI are already making possible. Gartner predicts at least 15% of day-to-day work decisions will be made autonomously through agentic AI by 2028—up from 0% in 2024.[1]

The question is not whether autonomous data management is coming. The question is whether your foundation is ready to support it.

After nearly 25 years in data management and conversations with 30 to 50 enterprise customers every week, I can tell you that the bottleneck is almost never the technology. It is the data foundation the technology is sitting on. More often than not, that foundation is fragmented across too many disconnected tools to support what autonomous AI actually needs. As I discussed in a recent DBTA webcast, get the foundation right, and the path to autonomous data management becomes remarkably clear. The future of data is not just governed. It is self-governing. But only if we design it that way.

What agentic AI-driven data management looks like in practice

To make this concrete, consider what a typical day looks like today for a head of data governance. Policy execution means someone reads a regulatory document, manually translates its requirements into action mandates, identifies which data sets those mandates apply to, and waits for a quarterly audit to catch violations. Data provisioning means routing requests across multiple teams—architecture, analytics, quality—before anything reaches the person who asked for it.

Now imagine all of that automated. Policies entered as machine-readable code, enforced in real time. Data provisioning triggered by a conversational request and fulfilled autonomously—with trust signals, lineage context, and quality scores already attached. That is where agentic AI is taking data management, and that is what your data foundation needs to be built to support. Not as a collection of point solutions, but as an integrated environment where each capability reinforces the others.

The four enabling capabilities you need in place

There is no shortcut here. To reach true autonomous data management, four capabilities need to be operational and working together before your agentic AI workflows go live.

1. Active metadata

Static metadata tells you what exists. Active metadata tells you what to do about it. If you search for customer data and get back a list of 15 tables, that is static metadata—useful, but passive. Active metadata looks at those 15 tables, recognizes the redundancy, and recommends consolidation. It turns information into insight and insight into action.

For AI agents to function autonomously, they need metadata that is continuously detected, automatically updated, and actionable across your entire data landscape—not just the systems one particular tool happens to connect to. If your metadata coverage has gaps, your agents will have blind spots. And blind spots at automation speed create failures at enterprise scale.
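The difference between passive and active metadata can be sketched in a few lines. In this illustrative example (all table names, fields, and the overlap threshold are hypothetical), the catalog holds the same inventory a static search would return, but a small piece of logic turns it into the consolidation recommendation described above:

```python
# A sketch of "active" metadata: the inventory a static catalog returns,
# plus logic that converts it into a recommendation. Hypothetical names.

tables = {
    "crm.customers":     {"columns": {"customer_id", "name", "email", "segment"}},
    "mkt.customer_list": {"columns": {"customer_id", "name", "email"}},
    "fin.clients":       {"columns": {"customer_id", "name", "billing_id"}},
}

def overlap(a, b):
    """Jaccard overlap between two column sets."""
    return len(a & b) / len(a | b)

def recommend_consolidation(tables, threshold=0.5):
    """Flag table pairs whose schemas overlap enough to suggest redundancy."""
    names = sorted(tables)
    recs = []
    for i, t1 in enumerate(names):
        for t2 in names[i + 1:]:
            score = overlap(tables[t1]["columns"], tables[t2]["columns"])
            if score >= threshold:
                recs.append((t1, t2, round(score, 2)))
    return recs

recs = recommend_consolidation(tables)
```

A real implementation would compare lineage, usage, and profiling signals rather than column names alone, but the shape is the same: metadata that evaluates itself and proposes an action.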

2. A semantic layer

When you share a data set with a human colleague, you can rely on shared context—they understand what “customer” means in your business, which systems are the source of record, which fields carry regulatory sensitivity. When you share that same data set with an AI agent, none of that context transfers automatically.

Think of an AI agent as a highly capable analyst who needs everything explained from scratch. Your business terms, your data relationships, your domain-specific definitions—all of it needs to be encoded into a semantic layer the agent can reliably interpret. Without this, you are not just risking misinterpretation. You are risking misinterpretation at scale, at speed, without a human in the loop to catch it. And if that semantic layer is not maintained consistently across your modeling, cataloging, and governance environments, the context gaps will show up exactly where you can least afford them.
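One way to picture a semantic layer an agent can reliably interpret is as a machine-readable glossary: each business term carries its agreed definition, its source of record, and its sensitivity flags. The terms, systems, and fields below are illustrative assumptions, not a real schema:

```python
# A sketch of a machine-readable semantic layer. An agent looks context up
# here instead of guessing; unknown terms fail loudly. Hypothetical values.

SEMANTIC_LAYER = {
    "customer": {
        "definition": "A party with at least one active contract.",
        "source_of_record": "crm.customers",
        "sensitive_fields": {"email", "date_of_birth"},
    },
    "revenue": {
        "definition": "Recognized revenue, reported monthly.",
        "source_of_record": "fin.gl_revenue",
        "sensitive_fields": set(),
    },
}

def resolve(term):
    """Return the agreed context for a business term, or escalate."""
    try:
        return SEMANTIC_LAYER[term.lower()]
    except KeyError:
        raise LookupError(f"No semantic definition for {term!r}; "
                          "escalate to a data steward.")

ctx = resolve("Customer")
```

The design point is the failure mode: when a term is not defined, the agent stops and escalates rather than improvising a definition at speed.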

3. A quantifiable trust score

“Do you trust this data?” cannot remain a conversational question. For autonomous data management to work, trust needs to be measurable. For that to happen, you need specific, consistent dimensions that reflect what trustworthy data actually looks like in your organization. Is it coming from the right source? Has it been curated? What is its quality score? How widely is it used? How recently was it validated?

Every one of those dimensions contributes to a trust score that agents and data consumers can act on with confidence. Critically, that score needs to be transparent and explainable, not a black box. It needs to update automatically as underlying data conditions change. And it needs to be embedded directly in the platform where data products are created and consumed, not maintained separately in a tool that only some teams have access to.
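The dimensions above can be combined into exactly this kind of transparent score. In this sketch, the weights and dimension names are illustrative assumptions, not a standard; the key property is that the score returns with its per-dimension breakdown, so it is explainable rather than a black box:

```python
# A sketch of a transparent trust score: a weighted average of the
# dimensions named above, returned with its breakdown. Weights are
# illustrative assumptions.

WEIGHTS = {
    "source_certified": 0.30,  # comes from an approved source of record
    "curated":          0.20,  # has passed a stewardship/curation review
    "quality":          0.25,  # rule-based data quality score, 0-1
    "usage":            0.10,  # breadth of downstream consumption, 0-1
    "freshness":        0.15,  # how recently it was validated, 0-1
}

def trust_score(dimensions):
    """Return (overall score, per-dimension contributions)."""
    contributions = {
        name: round(WEIGHTS[name] * dimensions[name], 3) for name in WEIGHTS
    }
    return round(sum(contributions.values()), 3), contributions

score, breakdown = trust_score({
    "source_certified": 1.0, "curated": 1.0,
    "quality": 0.9, "usage": 0.6, "freshness": 0.8,
})
```

Recomputing the score whenever an underlying dimension changes is what keeps it current, and exposing the breakdown is what keeps it trusted.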

4. Policy as code

Traditional governance lives in PDF documents. Someone reads those documents, interprets them, translates them into action items, and then waits for an audit cycle to find out whether anything was violated. That model was built for human-speed decision making.

Policy as code means your governance rules are encoded directly into your data environment—machine-readable, automatically enforced, and continuously validated before execution, not discovered after the fact. When an AI agent encounters a policy boundary, it does not need a human to interpret the rule. The rule is already in the system. This matters especially in hybrid and multi-cloud environments, where data moves across jurisdictions and platforms and manual policy enforcement simply cannot keep up. Governance that cannot follow your data wherever it lives is governance in name only.
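At its simplest, a policy-as-code rule is a predicate evaluated before an action executes, not a clause discovered in a later audit. The policy, region names, and request shape below are hypothetical, but they show the mechanic: the rule lives in the system, and a violating request never runs:

```python
# A sketch of policy as code: a governance rule expressed as a predicate,
# checked before a data transfer executes. Hypothetical regions and policy.

EU_REGIONS = {"eu-west-1", "eu-central-1"}

def gdpr_residency_policy(request):
    """Personal data sourced in the EU may not leave EU regions."""
    if request["contains_personal_data"] and request["source_region"] in EU_REGIONS:
        return request["target_region"] in EU_REGIONS
    return True

def execute_transfer(request, policies):
    """Validate every policy before execution; block on the first failure."""
    for policy in policies:
        if not policy(request):
            raise PermissionError(f"Blocked by policy: {policy.__name__}")
    return "transfer-executed"  # placeholder for the real data movement

result = execute_transfer(
    {"contains_personal_data": True,
     "source_region": "eu-west-1", "target_region": "eu-central-1"},
    policies=[gdpr_residency_policy],
)
```

Production systems typically express such rules in a dedicated policy engine rather than inline functions, but the contract is the same: machine-readable, enforced at execution time, portable across clouds.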

Where the red line is—and why it matters

Autonomy without defined limits is not innovation. It is recklessness. Every autonomous data workflow needs a clear answer to three questions: What can the agent do on its own? When does it escalate to a human? What is it never permitted to do?

Think of it like Isaac Asimov’s three laws of robotics: every agent you deploy needs embedded rules it cannot override. An AI agent should never contradict regulatory compliance controls. It should not delete critical data assets. It should not change enterprise-wide business term definitions or alter the logic of financial reports without explicit human authorization. These are not edge cases. They are the non-negotiables that need to be defined before you go live, not discovered after something breaks.
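Those three questions can be enforced as a single gate that every proposed agent action must pass through. The action names here are illustrative; the point is that the refusal path has no override:

```python
# A sketch of a red line an agent cannot override: one gate classifies
# every proposed action as autonomous, escalated, or refused outright.
# Action names are illustrative placeholders.

FORBIDDEN = {"delete_critical_asset", "change_business_term",
             "alter_financial_report_logic"}
NEEDS_HUMAN = {"merge_datasets", "grant_access"}

def gate(action):
    """What can the agent do alone? When does it escalate? What never?"""
    if action in FORBIDDEN:
        return "refused"      # hard limit, no override path
    if action in NEEDS_HUMAN:
        return "escalated"    # queued for explicit human authorization
    return "autonomous"       # e.g. profiling, drift detection
```

Putting the check in one place, rather than in each agent, is what keeps the limits consistent across every environment the agents touch.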

The good news is that the goal is not to limit what agents can do. It is to enable them to do more, safely. Machines excel at pattern recognition and optimization. Humans should be the ones authorizing the decisions that matter most. That division of responsibility—clearly articulated, technically enforced, and consistent across every environment your data touches—is what makes agentic AI in data management trustworthy.

The adoption gap is real and it is a data problem

There is a growing gap between what agentic AI can deliver for data management and how quickly enterprises are actually adopting it. The cause is not resistance to AI. It is that organizations are trying to deploy autonomous capabilities on data foundations that were never designed to support them.

You cannot automate what is unstructured. You cannot build trust into a system that has never measured trust. You cannot enforce policies in real time if those policies only exist as documents. And you cannot scale any of this if your data management capabilities are spread across a fragmented collection of tools that do not share a common model of your data. The adoption gap closes when the foundation gets fixed—not before.

Your starting point: one domain, one agent, one capability

Agentic AI in data management is not a switch you flip. It is a capability you build and measure over time. If you want to make meaningful progress in the next 90 days, the approach is straightforward: pick one domain, such as finance, HR, or risk. Define your data products within that domain, including lineage, quality contracts, and trust metrics. Deploy one agent against one well-scoped autonomous capability: drift detection, data quality remediation, or policy checking. Draw a precise line between what the agent handles independently and what requires human approval.
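The whole pilot scope fits in a single reviewable definition. Every value below is an illustrative placeholder, but capturing the domain, the one capability, and the human/agent boundary in one artifact is what makes the 90-day exercise auditable:

```python
# A sketch of a 90-day pilot scope: one domain, one agent, one capability,
# with the human/agent boundary stated explicitly. Illustrative values.

pilot = {
    "domain": "finance",
    "data_products": ["monthly_close", "ar_aging"],
    "agent": {
        "capability": "drift_detection",          # one well-scoped job
        "autonomous": ["profile", "flag_drift"],  # no approval needed
        "escalate":   ["schema_change", "quality_remediation"],
        "forbidden":  ["delete", "redefine_terms"],
    },
    "success_metrics": ["drift_incidents_caught", "mean_time_to_detect"],
}

# Sanity check: nothing can be both autonomous and forbidden.
assert set(pilot["agent"]["autonomous"]).isdisjoint(pilot["agent"]["forbidden"])
```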

As you move through that process, your autonomous capability will improve, your team will build confidence, and you will have the proof point you need to scale. What you are building is not just a technology deployment. It is a new operating model for how your organization manages, trusts, and activates data—one that positions you to capture the full value of agentic AI as it matures.

The agentic AI foundation is everything. Treat it that way.

Source:

  1. Gartner, “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027,” June 2025

GARTNER is a trademark of Gartner, Inc. and/or its affiliates.

Gartner does not endorse any company, vendor, product or service depicted in its publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner publications consist of the opinions of Gartner’s business and technology insights organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this publication, including any warranties of merchantability or fitness for a particular purpose.

Yetkin Ozkucur brings over 20 years of experience in the Data Intelligence space and leads a global team of data professionals at Erwin by Quest. Yetkin has delivered implementations, data governance programs, and proofs of concept to a wide range of clients, including financial, insurance, healthcare, manufacturing, and retail organizations. He is responsible for delivering and guiding Data Intelligence solutions, implementation best practices, and presales activities.

Build your agentic AI foundation

Get the strategies, technologies, and best practices for creating intelligent data environments that let AI agents operate at enterprise scale.