The AI Agent Governance Stack Has Four Layers. The Industry Just Built Three.
Four enterprise vendors defined AI agent governance in 72 hours ahead of RSAC 2026. The definition is missing one critical layer: whether the documents agents read are actually correct.
What just happened
Between March 9 and March 11, 2026, four enterprise vendors announced major AI agent governance products in under 72 hours: AvePoint, OneTrust, Netskope, and Bedrock Data. The timing isn't coincidence. With RSAC 2026 opening March 23 in San Francisco, the pre-conference window is when security vendors publish their year-defining announcements. All four chose the same week.
What's worth paying attention to isn't the timing. It's what these four announcements, taken together, reveal about where enterprise AI governance now stands — and where it stops.
Why this week's announcements matter
Enterprise AI agent deployments have crossed a threshold. The question is no longer whether to govern AI agents. It's what governing them actually means in practice.
Four vendors, from four different angles — compliance, access control, network security, and data auditing — published answers in the same 72 hours. When that happens, you're watching a market settle on a shared definition. And that definition will matter. It will show up in procurement checklists, audit frameworks, and eventually regulatory guidance. The version of "AI agent governance" that enterprises internalize this spring will shape how they evaluate AI risk for the next several years.
Getting an accurate picture of what that definition covers — and what it leaves out — is worth more than the individual product announcements suggest.
The four launches
AvePoint AgentPulse Command Center (GA, March 9)
AvePoint's AgentPulse Command Center moved from preview to general availability on March 9, with the company ringing the NASDAQ opening bell the following morning. The product covers Microsoft 365 and Google Cloud agent environments — Copilot Studio, Microsoft Foundry, SharePoint agents, and Vertex AI — managed from a single dashboard.
Capabilities include agent discovery and inventory, shadow AI detection, cost tracking, data access monitoring, sharing permission analysis, and multi-tenant compliance.
AvePoint CTO John Peluso: "Organizations are no longer exploring whether or not to deploy AI agents; they're now trying to do this responsibly and at scale. AgentPulse makes this possible by giving organizations the visibility they need to ensure that every agent operates as efficiently as possible, within governance guardrails, and without hidden costs."
OneTrust AI Governance expansion (March 9)
OneTrust (valued at $4.5 billion, backed by Insight Partners and SoftBank Vision Fund 2) announced a major expansion of its AI governance platform the same day. The expansion shifts the platform from point-in-time compliance assessments toward continuous, real-time control.
New capabilities: continuous discovery of AI agents, models, and datasets; an AI policy manager with prebuilt standards-aligned policies; real-time guardrail enforcement with violation detection.
CPTO DV Lamba: "As AI becomes more embedded across the enterprise, organizations need governance that keeps pace. With these new capabilities, OneTrust advances AI governance from point-in-time compliance to continuous, runtime control."
Netskope One AI Security (March 11)
Netskope announced its One AI Security suite on March 11. The suite has four components: One Agentic Broker (visibility and control over MCP transactions), One AI Guardrails (blocks prompt injection and jailbreaking), One AI Gateway (policy enforcement for private AI deployments), and One AI Red Teaming (adversarial simulation). The suite will be demonstrated at RSAC 2026.
CEO Sanjay Beri: "The AI Supercycle is here, demanding a new standard for high-performance security and networking... we are delivering deep, real-time protection at the speed of inference."
Bedrock Data at RSA Conference 2026 (announced March 10)
Bedrock Data announced it will present at RSA Conference 2026 on what the company calls "the hardest problem in AI security: governing the data that AI agents access, process and act on." Planned sessions include an MCP-sensitive data sentinel demonstration and a live MCP server exploitation demo, focused on sensitive data detection inside agent pipelines, audit trail creation, and policy enforcement at the data layer.
The standard taking shape
Read these four announcements together and enterprise AI agent governance — as of March 11, 2026 — covers six things:
- Observability — see what agents are doing
- Access control — limit what data agents can reach
- Shadow AI prevention — detect unauthorized agents
- MCP security — secure the protocol agents use to call tools
- Cost control — track usage and spending
- Compliance — align with regulatory frameworks
These are real problems, and these products are doing necessary work. An enterprise running agentic AI in production needs all of this. The market is building the right solutions for the risks it has identified.
But there's a category of risk that doesn't appear anywhere on the list.
The layer nobody built
Every product announced this week answers one question: Is the agent behaving correctly? None of them answers a different question: Is the document the agent is reading correct?
Walk through each launch:
AvePoint can tell you that an agent accessed a SharePoint policy document at 2:47 PM Tuesday. It cannot tell you the document hasn't been updated since 2022 and contradicts three other files in the same library.
OneTrust can enforce a real-time guardrail preventing an agent from accessing a file marked "confidential." It cannot detect that a file marked "current pricing" contains rates that expired six months ago.
Netskope can block a prompt injection attack attempting to manipulate an agent's behavior. It cannot block an agent from confidently citing an outdated clinical protocol that exists in the knowledge base and was designed to be read.
Bedrock Data can build a sentinel that detects when agents touch sensitive data. It cannot tell you whether that sensitive document is internally consistent — or contradicts another document the agent processed five minutes earlier.
One caveat worth noting: AvePoint's documentation does mention "analysis of inactive, redundant, obsolete, and trivial data to improve data quality for AI outputs." This refers to identifying stale shared files for deletion or archiving. It flags that a document might be old. It doesn't audit whether the document's contents are accurate, internally consistent, or in conflict with other documents in the same knowledge base. That's a filing cabinet cleaner, not a knowledge accuracy system.
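To make the gap concrete, here is a minimal sketch of what a knowledge-accuracy audit might look like: it flags documents past a staleness threshold and surfaces pairs of documents that assert different values for the same extracted fact. The `Doc` schema, the `last_reviewed` field, the `facts` dictionary, and the one-year threshold are all illustrative assumptions for this sketch, not any vendor's implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Doc:
    name: str
    last_reviewed: date   # hypothetical metadata field, assumed to exist
    facts: dict           # key claims extracted upstream, e.g. {"standard_rate_usd": 120}

MAX_AGE_DAYS = 365        # staleness threshold; an assumption, tune per domain

def audit(docs, today):
    findings = []
    # Pass 1: flag documents not reviewed within the staleness window
    for d in docs:
        if (today - d.last_reviewed).days > MAX_AGE_DAYS:
            findings.append(("stale", d.name))
    # Pass 2: flag cross-document contradictions, i.e. the same fact
    # key carrying different values in different documents
    seen = {}
    for d in docs:
        for key, value in d.facts.items():
            if key in seen and seen[key][1] != value:
                findings.append(("conflict", f"{seen[key][0]} vs {d.name}: {key}"))
            else:
                seen.setdefault(key, (d.name, value))
    return findings
```

Run against two hypothetical pricing files, one last reviewed in early 2024 and one current, the audit would flag the first as stale and the pair as conflicting on the shared rate fact. Real systems need semantic extraction to populate those fact keys; the point of the sketch is only that this check operates on document contents, not on agent behavior.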
The agents are being governed. The knowledge is not.
What to watch
The governance products announced this week are essential infrastructure. But governing agent behavior and governing agent knowledge are two different problems. An enterprise that deploys all four of these products can still produce systematically wrong AI outputs — because the issue isn't what agents do, it's what they know.
The missing layer is knowledge accuracy management: ensuring the documents agents read are current, consistent, and contradiction-free before the agent reads them. Mojar AI is built for this layer — active knowledge management, automated contradiction detection, and feedback-driven accuracy maintenance for enterprise document stores. It's a gap we've examined in the context of enterprise AI security and in how AI agent credentials interact with knowledge governance.
As RSAC 2026 opens March 23, the six-point governance framework defined this week will get significant conference-floor attention. Access, security, and observability are becoming table stakes. The harder question is where this conversation goes next: what happens when the documents agents are governed to read are themselves wrong? It's not a small question.