
Enterprise AI Has Four Security Layers. Only Three Are Getting Built.

OpenAI just paid $119M for AI agent security. The SEC is auditing AI governance. But the most dangerous AI vulnerability — inaccurate knowledge — has no buyer yet.

6 min read • March 11, 2026
AI Security · Enterprise AI · Knowledge Management · Compliance · Financial Services

OpenAI just paid approximately $119 million for Promptfoo, an AI security startup already running inside roughly 25% of Fortune 500 companies (Bloomberg, March 9). The SEC is now formally examining how financial firms govern their AI use. Every major infrastructure vendor at Enterprise Connect this week announced new agent resilience, identity controls, or security testing tools. Enterprise AI security has never attracted this much capital or regulatory attention.

And none of it addresses the most dangerous thing that can go wrong.

The missing layer

The enterprise AI security stack has three visible layers, and they're being built fast.

Layer 1 — Infrastructure resilience. Cohesity's new integration with ServiceNow, announced this week, recovers vector databases, agent memory, and model configurations after incidents. If something corrupts your AI infrastructure, you can restore it.

Layer 2 — Identity and access. Microsoft's Agent 365, Delinea, BeyondTrust — these govern which agents get access to which systems, authenticate non-human software, and give IT visibility into what agents are doing. We wrote about this problem earlier this week: 1 in 3 enterprise AI agents run without IT approval. Layer 2 tools are the answer.

Layer 3 — Agent behavior. This is what Promptfoo does. It red-teams agents for prompt injection attacks, jailbreaks, accidental data exposure, and tool misuse. OpenAI just paid $119M for it. Worth every dollar for what it protects against.

Layer 4 — Knowledge accuracy. Nobody has a budget line for this.

Layer 4 is the question of whether the content an AI retrieves is current, consistent, and correct. Not whether the agent was hacked. Not whether it was jailbroken. Not whether an attacker got in. Whether the underlying knowledge the agent draws on to make decisions and answer questions reflects reality.

An AI agent can pass every test in Layers 1-3 and still serve a financial advisor a compliance policy that was superseded six months ago. The agent wasn't compromised. It was just wrong. And nobody built a system to catch that.

What the evidence shows

Start with what Promptfoo actually does — and what it doesn't.

Promptfoo's security testing capabilities cover prompt injection, jailbreaks, data exposure from tool outputs, and agent behavior under adversarial conditions. According to Forbes, it's designed to catch cases where agents behave unexpectedly when attacked. What it doesn't assess: whether the knowledge base the agent queries contains accurate, up-to-date information. That's not a criticism — it's a scope boundary. Layer 3 tools stop adversarial inputs. They have nothing to say about institutional document decay.

Now look at the SEC's 2026 examination priorities for financial firms (Financial Planning, March 10). Regulators are now asking whether firms have AI governance policies, whether they're monitoring employee AI use, whether they're representing AI capabilities truthfully to clients. These are Layer 2 and Layer 3 concerns. The SEC is not yet asking: "Is the information your AI retrieves from your knowledge base accurate?" That question is coming. It's just not here yet.

The Jitterbit 2026 AI Benchmark — published this week — found that 47% of IT leaders cite AI accountability rather than budget as the primary barrier to scaling agentic AI (Intelligent CIO). The same survey found the average enterprise now runs 28 AI agents, with plans to scale to 40 within a year. Firms are deploying agents faster than they're solving accountability. And accountability, so far, has meant identity and behavior. Not knowledge.

One more signal: GitLab's AI governance research, published this week in InfoQ, makes the point that detection without remediation is just noise. The same principle holds for knowledge: knowing your agent might surface wrong information isn't useful without a mechanism to prevent it. Monitoring isn't governance. Governance requires the ability to act on what you find.

There's also a trap embedded in Layer 1. Cohesity can restore a vector database to a point-in-time baseline after an incident. But if the knowledge stored there was already outdated before the incident, you've recovered an accurate snapshot of wrong information. Recovery and accuracy are different problems. The industry has a mature answer to recovery. It doesn't yet have a market leader for accuracy.

What Layer 4 actually looks like

This category of tooling is real, even if it doesn't have a budget line yet. A Layer 4 solution does a few specific things:

It runs scheduled audits across the knowledge base to identify content that's stale or no longer reflects current policy. Not a one-time review — a continuous process, because documents change and nobody tells the AI.
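As a concrete sketch, here's roughly what a scheduled staleness pass could look like in Python. Everything below is illustrative: the document schema, the 180-day window, and the IDs are assumptions, not any vendor's API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical document record. A real knowledge base would pull this
# metadata from a CMS, wiki, or document store.
@dataclass
class Document:
    doc_id: str
    title: str
    owner: str
    last_reviewed: date

def find_stale(docs: list[Document], max_age_days: int = 180) -> list[Document]:
    """Flag anything not reviewed inside the freshness window so an
    owner can confirm it still reflects current policy."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [d for d in docs if d.last_reviewed < cutoff]

docs = [
    Document("kb-101", "Wire transfer limits", "compliance", date(2025, 1, 15)),
    Document("kb-204", "Client onboarding checklist", "ops", date(2026, 2, 20)),
]

for d in find_stale(docs):
    print(f"STALE: {d.doc_id} '{d.title}' (owner: {d.owner}, "
          f"last reviewed {d.last_reviewed})")
```

The mechanics are trivial; the discipline is the point. The job runs on a schedule, and every hit routes to a named owner rather than a shared queue.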

It scans for contradictions across documents. This happens more than most organizations want to admit: last quarter's policy sits next to this quarter's update, and the AI retrieves whichever one scores higher against the query. Both live in the knowledge base. Only one is correct.
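One way to surface those collisions, sketched below: group live documents by the policy topic they claim to govern and flag any topic with more than one. The topic tags and effective dates are invented for illustration; a production scan would more likely cluster by embedding similarity.

```python
from collections import defaultdict

# Hypothetical records: each live document tagged with the policy
# topic it governs and the quarter it took effect.
docs = [
    {"doc_id": "kb-310", "topic": "expense-approval", "effective": "2025-Q4"},
    {"doc_id": "kb-377", "topic": "expense-approval", "effective": "2026-Q1"},
    {"doc_id": "kb-402", "topic": "data-retention", "effective": "2026-Q1"},
]

# If more than one live document claims the same topic, retrieval can
# return either one. That's the contradiction risk.
by_topic = defaultdict(list)
for d in docs:
    by_topic[d["topic"]].append(d)

for topic, group in by_topic.items():
    if len(group) > 1:
        ids = ", ".join(f"{d['doc_id']} ({d['effective']})" for d in group)
        print(f"CONFLICT on '{topic}': {ids}. Retire all but the latest.")
```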

It connects AI answer quality to source document health. When an AI gives a wrong answer — and users signal that, whether through feedback or downstream errors — a Layer 4 system traces back to the source document, identifies why the answer was wrong, and proposes remediation. The knowledge base improves based on what the AI gets wrong.
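The traceback half of that loop can be sketched simply, assuming each answer logs the source documents it drew on and users can flag answers as wrong. The log schema here is invented for illustration:

```python
from collections import Counter

# Hypothetical answer log: each entry records the sources behind an
# answer and whether users flagged the answer as wrong.
answer_log = [
    {"answer_id": "a1", "sources": ["kb-310"], "flagged_wrong": True},
    {"answer_id": "a2", "sources": ["kb-310", "kb-402"], "flagged_wrong": True},
    {"answer_id": "a3", "sources": ["kb-402"], "flagged_wrong": False},
]

# Count how often each source sits behind a flagged answer. Documents
# that keep showing up are the remediation candidates.
flag_counts = Counter()
for entry in answer_log:
    if entry["flagged_wrong"]:
        flag_counts.update(entry["sources"])

for doc_id, n in flag_counts.most_common():
    print(f"REVIEW {doc_id}: implicated in {n} flagged answer(s)")
```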

It handles scanned PDFs. Most financial services firms, healthcare organizations, and regulated industries have decades of documentation in PDF or image format. If the parsing is imprecise, the knowledge base is wrong from ingestion. A Layer 4 solution needs to handle this correctly at the foundation.
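A hedged sketch of what that gating could look like, assuming the open-source pytesseract wrapper for OCR; the file name and the confidence cutoff are hypothetical. Pages that parse below the threshold get held for human review instead of silently entering the knowledge base.

```python
import pytesseract           # assumes Tesseract OCR is installed
from PIL import Image

MIN_CONFIDENCE = 80.0        # illustrative cutoff, tune per corpus

def extract_page(path: str) -> tuple[str, float]:
    """OCR one scanned page; return (text, mean word confidence)."""
    data = pytesseract.image_to_data(
        Image.open(path), output_type=pytesseract.Output.DICT
    )
    words, confs = [], []
    for text, conf in zip(data["text"], data["conf"]):
        c = float(conf)
        if text.strip() and c >= 0:   # Tesseract marks non-words as -1
            words.append(text)
            confs.append(c)
    mean_conf = sum(confs) / len(confs) if confs else 0.0
    return " ".join(words), mean_conf

text, conf = extract_page("scanned_policy_p1.png")  # hypothetical file
if conf < MIN_CONFIDENCE:
    print(f"LOW-CONFIDENCE PARSE ({conf:.0f}): hold page for review")
else:
    print(f"OK ({conf:.0f}): ingest {len(text.split())} words")
```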

Platforms like Mojar AI sit in this layer — managing knowledge base accuracy, surfacing contradictions, and correcting content when AI answers fail. It's a different problem than access control or agent behavior testing, and it requires purpose-built tooling rather than an extension of Layers 1-3.

What happens next

The SEC's current AI examination focus is access and monitoring. That maps to Layers 2 and 3. The next cycle will ask about accuracy — whether the information AI systems serve to clients reflects current, verified policy. We've already seen the FTC move in this direction, clarifying that enterprise AI systems serving wrong information to consumers carry real regulatory risk under existing deceptive practices law. We've also seen the Jitterbit benchmark confirm that accountability, not budget, is what's stalling agentic AI at scale, and the knowledge layer is where that accountability gap sits.

Financial firms that build only three of the four layers are building an incomplete defense. The enterprise AI security investment cycle is real, it's accelerating, and the money going in right now is mostly well-spent. But the fourth layer has no buyer. Not yet. The firms that start thinking about knowledge accuracy now won't be scrambling to retrofit it when the next examination cycle arrives and adds "are your AI systems factually correct?" to the list.

The question isn't whether Layer 4 becomes a compliance requirement. It's when.

Frequently Asked Questions

What is Layer 4 in enterprise AI security?

Layer 4 — knowledge accuracy — is the missing piece most enterprise security frameworks don't address. It covers whether the content an AI retrieves is current, consistent, and correct. Layers 1-3 protect infrastructure resilience, identity access, and agent behavior. None of them verify the accuracy of what the AI actually knows.

What does Promptfoo do, and what doesn't it cover?

OpenAI acquired Promptfoo for approximately $119 million. Promptfoo is an AI security testing tool already used by roughly 25% of Fortune 500 companies. It red-teams AI agents for prompt injection, jailbreaks, accidental data exposure, and tool misuse — but does not assess whether the underlying knowledge base contains current or accurate information.

Why is knowledge accuracy a security concern rather than just a quality issue?

When an AI agent operates autonomously — executing workflows, advising customers, informing decisions — the accuracy of what it retrieves is an operational security concern. An agent that passes every security test can still serve a financial advisor an outdated compliance policy or trigger a workflow based on superseded procedures. The harm is real regardless of how 'secure' the agent is.

Related Resources

  • Your AI Agents Have a Credentials Problem — And That's Only Half of It
  • After March 11, Your AI Chatbot's Wrong Answers Might Be a Federal Compliance Problem
  • Amazon's AI Outage Crisis Isn't an AI Problem — It's a Knowledge Problem
  • 88% of Enterprises Say They're AI-Ready. 61% Can't Ship Because Their Data Isn't Trusted.