Industry News

Why 91% of Banks Running AI Are Doing It on Shaky Foundations

31.8% of financial institutions run AI in production. Only 9.5% rate their data infrastructure as ready. The SEC is examining that gap right now.

6 min read • March 12, 2026
Financial Services · SEC · Banking · BYOAI · AI Governance · Knowledge Management · Compliance

The Number

31.8% of financial institutions have deployed AI into production. Only 9.5% rate their existing data infrastructure as "very prepared" to support it.

That's the finding from Wolters Kluwer's Q1 2026 Banking Compliance AI Trend Report, which surveyed 148 financial institutions (Wolters Kluwer).

Do the arithmetic: roughly 1 in 3 banks is running production AI, and at most 3 in 10 of those can be doing it on a data foundation their own leadership rates as ready. Loan decisioning. Fraud detection. Compliance queries. Client-facing answers. Built on documents and data the institution's own leadership doesn't trust.
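
A quick back-of-envelope check makes that bound concrete. A minimal sketch using only the two survey figures, assuming the best case, where every "very prepared" institution is among those already in production:

```python
# Back-of-envelope bound from the two Wolters Kluwer survey figures.
in_production = 0.318   # share of institutions running AI in production
very_prepared = 0.095   # share rating their data infrastructure "very prepared"

# Best case: every "very prepared" institution is among those in production.
gap = in_production - very_prepared        # at least 22.3% of all institutions
share = gap / in_production                # roughly 70% of production deployments

print(f"At least {gap:.1%} of institutions run production AI "
      f"without 'very prepared' data infrastructure")
print(f"That is at least {share:.0%} of all production deployments")
```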

The SEC noticed. Examiners are in the field right now.

The Audit

The Wolters Kluwer numbers are worth sitting with, because they're not from a vendor selling a solution. This is self-reported confidence from compliance leaders inside the institutions themselves.

On data infrastructure, only 9.5% feel "very prepared" to support AI with what they already have. Another 48% describe themselves as "somewhat prepared" — which, in compliance language, is a sophisticated way of saying "we hope we don't get tested on this."

On strategy, just 12.2% describe their AI strategy as "well-defined and resourced" (Corporate Compliance Insights). The majority are deploying technology they don't have a mature framework to govern.

And on regulatory alignment, only 26.4% say they're confident their AI initiatives align with regulatory requirements — not somewhat confident, confident. Nearly three quarters of the industry is running AI with acknowledged uncertainty about whether it meets the rules it operates under.

What the data adds up to: financial services firms moved AI into production before they moved their data infrastructure into readiness. They knew. They moved anyway.

Now the SEC is examining that decision.

The Pattern

The compliance discussion this month is splitting across two problems that keep getting treated as separate. They're not.

The first is BYOAI. At Future Proof Citywide in Miami Beach (March 8-11), compliance experts were direct: employees using personal AI tools for work tasks is a named risk regulators are pressing on now. Alec Crawford, CEO of Verapath, put it plainly: if a contractor feeds client data into a public AI model, the RIA that hired the contractor carries the liability (Financial Planning). Not the contractor. The firm. Thomas Stewart, CEO of Hadrius, named the fix: CCOs and firm principals need to "get control" of how employees interact with AI and build real guardrails, not policies that live in a PDF nobody reads.

The second problem is the one the Wolters Kluwer data is actually measuring: the authorized enterprise AI. The official platform. The one that went through procurement, got security approved, and plugged into the firm's own document repository. That platform reads your firm's actual compliance library, your regulatory guidance documents, your procedures and policies. And per the survey, the institutions running those systems believe their data infrastructure is not fully prepared to support what they're asking it to do.

Switching from unauthorized personal AI to authorized enterprise AI doesn't fix the accuracy problem if the authorized system reads broken documents.

The compliance library with outdated guidance nobody updated when the regulation changed. The three versions of the same onboarding policy, two of which are wrong and all three of which are still in the system. The internal procedures verbally revised in a Q4 all-hands that never made it into the written documentation. This is the document estate most firms are feeding their production AI right now.

BYOAI produces AI outputs with no connection to firm documents. Enterprise AI on an ungoverned document estate produces AI outputs with confident citations to wrong firm documents. The regulators asking about the first problem will eventually ask about the second. The firms preparing only for the first are one examination away from discovering that.

The SEON data reinforces the disconnect from another angle. According to SEON via Corporate Compliance Insights, 98% of financial organizations integrate AI into fraud and AML workflows daily. Yet 94% plan to add headcount in 2026. Firms are running the AI and adding humans anyway. That's not skepticism about the technology. That's a trust deficit in the outputs — and the Wolters Kluwer data on data readiness is probably a good part of the explanation.

The Fix

This week, ComplyAI launched a compliance product specifically for financial services AI governance (March 11). The product category they're building is real and necessary: governance of how employees use AI, what tools are approved, how outputs are documented. The market is right to build it.

But notice the layer it addresses. AI use governance. Not AI knowledge governance.

The governance gap we've been watching across industries applies with particular force in financial services: the industry is building controls for how people interact with AI. Almost no one is systematically addressing whether the documents AI interacts with are worth trusting.

The distinction matters for examination preparation. The SEC asking "do you have policies for how employees use AI?" is a different question than "can you demonstrate the documents grounding your AI's compliance answers are accurate and current?" Most firms can answer the first. Few have a defensible answer to the second.

The answer to the second isn't more governance policy. It's treating the firm's knowledge base as a living system rather than a static archive. When a regulation changes, the relevant guidance documents update — not eventually, not when someone gets around to it, but as part of the workflow. When the AI produces an answer on a compliance topic, there's a traceable path from that answer back to a specific, dated, current source document. When that document contradicts another document in the same system, the conflict surfaces before the AI sees both versions as equally valid.
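
What those three properties look like in code varies by stack, but the checks themselves are mechanical. Here is a minimal sketch, with hypothetical document metadata (the PolicyDoc type and its field names are illustrative, not any particular product's schema), of two of the invariants: one live version per topic, and a dated source behind every answer:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyDoc:
    doc_id: str
    topic: str          # e.g. "client-onboarding"
    version: int
    effective: date     # date this guidance took effect
    superseded: bool    # set True when a newer version replaces this one

def current_source(docs: list[PolicyDoc], topic: str) -> PolicyDoc:
    """Return the single document an AI answer on `topic` may cite."""
    live = [d for d in docs if d.topic == topic and not d.superseded]
    if len(live) != 1:
        # Two "current" versions is exactly the conflict described above:
        # surface it before the AI treats both as equally valid.
        raise ValueError(
            f"{len(live)} live documents for '{topic}'; resolve before answering"
        )
    return live[0]
```

An answer pipeline would call current_source before retrieval and stamp doc_id, version, and effective onto every generated answer; that stamp is the traceable path described above.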

This is the infrastructure FTC examiners are also starting to probe: not just whether your AI discloses it's AI, but whether what it says is accurate. The SEC won't be far behind.

Mojar AI gives compliance teams the ability to audit their knowledge base for internal contradictions, update policy documents conversationally as regulatory guidance changes, and trace every AI-generated answer back to a specific source document with version attribution. When the examiner asks "what document grounded that AI answer?" — source attribution is the answer. Source accuracy is the precondition.

The Takeaway

The Wolters Kluwer number — 9.5% very prepared, 31.8% in production — is the most important statistic in financial services AI right now, and it's barely being discussed. The compliance industry is rightly focused on BYOAI, on AI use policies, on disclosure. The data infrastructure gap is the bigger exposure.

Firms with the most to lose in a 2026 SEC examination aren't the ones whose employees are using ChatGPT on the side. They're the ones running production AI on document estates they've never audited — confident in the tool, unaware of the foundation it's reading from.

The window to fix that before an examiner asks about it is narrowing. The SEC is examining now.

Frequently Asked Questions

What is BYOAI, and why are regulators focused on it?

BYOAI — Bring Your Own AI — refers to employees using personal AI tools (ChatGPT, etc.) for work tasks outside firm oversight. The SEC's 2026 Examination Priorities flag it as an active compliance risk: if an employee feeds client data into a public AI model, the firm is responsible for the outcome. Compliance experts at Future Proof Citywide (March 2026) identified it as one of the top AI governance issues regulators are pressing firms on now.

What do the SEC's 2026 Examination Priorities say about AI?

The SEC's 2026 Examination Priorities direct examiners to assess whether firms accurately represent their AI use to clients, have policies and procedures to monitor AI, and integrate regulatory technology appropriately. BYOAI is a specific named risk. The examination is active now — not a future deadline firms are preparing for.

What is the difference between AI use governance and AI knowledge governance?

AI use governance controls how employees interact with AI tools — policies, access, approved platforms. AI knowledge governance controls what the AI reads — ensuring the documents feeding your AI system are accurate, current, and internally consistent. Most financial institutions are building the first. Almost none are systematically addressing the second. The Wolters Kluwer data on data infrastructure readiness is measuring the gap.

How should a firm prepare its data infrastructure for an AI examination?

Start by auditing the document estate that feeds your AI system: when were files last reviewed, do any contradict each other, are any outdated relative to current regulations? Then test your AI's source attribution — can every answer be traced to a specific, current document? If the answer is no to either, your data infrastructure isn't ready for AI in production, regardless of what governance tooling sits on top.
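
That first audit pass can be largely automated. A minimal sketch, assuming each file carries a last-reviewed date and a topic label (the records and the staleness threshold below are illustrative, not from any particular system):

```python
from datetime import date, timedelta
from collections import Counter

# Illustrative records for the files feeding an AI system.
estate = [
    {"path": "policies/onboarding-v1.md", "topic": "onboarding", "last_reviewed": date(2024, 2, 1)},
    {"path": "policies/onboarding-v3.md", "topic": "onboarding", "last_reviewed": date(2026, 1, 15)},
    {"path": "guidance/aml-2023.md",      "topic": "aml",        "last_reviewed": date(2023, 6, 30)},
]

STALE_AFTER = timedelta(days=365)  # review cadence is a policy choice

# Files not reviewed within the cadence are outdated-by-default.
stale = [f["path"] for f in estate
         if date.today() - f["last_reviewed"] > STALE_AFTER]

# Multiple live files on one topic are candidates for contradiction review.
topics = Counter(f["topic"] for f in estate)
conflicts = [t for t, n in topics.items() if n > 1]

print("Stale files:", stale)
print("Topics with multiple live documents:", conflicts)
```

Neither check replaces human review; they produce the shortlist a compliance team actually reads.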

Related Resources

  • After March 11, Your AI Chatbot's Wrong Answers Might Be a Federal Compliance Problem
  • The AI Agent Governance Stack Has Four Layers. The Industry Just Built Three.
  • 88% of Enterprises Say They're AI-Ready. The Data Says Otherwise.