Contact
Privacy Policy
Terms of Service

© 2026 Mojar. All rights reserved.

Free Trial with No Credit Card Needed. Some features limited or blocked.


Industry News

Epic's Agent Factory Will Deploy AI Across 85% of US Healthcare. Who's Keeping the Knowledge Accurate?

HIMSS26 solved identity and orchestration for healthcare AI. ECRI named AI the #1 patient safety threat. Nobody at the conference connected the two.

6 min read • March 11, 2026
healthcare · AI agents · patient safety · HIMSS26 · Epic · knowledge management

Epic dropped two major announcements at HIMSS26 this week. Agent Factory — a no-code, drag-and-drop builder that lets any healthcare department build and deploy AI agents — and Curiosity, their proprietary medical foundation models. Meanwhile, Imprivata announced Agentic Identity Management to govern who those agents are and what they can access.

The same week, ECRI published its 2026 Top 10 Patient Safety Concerns. AI diagnostic errors hit #1 — the first time AI has ever topped the list.

Nobody at the conference connected these two things.

The layer that's missing

The healthcare AI stack is close to complete. Epic Agent Factory handles orchestration. Imprivata handles agent identity and access governance. Curiosity and the underlying models are getting genuinely better. The identity layer is solved. The orchestration layer is solved.

What's not solved: the knowledge accuracy layer. The clinical protocols, formularies, billing codes, and compliance policies that agents actually read when they generate answers.

Here's why that matters. ECRI's central concern isn't about model capability — it's about automation bias. Clinicians are already conditioned to accept AI outputs without deep scrutiny. When an agent's source document has a contradiction, an outdated dosage, or a policy section nobody updated after the last regulatory change, the agent propagates that error confidently. The clinician confirms it. Nobody flags it. The damage accumulates downstream.

ECRI called this out plainly: "AI models are only as reliable as their training data; unexamined algorithms risk perpetuating gaps or biases that actively worsen health disparities." The concern isn't the model. The concern is what the model reads.

What Epic actually announced — and what it means at scale

Agent Factory matters because of who gets it. According to Epic's HIMSS26 presentations, 85% of Epic's US customer base is already actively using Epic AI. That's the majority of US health systems. Agent Factory puts a no-code builder in front of all of them — every department, not just IT.

A nurse manager can build an agent. A billing administrator can build an agent. A clinical educator, a pharmacy director, a compliance officer — anyone with an idea and a few hours.

Epic VP Phil Lindemann put it directly: "Our customers have so many great ideas for where AI can help. With a visual drag-and-drop builder, Agent Factory will help them to bring these ideas to life."

The early performance numbers from Epic's existing agents are real. Art (their ambient AI) detects lung cancer at a 69% rate at The Christ Hospital, versus a 46% national average. Penny (their revenue cycle AI) cut prior authorization time by 42% at Summit Health, and 92% of its AI-generated responses were accepted without edits. Emmie (their patient-facing agent) drove a 58% sustained reduction in billing customer service messages at Rush University Medical Center (HIT Consultant).

Those results are worth celebrating. They're also conditional.

Penny's 92% acceptance rate assumes the billing codes it reads are current. Art's lung cancer detection improvement assumes the clinical protocol it follows hasn't been superseded by updated FDA guidance. Emmie's reduction in billing inquiries assumes the policy document she's answering from doesn't have conflicting sections.

What happens when those assumptions fail? ECRI's January 2026 health tech hazard report documented AI chatbots that "suggested incorrect diagnoses, recommended unnecessary testing, and invented body parts." That's not a model failure. That's what happens when the knowledge layer is wrong and the agent doesn't know it.

Identity is solved. Knowledge isn't.

Imprivata's Agentic Identity Management — also announced this week — is a real solution to a real problem. AI agents need managed identities, least-privilege access, and real-time activity monitoring. Treating agents as governed entities in the same framework as human users is the right call. CEO Fran Rosch is right that "every action is secure, governed, and compliant."

Governance of who an agent is doesn't govern what that agent knows.

This is a pattern playing out across enterprise AI broadly, and one we've written about in other sectors: the identity and orchestration layers attract investment and engineering attention, while the knowledge accuracy layer has no obvious owner, so it doesn't get one. In a general enterprise context, that's a productivity problem. In healthcare, it's ECRI's #1 patient safety concern.

The credentials and governance question is half the picture. The knowledge accuracy question is the other half. At HIMSS26, only the first half had a vendor on stage.

What's actually missing

Every agent built in Agent Factory reads documents. Clinical protocols. Formularies. Billing codes. Compliance policies. Those documents live in knowledge bases maintained by whoever had bandwidth to maintain them — which, in a 400-bed health system running on staff stretched across three shifts, is not a process with a lot of rigor behind it.

Mojar's knowledge management platform exists specifically to close this gap: scanning source documents for contradictions, flagging outdated content before agents act on it, auto-remediating conflicts across the knowledge base. The idea is that the agents built on top — whether in Epic Agent Factory or anywhere else — are working from documents that have been actively kept accurate, not just uploaded and forgotten.
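To make the kind of checks described above concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not Mojar's actual implementation: it assumes documents carry a last-reviewed date and a map of drug-to-dosage entries, flags documents older than an assumed review window, and flags drugs documented with conflicting doses across documents.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical document model -- assumed for illustration only.
@dataclass
class Doc:
    name: str
    last_reviewed: date
    dosages: dict = field(default_factory=dict)  # drug -> documented dose

MAX_AGE_DAYS = 365  # assumed review-cycle policy

def audit(docs, today):
    """Return documents past their review window and drugs with
    conflicting doses across the knowledge base."""
    findings = {"stale": [], "contradiction": []}

    # Staleness: last review exceeds the policy window.
    for d in docs:
        if (today - d.last_reviewed).days > MAX_AGE_DAYS:
            findings["stale"].append(d.name)

    # Contradictions: the same drug documented with different doses.
    seen = {}  # drug -> {doc name -> dose}
    for d in docs:
        for drug, dose in d.dosages.items():
            seen.setdefault(drug, {})[d.name] = dose
    for drug, sources in seen.items():
        if len(set(sources.values())) > 1:
            findings["contradiction"].append(drug)

    return findings
```

A real system would extract these facts from unstructured protocols and policies rather than structured records, but the shape of the audit is the same: surface the stale and the contradictory before an agent reads them.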

In most industries, outdated knowledge base content is a customer experience problem. In healthcare, per ECRI, it's the year's biggest patient safety threat.

The agents are coming regardless

No health system is going to pause Agent Factory deployment to first audit every policy document in its repository. That's not realistic. HIMSS26 showed what's next — agents across departments, agents with voice interfaces, agents making clinical recommendations at scale.

The question isn't whether those agents will deploy on imperfect knowledge. They will. The question is whether anyone owns the problem of keeping that knowledge accurate after deployment — and whether that ownership comes before or after something goes wrong.

ECRI named it the #1 threat. Epic handed out the builder. The accuracy layer is still unaddressed.

Sources: ECRI 2026 Top 10 Patient Safety Concerns | HIT Consultant — Epic HIMSS26 AI Recap | Healthcare IT News — Epic VP interview | FierceHealthcare — ECRI #1 patient safety threat | MedTech Dive — ECRI January 2026 hazards | Imprivata Agentic Identity Management | HIMSS26 Conference

Related Resources

  • Enterprise AI Has Four Security Layers. Only Three Are Getting Built.
  • Your AI Agents Have a Credentials Problem — And That's Only Half of It