The Governance Layer for Clinical AI

Structured documentation for human-AI clinical decisions

When clinicians work alongside AI, the medical record goes silent on the most important part: how the decision was actually made. Evidify captures the full decision trajectory — the clinician's independent judgment, the AI recommendation, and the reasoning behind agreement or override — in a structured, tamper-evident record.

Patent Pending · Academic Validation Underway · AMIA 2026 Submitted
The Problem

No standard exists for documenting clinician-AI disagreement

Over 500 US hospitals have deployed FDA-cleared diagnostic AI. Clinicians face a legal double bind: potential liability for following an incorrect AI recommendation and for overriding a correct one. Yet the medical record captures neither scenario.

The Silent Record

When an adverse outcome involves AI, the medical record does not show whether the clinician saw the AI recommendation, agreed with it, or overrode it — or why.

The Legal Double Bind

Clinicians can be held liable for following AI that was wrong and for overriding AI that was right. Structured documentation is the only available defense in either scenario.

The Regulatory Gap

The EU AI Act requires human oversight documentation for high-risk AI systems by August 2026. No insurer or professional society has published a compliance-ready standard.

Our Approach

Capture the full decision trajectory

Evidify instruments the clinician-AI interaction as a structured, auditable record — from independent human assessment through AI disclosure to final clinical decision.

1. Independent Assessment First

The clinician's diagnostic impression is captured and locked before any AI output is revealed, establishing that clinical judgment preceded AI influence.

2. Structured AI Disclosure

The AI recommendation is presented through a controlled, gate-enforced protocol with comprehension verification and error rate transparency.

3. Documented Override or Agreement

When the clinician's final decision differs from the AI recommendation, structured reason codes and free-text rationale create a defensible record of deliberate clinical reasoning.

4. Tamper-Evident Audit Chain

Every interaction is logged to a cryptographic hash chain with sequential numbering, content hashing, and timestamp verification — research-grade and legally defensible.
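The hash-chain logging in step 4 can be sketched in a few lines of Python. This is an illustrative minimal implementation, not Evidify's actual code: the `AuditChain` class, its event names, and its entry fields are hypothetical, but they show how sequential numbering, content hashing, and linking each entry to its predecessor's hash make any after-the-fact edit detectable.

```python
import hashlib
import json
import time


def _sha256(obj: dict) -> str:
    # Hash the canonical (sorted-key) JSON form so verification is deterministic.
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


class AuditChain:
    """Append-only log in which each entry commits to the previous entry's hash."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    def append(self, event: str, content: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        entry = {
            "seq": len(self.entries),           # sequential numbering
            "ts": time.time(),                  # timestamp
            "event": event,                     # e.g. "assessment_locked"
            "content_hash": _sha256(content),   # content hashing
            "prev_hash": prev,                  # link to the prior entry
        }
        entry["hash"] = _sha256(entry)          # hash covers all fields above
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash and link; any tampering breaks the chain.
        prev = self.GENESIS
        for i, e in enumerate(self.entries):
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["seq"] != i or e["prev_hash"] != prev or e["hash"] != _sha256(body):
                return False
            prev = e["hash"]
        return True
```

In use, the event sequence itself documents that the independent assessment was locked before AI disclosure: appending `"assessment_locked"`, then `"ai_disclosed"`, then `"override_documented"` fixes that ordering in the chain, and mutating any earlier entry causes `verify()` to return `False`.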

Current Status

From research platform to governance standard

Evidify is in active development with academic validation underway and regulatory alignment across US and EU frameworks.

Validation

Platform demonstrated to human-factors researchers in academic radiology. Pilot study discussions underway at a major research university.

Publication

System demonstration and poster presentation submitted to AMIA 2026 Annual Symposium.

Intellectual Property

Patent pending on sequential disclosure methodology for clinical AI governance. U.S. App. No. 63/987,880.

Compliance

Architecture maps to HIPAA audit controls, EU AI Act Articles 12 and 14, GDPR, and 21 CFR Part 11.

Get in Touch

Let's talk

Evidify is seeking research partners, clinical validation sites, and strategic collaborators in malpractice insurance, health system risk management, and regulatory compliance.

If you're working on AI governance, documentation standards, or clinician-AI interaction research, I'd welcome a conversation.

Joshua M. Henderson, Ph.D.
Founder, Evidify LLC