An open framework for provider and patient-owned clinical AI
February 17, 2026
Thomas S. Anderson MD, MHA
On February 15, 2026, OpenAI hired Peter Steinberger, the creator of OpenClaw — the open-source AI agent that went from a side project to 200,000 GitHub stars in weeks. The project is moving to an open-source foundation, ensuring it remains community infrastructure rather than any single company's product. Meta, Microsoft, and OpenAI all competed for it. Sam Altman described a future that is "extremely multi-agent" — smart agents interacting with each other to do useful things for people. One developer proved that a personal AI agent — one that actually does things on your behalf — is not a research concept. It is here.
The age of AI personal agents has arrived. Every industry is about to be reshaped by a single question: who owns the agent?
In consumer tech, the answer is already playing out — open-source foundations, corporate acquisitions, platform wars. The personal agent is arriving for everyone. It will manage your calendar, your finances, your communication, your daily life. OpenClaw and its descendants will likely be the foundation.
In healthcare, the stakes could not be higher.
Currently, healthcare AI follows the same playbook as every other industry. Technology companies build it. Hospital systems buy it. Clinicians are told to use it. The result is tools optimized for revenue cycles and administrative throughput — not for clinical judgment, and not for the relationship between a provider and the human being who needs their help.
CareAgent is inspired by what OpenClaw proved is possible — that a personal AI agent can be open, extensible, and owned by the individual it serves. It will always share architectural principles with OpenClaw and evolve alongside it. This is a deliberate choice, and not just a technical one.
Imagine waking up with fever and abdominal pain that can't be ignored. Your CareAgent — the healthcare layer within your personal agent — already knows your medical history, your medications, your allergies. You authorize it to reach out for help, and it contacts your provider's CareAgent directly. The agents confer, asking you additional questions where needed. Within seconds, a plan is formed and agreed to by every participant. If that plan requires an in-person visit, provider and patient meet — fully prepared. Vital decisions are made and confirmed — human to human.
But none of this works if personal agents and clinical agents can't speak to each other. They need shared foundations — common protocols, compatible architecture, the same underlying language. The future of healthcare AI is not personal agents in one silo and clinical agents in another. It is agents built from the beginning to meet each other wherever care happens. Open, shared foundations make that future possible. Proprietary walls make it impossible.
CareAgent exists because healthcare needs its own agent framework — one governed by clinical accountability, not corporate strategy — and because that framework must be open, interoperable, and built to communicate with the personal agents that every human being will soon carry with them.
There is a fact at the center of medicine that no technology can change:
While all clinical actions can be delegated to AI agents, the risk associated with those actions cannot be transferred away from the provider.
When a physician signs an order, they accept personal liability for what happens next. When a nurse administers a medication, they are accountable. When a therapist documents a treatment plan, their license is on the line.
No AI company will assume that liability. No hospital system will absorb it. No regulatory framework will eliminate it.
The risk is irreducible. It stays with the provider. Always.
This is not a problem to be solved. It is the design constraint that makes everything else possible.
A provider who owns their AI agent — and bears personal liability for its actions — will naturally govern that agent with the same judgment they apply to every resident, scribe, and colleague working under their authority today. Risk becomes the regulator. Not a bureaucratic one imposed from the outside. An intrinsic one — aligned with the provider's self-interest and the patient's safety simultaneously.
This is the foundation. Every design decision in CareAgent flows from this principle.
Every other approach to healthcare AI governance requires building something that doesn't exist. Product liability models need new legal frameworks. FDA regulation of clinical AI software is evolving but unsettled. Institutional liability models require health systems to accept risk they have no incentive to absorb.
The provider-ownership model requires none of this — because the regulatory infrastructure already exists.
State medical boards license providers and define their scope of practice. Institutional credentialing committees approve specific privileges. Specialty boards define competent practice. The malpractice and tort system has spent decades refining how liability works when a provider delegates clinical tasks — to residents, advanced practice providers, scribes, technicians — and those frameworks apply whether the delegate is human or artificial.
A CareAgent operating under a physician's license is governed by the same structures that govern every other agent operating under that license today. None of this has been formally tested for AI — no credentialing committee has credentialed an AI agent, no malpractice carrier has written a policy naming one. But the frameworks don't need to be invented. They need to be extended. The distance between "I am liable for the notes my scribe writes" and "I am liable for the notes my CareAgent writes" is not a gap. It is a step.
By keeping risk where it has always lived — with the provider — CareAgent inherits a regulatory ecosystem refined over generations. The alternative is waiting for new frameworks to be written, debated, lobbied, and implemented. The providers who need this cannot afford to wait.
A CareAgent is a personalized AI owned by an individual participant in the care relationship — provider or patient. It is the agent that represents its owner's interests, knowledge, and authority within the clinical encounter.
On the provider side, the CareAgent is the core of their professional AI. It operates under the provider's license and authority. It learns their clinical voice, their documentation patterns, their decision-making logic. Over time, it becomes an authentic extension of its owner — not a generic product sold to a hospital system, but their agent, shaped by their practice, accountable under their liability.
Every action it takes is logged to an immutable audit trail. Every capability it has is gated by the provider's credentials. It cannot exceed the scope of the human who owns it — not because of a terms-of-service agreement, but because the architecture makes it structurally impossible.
The CareAgent does not replace the provider. It gives them back the time, the capacity, and the leverage that the modern healthcare system has stolen from them — so they can focus on what actually requires a human being: the care itself and the person who needs it.
A CareAgent is not only for providers. Every patient has one too — the healthcare layer within their personal AI agent. It is the part of their agent that understands their health: their medical history, their medications, their allergies, their goals, their preferences. It is not a separate app or portal. It lives inside the personal agent they already use for everything else in their life.
When care is needed, the patient's CareAgent communicates directly with the provider's CareAgent — sharing what's relevant, withholding what isn't, and doing so entirely on the patient's terms. The patient controls what is shared, who it is shared with, and can revoke access at any time. No intermediary. No institution deciding what the patient can see or who can see it.
The patient owns their CareAgent the same way the provider owns theirs. That symmetry is the point. Healthcare stops being something done to the patient inside a system they don't control, and becomes something done with them — each side represented by an agent they own, meeting as equals wherever care happens.
Architecturally, the patient's CareAgent must speak the same language as the provider's — the same communication protocols, the same immutable record format, the same consent and identity standards. This is why CareAgent is built to function within the broader personal agent ecosystem rather than apart from it. OpenClaw and its descendants have already established the plugin and skill architecture that makes this possible. A patient's CareAgent can be implemented as a healthcare capability within their personal agent — one that knows how to participate in clinical communication, maintain a health record that is append-only and tamper-proof, and interact with any provider's CareAgent using shared, open protocols. Without this interoperability, the vision collapses into the same siloed, institution-controlled model that exists today.
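As a sketch of what those shared protocols could look like in practice, consider a minimal message envelope. Everything below (type names, fields, payload kinds) is an illustrative assumption, not a published CareAgent specification:

```typescript
// Hypothetical sketch of a CareAgent-to-CareAgent message envelope.
// Field names are illustrative assumptions, not the published protocol.
interface AgentIdentity {
  role: "patient" | "provider";
  agentId: string;             // stable identifier for the owning agent
  credentials?: string[];      // provider credentials; omitted for patients
}

type ClinicalPayload =
  | { kind: "history-summary"; text: string }
  | { kind: "question"; text: string }
  | { kind: "plan-proposal"; text: string };

interface CareAgentMessage {
  protocolVersion: string;     // shared, open protocol version
  from: AgentIdentity;
  to: AgentIdentity;
  sentAt: string;              // ISO-8601 timestamp
  consentScope: string[];      // what the patient has authorized sharing
  payload: ClinicalPayload;
  signature: string;           // authenticity and integrity over the envelope
}
```

Carrying the consent scope inside every envelope means "sharing what's relevant, withholding what isn't" is enforced at the protocol layer, not left to policy alone.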
You didn't go into healthcare to be a data entry clerk.
Physicians spend more time documenting than healing. Nurses chart more than they care. The electronic medical record — built to serve billing and compliance — has become the centerpiece of clinical life, and the people you're trying to help have become secondary to it. Burnout isn't a personal failure. It's a structural one.
Your CareAgent is the AI that learns your way of practicing — your voice, your reasoning, your judgment. It's not a product imposed on you by your hospital system's IT department. It's yours. It works the way you work. And when it acts, it acts under your authority, because you're the one who trained it and you're the one who's accountable for it.
You are not a medical record number. You are a human being with a life, and your health is one part of that life.
In the world that is coming — and coming fast — every person will have their own personal AI agent. It will help you manage your daily life in ways we are only beginning to imagine. Your CareAgent is the part of that agent that understands your health: your history, your medications, your concerns, your goals.
When you see a healthcare provider, your CareAgent and their CareAgent will communicate. Your CareAgent will share what's relevant. Their CareAgent will help them understand your context. The clinical encounter becomes what it always should have been: a meeting between two human beings, each supported by an AI that represents their interests — the provider's clinical judgment on one side, your health and your preferences on the other.
But here is what does not change: you and your provider are the decision-makers. The agents serve. The humans decide. Your data belongs to you. Your choices belong to you. The AI is the medium. You are the authority.
The administrative overhead consuming a third of every healthcare dollar exists because the system is built on opacity and adversarial relationships between parties who don't trust each other. Prior authorization is a fax machine. Claims processing is a bureaucracy. The entire apparatus exists because no one can see what anyone else is doing, and no one trusts that anyone else is being honest.
Transparent, immutable records and direct CareAgent-to-CareAgent communication can streamline this infrastructure. Prior authorization becomes a conversation between agents before the order is placed. Claims processing simplifies because clinical justification is embedded in real-time documentation. The adversarial overhead dissolves — not through regulation, but through architecture.
Healthcare AI cannot be a black box. This is not an ideological position. It is a structural requirement.
If a provider bears personal liability for every action their CareAgent takes, they need to be able to see exactly what it is doing and how. The institution credentialing that agent needs to audit its capabilities. The patient whose data flows through it needs to trust that it operates transparently. Regulators need to verify its behavior. Researchers need to validate its safety.
Proprietary, closed-source clinical AI asks providers to accept liability for a system they cannot inspect. That is not a sustainable model. It is not even a reasonable one.
CareAgent is open source because transparency is not optional when lives are at stake. Every line of code, every architectural decision, every hardening layer is visible, auditable, and improvable by the community it serves. This is how trust is built — not through marketing, but through evidence.
CareAgent is released under the Apache 2.0 license. Anyone can use, modify, and build on this framework, including for commercial purposes. The license also includes patent protections that shield the community from future claims against the code they depend on.
The architecture described here reflects our current thinking. As the project grows, as contributors bring new perspectives, and as the underlying technology evolves, so will the design. What will not change are the principles: provider ownership, patient ownership, credential-based capability gating, immutable audit trails, and open interoperability with the broader personal agent ecosystem.
CareAgent is not a fork of any agent framework. It is a clinical activation layer — a pnpm plugin package (@careagent/provider-core) that transforms any compatible personal AI agent into a credentialed, auditable, hardened clinical agent. The host platform is a dependency, not something carried inside the CareAgent repository. There is no subtractive phase and no divergent codebase. When the host platform updates, you update the host platform; CareAgent stays compatible through the host's plugin hooks, workspace loading, and skill framework.
At the center of the architecture is CANS.md (Care Agent Nervous System) — a single file that activates clinical mode. When CANS.md is present in the workspace, the clinical layer enforces credential validation, scope boundaries, audit logging, and runtime hardening. When it is absent, the agent runs as a standard personal agent. No workspace files are renamed, replaced, or overridden; CANS.md is purely additive.
CANS.md contains the provider's identity, credentials, scope declaration, autonomy tier configuration, hardening activation flags, and consent configuration. It is where the Irreducible Risk Hypothesis lives operationally — the provider's scope of practice, encoded as machine-readable policy.
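Since the framework validates CANS.md with TypeBox schemas (noted in the design principles below), a minimal validation sketch might look like the following; the specific field names are assumptions for illustration, not the canonical schema:

```typescript
import { Type, type Static } from "@sinclair/typebox";
import { Value } from "@sinclair/typebox/value";

// Hypothetical sketch of a CANS.md schema; field names are assumptions.
const CansSchema = Type.Object({
  provider: Type.Object({
    name: Type.String(),
    npi: Type.String(),                     // national provider identifier
    credentials: Type.Array(Type.String()), // e.g. ["MD", "ABNS"]
  }),
  scope: Type.Array(Type.String()),         // declared scope of practice
  autonomyTiers: Type.Record(
    Type.String(),                          // action category
    Type.Union([
      Type.Literal("draft-only"),
      Type.Literal("draft-and-queue"),
      Type.Literal("autonomous-with-audit"),
    ])
  ),
  hardening: Type.Object({
    execAllowlist: Type.Boolean(),
    safetyGuard: Type.Boolean(),
  }),
  consent: Type.Object({
    requireExplicitPatientConsent: Type.Boolean(),
  }),
});

type Cans = Static<typeof CansSchema>;

// Parsed CANS.md content would be validated before clinical mode activates.
function validateCans(parsed: unknown): Cans {
  if (!Value.Check(CansSchema, parsed)) {
    throw new Error("CANS.md failed validation; clinical mode not activated");
  }
  return parsed;
}
```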
CareAgent is designed to be platform-portable. The adapter layer insulates all clinical logic from host platform internals, routing every interaction through a stable interface rather than raw APIs. CANS.md works alongside whatever workspace format the host platform uses:
| Platform | Workspace Files | Entry Point |
|---|---|---|
| OpenClaw | SOUL.md + AGENTS.md + USER.md | @careagent/provider-core |
| AGENTS.md standard | AGENTS.md | @careagent/provider-core/standalone |
| Claude Code (CLAUDE.md) | CLAUDE.md | @careagent/provider-core/standalone |
| Library / programmatic | None | @careagent/provider-core/core |
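A minimal sketch of such an adapter interface follows; the method names are assumptions, not the published API:

```typescript
// Hypothetical adapter interface insulating clinical logic from host internals.
// Method names are illustrative assumptions, not the published API.
interface PlatformAdapter {
  /** Resolve the workspace and locate a file (e.g. CANS.md) if present. */
  findWorkspaceFile(name: string): Promise<string | null>;

  /** Register a skill with the host platform's skill framework. */
  registerSkill(skill: { id: string; path: string }): Promise<void>;

  /** Intercept a tool invocation before the host executes it. */
  onToolInvocation(
    handler: (tool: string, args: unknown) => Promise<"allow" | "block">
  ): void;

  /** Inject clinical hard rules into the host's system prompt. */
  injectSystemRules(rules: string[]): void;
}

// Each supported platform (OpenClaw, AGENTS.md standard, Claude Code, library)
// would supply its own implementation of this one interface; clinical code
// never touches a host's raw APIs directly.
```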
From the first interaction, every action the CareAgent takes is recorded to AUDIT.log — a hash-chained JSONL file with tamper detection. Each entry captures the action taken, the timestamp, the clinical context, the autonomy tier under which it was performed, the provider's review status, and the outcome. Blocked actions are recorded too. Entries can be added but never modified or deleted; the hash chain makes any tampering structurally detectable.
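A minimal sketch of the hash-chaining mechanic, using only Node.js built-ins as the framework's design principles describe. The entry fields mirror the list above; the helper names and exact hashing scheme are assumptions:

```typescript
import { createHash } from "node:crypto";
import { appendFileSync, existsSync, readFileSync } from "node:fs";

// Entry fields mirror the description above; names are illustrative assumptions.
interface AuditEntry {
  action: string;
  timestamp: string;
  clinicalContext: string;
  autonomyTier: string;
  reviewStatus: "pending" | "approved" | "blocked";
  outcome: string;
  prevHash: string; // hash of the previous entry, forming the chain
  hash: string;     // SHA-256 over this entry's content plus prevHash
}

const GENESIS = "0".repeat(64); // chain anchor for the first entry

function lastHash(path: string): string {
  if (!existsSync(path)) return GENESIS;
  const text = readFileSync(path, "utf8").trim();
  if (!text) return GENESIS;
  const lines = text.split("\n");
  return (JSON.parse(lines[lines.length - 1]) as AuditEntry).hash;
}

// Append-only by construction: each call adds one JSONL line and never
// rewrites earlier ones. Editing or deleting any line breaks the chain.
function appendAudit(
  path: string,
  content: Omit<AuditEntry, "prevHash" | "hash">
): void {
  const prevHash = lastHash(path);
  const hash = createHash("sha256")
    .update(JSON.stringify(content) + prevHash)
    .digest("hex");
  appendFileSync(path, JSON.stringify({ ...content, prevHash, hash }) + "\n");
}
```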
Clinical skills sit alongside regular agent skills in the same directories and registries. They use the same SKILL.md format, the same YAML frontmatter, the same loading mechanism. The difference is entirely in the metadata — clinical skills carry careagent metadata requiring CANS activation and specific credentials to load. A neurosurgeon's CareAgent might run spine-postop-skill, gated by neurosurgical credentials, alongside a general calendar-management skill with no clinical gating at all. Clinical skills are privileged capabilities earned through credentialing — not downloaded from an app store. They are version-pinned, integrity-verified, and revocable.
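A sketch of what that credential gate could look like at skill-load time; the metadata field names are assumptions drawn from the description above:

```typescript
// Hypothetical careagent metadata carried in a skill's YAML frontmatter.
interface ClinicalSkillMeta {
  requiresCans: boolean;         // skill loads only when CANS.md is active
  requiredCredentials: string[]; // e.g. ["MD", "ABNS"]
  version: string;               // version-pinned
  integrityHash: string;         // integrity-verified before loading
}

function canLoadSkill(
  meta: ClinicalSkillMeta | undefined,
  cansActive: boolean,
  providerCredentials: string[]
): boolean {
  if (!meta) return true; // non-clinical skills load with no gating
  if (meta.requiresCans && !cansActive) return false;
  return meta.requiredCredentials.every((c) => providerCredentials.includes(c));
}
```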
Multiple layers of runtime hardening operate as core liability architecture — not as an afterthought. When CANS.md activates, the plugin configures restrictive tool policies, exec approvals in allowlist mode, clinical hard rules injected into the system prompt, and a safety guard that intercepts tool invocations before execution. Every hardening layer feeds into the audit trail. A clinical agent that can be prompted into unauthorized actions has liability implications that trace directly to the provider who owns it.
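As an illustration of the safety-guard layer, a minimal allowlist interceptor might look like this; the hook shape and helper names are assumptions:

```typescript
// Hypothetical safety guard: runs before any tool executes (allowlist mode).
type GuardDecision = "allow" | "block";

function makeSafetyGuard(
  allowedTools: Set<string>,
  audit: (entry: { action: string; outcome: GuardDecision }) => void
) {
  return (tool: string, _args: unknown): GuardDecision => {
    const decision: GuardDecision = allowedTools.has(tool) ? "allow" : "block";
    audit({ action: `tool:${tool}`, outcome: decision }); // blocked actions are logged too
    return decision;
  };
}

// Wired through the adapter from the earlier sketch, e.g.:
//   adapter.onToolInvocation(async (tool, args) => guard(tool, args));
```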
The same activation mechanism works for both sides of the care relationship. A provider's CANS.md declares clinical credentials, scope of practice, and autonomy tiers. A patient's CANS.md declares patient identity, health record access rules, and consent preferences. Different content, same file, same activation path. Both sides share the same communication protocols, the same immutable record format, and the same accountability architecture. This is what makes CareAgent-to-CareAgent communication possible — and why the framework must be open.
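In type terms, the two roles could share a single discriminated union: different content, same file, same activation path. The field names are assumptions:

```typescript
// Hypothetical: one CANS.md document shape per role, discriminated by `role`.
type CansDocument =
  | {
      role: "provider";
      credentials: string[];                 // clinical credentials
      scope: string[];                       // declared scope of practice
      autonomyTiers: Record<string, string>; // per-action autonomy configuration
    }
  | {
      role: "patient";
      identity: { name: string; recordId: string };
      recordAccess: { grantedTo: string[]; revocable: boolean };
      consent: { sharePreferences: string[] };
    };

// The activation path is identical for both roles: find CANS.md, validate it,
// then branch on `role` to load the provider or patient clinical layer.
```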
The CareAgent organization is structured around the separation of provider and patient concerns:
| Repository | Purpose |
|---|---|
provider-core |
Clinical activation layer for provider agents — the plugin, CANS activation, audit pipeline, onboarding, adapters |
patient-core |
Clinical activation layer for patient agents — health record, consent, agent-to-agent communication |
provider-skills |
Clinical skills for provider CareAgents (chart, order, charge, perform, interpret, educate, coordinate) |
patient-skills |
Clinical skills for patient CareAgents |
patient-chart |
Patient-owned immutable health record — the append-only ledger |
neuron |
Shared types, schemas, and protocols across the CareAgent ecosystem |
axon |
Clinical platform infrastructure — agent-to-agent protocols, credentialing, skill registry |
- Zero runtime dependencies — provider-core uses only Node.js built-ins
- CANS.md as single activation file — presence activates clinical mode; absence means standard agent behavior
- Hash-chained JSONL audit trail — every action logged with tamper detection (see the verification sketch after this list)
- Adapter insulation — all platform interactions go through a stable interface, never raw APIs
- TypeBox schemas — compile-time type safety for CANS.md validation
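To make the tamper-detection claim concrete, here is a verification walk over the chain. It reuses the AuditEntry shape and GENESIS anchor from the append sketch earlier and, like that sketch, is an assumption-level illustration:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const GENESIS = "0".repeat(64); // same chain anchor as the append sketch

// Walk the JSONL file: any edited, deleted, or reordered line breaks the chain.
function verifyAuditLog(path: string): boolean {
  const text = readFileSync(path, "utf8").trim();
  if (!text) return true; // an empty log is trivially intact
  let prev = GENESIS;
  for (const line of text.split("\n")) {
    const { hash, prevHash, ...content } = JSON.parse(line);
    if (prevHash !== prev) return false; // chain link broken
    const expected = createHash("sha256")
      .update(JSON.stringify(content) + prevHash)
      .digest("hex");
    if (hash !== expected) return false; // entry content altered
    prev = hash;
  }
  return true;
}
```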
CareAgent starts where the risk is lowest and the need is greatest: documentation.
Clinical documentation is the single largest time thief in modern healthcare, and it is also the safest place for an AI agent to begin. A documentation error is correctable. A note can be reviewed, edited, and amended before it ever reaches the chart. The provider reads what the agent wrote, and either approves it or fixes it. The feedback loop is tight, the consequences are manageable, and every encounter generates training signal that makes the agent more accurate over time.
This is where we prove the mettle of the CareAgent — in the low-risk, high-volume work that consumes providers' lives. The agent learns your clinical voice and produces notes that sound like you wrote them, because in every meaningful sense, you did.
Once trust is established through documentation, the aperture widens — carefully, and always governed by risk.
Order drafting carries higher stakes. An order can directly affect a patient. So the CareAgent drafts, but the provider approves before anything executes. The agent prepares. The human decides. Charge optimization follows a similar pattern — the rules are well-defined, the agent applies them, and periodic audit ensures accuracy.
Each step up in autonomy is earned, not assumed. The agent proves itself at one level before it is trusted with the next. This mirrors how every healthcare system already works: a new resident doesn't operate unsupervised on day one. They earn autonomy through demonstrated competence under oversight. A CareAgent earns its privileges the same way.
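One way a CANS.md might encode that earned progression; the tier names and action categories are illustrative assumptions:

```typescript
// Hypothetical autonomy tiers, ordered from least to most autonomous.
const TIERS = ["draft-only", "draft-and-queue", "autonomous-with-audit"] as const;
type Tier = (typeof TIERS)[number];

// CANS.md assigns a tier per action category. Promotion to a higher tier is
// an explicit provider decision, made after demonstrated competence.
const autonomy: Record<string, Tier> = {
  "chart.note": "draft-and-queue",      // notes drafted, provider reviews before signing
  "order.draft": "draft-only",          // orders never execute without approval
  "charge.optimize": "draft-and-queue", // applied, then periodically audited
};

function requiresHumanApproval(action: string): boolean {
  return autonomy[action] !== "autonomous-with-audit";
}
```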
As provider CareAgents mature, the patient side comes into focus. A patient's CareAgent — running as a healthcare plugin within their personal agent — begins maintaining their longitudinal health record: an append-only, immutable ledger that the patient owns and controls. Clinical encounters become CareAgent-to-CareAgent conversations. The provider's documentation writes directly to the patient's record. The patient's CareAgent translates clinical language into something understandable. The monolithic EMR becomes unnecessary — because the record is not locked inside an institution. It lives with the patient, where it belongs.
Eventually, the clinical encounter becomes what it always should have been: a conversation between two human beings — a provider and a person who needs their help — each supported by a CareAgent that has done the preparation, handled the administration, and organized the information so that the humans in the room can focus on the only thing that matters.
The judgment. The relationship. The care.
The future of healthcare AI is not corporate. It is personal. It belongs to the people who provide care and the people who receive it.
CareAgent v1.0 has shipped. The foundational components are live and available:
- provider-core — Clinical activation layer for provider agents. CANS.md activation, credential-based skill gating, immutable audit trail, runtime hardening, and platform adapters.
- patient-core — Clinical activation layer for patient agents. Health record access, consent management, and agent-to-agent communication.
- patient-chart — Patient-owned immutable health record. Append-only ledger with tamper detection.
This is a public project because the idea belongs to everyone.
If you're a provider drowning in documentation, a developer who wants to build something that matters, a patient who believes your health data should belong to you, or anyone who thinks healthcare AI should be owned by the people it serves — you're welcome here.
All CareAgent documentation lives in the .github repository — the organization-wide infrastructure layer. These documents are the canonical references for the entire ecosystem.
| Document | What It Covers |
|---|---|
| ARCHITECTURE.md | Ecosystem architecture — repo relationships, CANS activation flow, Clinical Action Taxonomy, Axon handshake protocol, platform portability |
| GLOSSARY.md | Canonical terms and definitions used across all CareAgent repositories |
| llms.txt | AI-readable documentation index following the llmstxt.org specification |
| Document | What It Covers |
|---|---|
| CONTRIBUTING.md | Step-by-step contribution guide written for clinician-developers — fork workflow, DCO sign-off, conventional commits, PR process |
| DEVELOPMENT.md | TypeScript conventions, test patterns, dev environment setup, branch strategy, and new-repo checklist |
| DOCUMENTATION-STANDARD.md | AI-friendly documentation structure, formatting rules, and cross-reference conventions |
| Document | What It Covers |
|---|---|
| GOVERNANCE.md | BDFL governance model, clinical vs. technical decision paths, contributor progression, dispute resolution |
| CODE_OF_CONDUCT.md | Contributor Covenant 2.1 with healthcare-specific enforcement |
| SECURITY.md | Vulnerability reporting with clinical safety distinction and 90-day responsible disclosure |
| SUPPORT.md | Help channels and support resources |
| Document | What It Covers |
|---|---|
| GSD-METHODOLOGY.md | How the GSD human+AI project management framework was used to plan, build, and verify this repository — with metrics, phase breakdowns, and lessons learned |
- Explore the framework: careagent/provider-core
- Start a conversation: Discussions
- Report an issue: Issues
Founded on the Irreducible Risk Hypothesis
Apache 2.0 — This belongs to everyone.