OHAS is an open standard for documenting, evaluating, and communicating claims of human authorship in textual works.
It is designed for books, essays, scripts, academic writing, competition submissions, and other text-based works with a defined scope, where stakeholders want a more rigorous alternative to unsupported declarations or unreliable AI detectors.
OHAS does not claim to prove an absolute metaphysical absence of AI use. Instead, it provides a structured framework for:
- defining the certified scope of a work
- declaring applicable policy conditions
- collecting and organizing evidence
- distinguishing weak and strong assurance claims
- supporting review, audit, and public transparency
This repository currently contains the draft v0.1 of the OHAS specification.
At this stage, OHAS should be read as an open proposal for public review, discussion, and revision. It is not yet a formal international standard, certification program, or legally recognized assurance framework.
Generative AI has made authorship claims harder to interpret. A simple statement such as “written by a human” is often too vague, while detector tools are not strong enough to serve as reliable proof.
OHAS exists to offer a more credible alternative based on:
- declared scope
- documented workflow
- evidence quality
- assurance levels
- optional audit
- transparent governance
The goal is not to produce a single binary label, but to support a family of claims with different levels of rigor.
OHAS is built around four main concepts: scope, policy profiles, assurance levels, and evidence.
Every OHAS claim should clearly define what part of a work is covered.
Examples:
- full text
- selected chapters
- submitted competition version
- certified manuscript edition
A profile defines the policy context of the claim.
Examples include:
- no generative AI in certified scope
- no AI in text creation but AI allowed outside scope
- educational strict mode
- prize / competition mode
An assurance level defines how strong the supporting evidence and controls are.
OHAS currently uses five draft levels:
- D0 — Self Declaration
- D1 — Logged Human Process
- D2 — Controlled Workflow
- D3 — Audited Human Authorship
- D4 — Certified Human-Only Secure Environment
As evidence, OHAS supports the use of:
- manifests
- version records
- author declarations
- session logs
- environment records
- audit reports
- revocation and review procedures
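To make the evidence model concrete, the sketch below shows what a claim record with an attached evidence manifest might look like. Every field name here (`ohas_version`, `scope`, `profile`, `assurance_level`, `evidence`, and so on) is an illustrative assumption, not something the draft specification defines.

```python
# Hypothetical sketch of an OHAS claim record with an evidence manifest.
# All field names are illustrative assumptions, not defined by the draft standard.
claim = {
    "ohas_version": "0.1-draft",
    "scope": {
        "work": "Example Novel",
        "covered": "full text",  # e.g. full text, selected chapters
    },
    "profile": "P1",             # No Generative AI
    "assurance_level": "D1",     # Logged Human Process
    "evidence": [
        {"type": "author_declaration", "path": "evidence/declaration.md"},
        {"type": "version_record", "path": "evidence/versions.json"},
        {"type": "session_log", "path": "evidence/sessions.log"},
    ],
}

def evidence_types(claim):
    """Return the set of evidence types attached to a claim."""
    return {item["type"] for item in claim["evidence"]}
```

A reviewer-facing tool could read such a record, resolve each evidence path, and verify that the collected material matches the declared scope and level.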
```
.
├─ OHAS_STANDARD.md or OHAS_STANDARD_v0.1.md
├─ README.md
├─ CONTRIBUTING.md
├─ CHANGELOG.md
├─ CODE_OF_CONDUCT.md
├─ LICENSE
├─ profiles/
├─ schemas/
├─ examples/
├─ governance/
├─ audit/
└─ docs/
```
The following materials should be treated as the primary normative core of the project unless the repository governance model states otherwise:
- OHAS_STANDARD.md or OHAS_STANDARD_v0.1.md
- profiles/ and schemas/, when formally stabilized
The following materials are informative unless explicitly incorporated into the normative specification:
- docs/
- audit/
- examples/
- governance and contribution guidance files
OHAS may be useful for:
- independent authors
- publishers
- literary prizes and competitions
- schools and universities
- archives and foundations
- researchers studying authorship workflows
- developers building tooling for provenance and certification support
OHAS does not emerge in a vacuum. Several adjacent initiatives already address parts of the broader problem of authorship transparency, provenance, and disclosure in the age of generative AI. OHAS is intended to complement this landscape rather than ignore it.
The Authors Guild has introduced Human Authored as a certification mark for books written by human authors rather than generated by AI. This is one of the closest public-facing initiatives in the publishing space, especially in its effort to provide readers with a recognizable signal of human authorship.
Books By People promotes certification concepts such as Organic Literature, focusing on human-created books and publisher-facing trust signals. This is also closely aligned with the cultural and editorial goals that motivate OHAS.
Technical provenance initiatives such as C2PA and related content authenticity efforts provide important building blocks for recording origin, edits, and signed metadata about digital assets. These efforts are highly relevant to OHAS as infrastructure patterns, even though they are not specific to certifying human-only authorship of textual works.
OHAS is designed as an open, GitHub-native standard, not merely as a badge, trademark, or centrally managed certification label.
Key differences include:
- **Open specification model.** The core standard is published as versioned Markdown and schema files in a public repository, allowing issues, pull requests, discussion, and transparent revision history.
- **Policy profiles.** OHAS distinguishes between different policy contexts such as no generative AI, no AI in certified text but AI outside scope, educational strict mode, and competition mode.
- **Assurance levels.** OHAS separates the claim from the strength of evidence, using assurance levels such as D0 through D4 rather than a single binary label.
- **Structured evidence model.** OHAS includes manifests, version records, example schemas, and audit-oriented evidence expectations.
- **Audit compatibility.** OHAS is designed so that issuers, institutions, publishers, and competition organizers can adopt lightweight or stronger review models without changing the underlying specification.
- **Physical environment compatibility.** OHAS explicitly contemplates higher-assurance controlled writing environments, including certified writing rooms, for use cases where stronger custody is required.
- **Interoperability potential.** OHAS is intended to remain compatible with future provenance, credential, and trust-layer integrations instead of being tied to one organization or closed certification program.
OHAS should be understood as an open framework for verifiable human authorship workflows.
It is not intended to replace every adjacent initiative. A future ecosystem could reasonably include:
- public-facing labels or marks
- publisher-operated certification programs
- credential issuers
- provenance infrastructure
- institutional audit workflows
OHAS aims to provide the shared open specification layer that such efforts could reference, extend, or implement.
The repository currently includes the following draft profiles:
- P1 — No Generative AI
- P2 — No AI in Text Creation, AI Allowed Outside Core Text
- P3 — Human Authored with Declared AI Support Outside Core Composition
- P4 — Educational Strict Mode
- P5 — Prize / Competition Mode
The repository currently includes the following draft assurance levels:
- D0 — Self Declaration
- D1 — Logged Human Process
- D2 — Controlled Workflow
- D3 — Audited Human Authorship
- D4 — Certified Human-Only Secure Environment
These levels are intended to communicate the strength of process evidence, not to imply absolute certainty.
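As an illustration of how tooling might consume the profiles and levels above, the sketch below maps each assurance level to a hypothetical minimum set of evidence types and checks a claim against it. The mapping itself is an assumption made for illustration; the draft does not yet define normative evidence requirements per level.

```python
# Hypothetical mapping from assurance levels to minimum evidence types.
# The specific requirements are illustrative assumptions, not normative.
LEVEL_REQUIREMENTS = {
    "D0": set(),                                                 # Self Declaration
    "D1": {"author_declaration", "session_log"},                 # Logged Human Process
    "D2": {"author_declaration", "session_log", "version_record"},
    "D3": {"author_declaration", "session_log", "version_record",
           "audit_report"},                                      # Audited Authorship
    "D4": {"author_declaration", "session_log", "version_record",
           "audit_report", "environment_record"},                # Secure Environment
}

def meets_level(provided_evidence, level):
    """Check whether the provided evidence types cover a level's minimum set."""
    return LEVEL_REQUIREMENTS[level] <= set(provided_evidence)
```

Separating the level definition from the check in this way keeps the "claim" and the "strength of evidence" distinct, which mirrors the design intent stated above.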
OHAS explicitly allows for higher-assurance writing workflows that combine digital controls with a physical environment.
This may include a certified writing room or similar controlled setting with features such as:
- managed workstation
- access logging
- device restrictions
- controlled network configuration
- session records
- physical-material handling procedures
This concept is especially relevant to stronger assurance claims and to institutional, educational, prize, or archival use cases.
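An environment or session record for such a controlled setting might look like the following sketch. Every field name is a hypothetical assumption for illustration; the draft does not yet fix a record format.

```python
# Hypothetical session record for a controlled writing environment.
# All field names are illustrative assumptions, not defined by the draft.
session_record = {
    "session_id": "2025-06-01-A",
    "workstation": "managed-ws-01",      # managed workstation identifier
    "network": "offline",                # controlled network configuration
    "devices_allowed": ["keyboard", "display"],
    "access_log": [
        {"event": "entry", "time": "2025-06-01T09:00:00Z"},
        {"event": "exit",  "time": "2025-06-01T12:30:00Z"},
    ],
}

def access_event_count(record):
    """Count logged access events for a session."""
    return len(record["access_log"])
```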
The project currently follows these design principles:
- Transparency over absolutism
- Open revision over opaque control
- Evidence over speculation
- Interoperability over vendor lock-in
- Proportionality of controls
- Respect for accessibility and legitimate accommodations
Contributions are welcome.
Please read:
- CONTRIBUTING.md
- CODE_OF_CONDUCT.md
- governance/maintainers.md
Useful contribution types include:
- editorial clarification
- terminology refinement
- schema improvements
- profile expansion
- audit model critique
- use-case documentation
- governance proposals
- implementation notes
If you are reviewing OHAS for the first time, useful questions include:
- Are the policy profiles sufficiently clear?
- Are the assurance levels appropriately differentiated?
- Is the evidence model realistic?
- Are the limits of the standard stated honestly enough?
- Are accessibility and accommodation concerns handled properly?
- Should any informative material be promoted to normative status?
- Is the repository structure clear and sustainable?
This project is expected to evolve through draft releases such as:
- v0.1.0-draft
- v0.2.0-draft
- v0.9.0-draft
- v1.0.0
Breaking changes, profile changes, schema changes, and major governance changes should be recorded in CHANGELOG.md.
OHAS does not:
- guarantee absolute proof of human-only creation
- rely on AI detectors as primary proof
- eliminate all possible off-workflow misconduct
- replace legal advice or institutional policy
- create certification validity by itself without an adopting issuer or governance process
OHAS is a framework for structuring claims and evidence. Its real-world value depends on how clearly it is implemented, governed, reviewed, and communicated.
A practical starting point for new readers is:
- read
docs/introduction.md - read
OHAS_STANDARD.mdorOHAS_STANDARD_v0.1.md - review the profiles in
profiles/ - inspect the schemas in
schemas/ - look at
examples/ - explore
audit/anddocs/for supporting material
This repository is currently released under the terms of the LICENSE file included in the root of the project.
For now, project contact and maintainer information should be maintained in:
governance/maintainers.md
A future version of the repository may introduce a more formal governance charter, working groups, and a release process.