open-human-authorship/standard

Open Human Authorship Standard (OHAS)

OHAS is an open standard for documenting, evaluating, and communicating claims of human authorship in textual works.

It is designed for books, essays, scripts, academic writing, competition submissions, and other defined text-based works where stakeholders want a more rigorous alternative to unsupported declarations or unreliable AI detectors.

OHAS does not claim to prove an absolute absence of AI use. Instead, it provides a structured framework for:

  • defining the certified scope of a work
  • declaring applicable policy conditions
  • collecting and organizing evidence
  • distinguishing weak and strong assurance claims
  • supporting review, audit, and public transparency

Project Status

This repository currently contains the draft v0.1 of the OHAS specification.

At this stage, OHAS should be read as an open proposal for public review, discussion, and revision. It is not yet a formal international standard, certification program, or legally recognized assurance framework.

Why OHAS Exists

Generative AI has made authorship claims harder to interpret. A bare statement such as “written by a human” is often too vague, while AI detector tools are too unreliable to serve as proof.

OHAS exists to offer a more credible alternative based on:

  • declared scope
  • documented workflow
  • evidence quality
  • assurance levels
  • optional audit
  • transparent governance

The goal is not to produce a single binary label, but to support a family of claims with different levels of rigor.

Core Ideas

OHAS is built around four main concepts:

1. Scope

Every OHAS claim should clearly define what part of a work is covered.

Examples:

  • full text
  • selected chapters
  • submitted competition version
  • certified manuscript edition

2. Policy Profiles

A profile defines the policy context of the claim.

Examples include:

  • no generative AI in certified scope
  • no AI in text creation but AI allowed outside scope
  • educational strict mode
  • prize / competition mode

3. Assurance Levels

An assurance level defines how strong the supporting evidence and controls are.

OHAS currently uses five draft levels:

  • D0 — Self Declaration
  • D1 — Logged Human Process
  • D2 — Controlled Workflow
  • D3 — Audited Human Authorship
  • D4 — Certified Human-Only Secure Environment

4. Evidence and Review

OHAS supports the use of:

  • manifests
  • version records
  • author declarations
  • session logs
  • environment records
  • audit reports
  • revocation and review procedures
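To make the combination of scope, profile, assurance level, and evidence concrete, here is a minimal sketch of what a claim manifest could look like. All field names and the structure below are hypothetical illustrations, not part of the v0.1 draft specification; a real manifest shape would come from schemas/ once stabilized.

```python
# Illustrative sketch only: every field name here is hypothetical and NOT
# defined by the OHAS v0.1 draft. A plain Python dict is used because it
# maps directly onto a JSON manifest.

manifest = {
    "ohas_version": "0.1.0-draft",
    "scope": {
        "work_title": "Example Novel",
        "covered": "submitted competition version",  # e.g. full text, selected chapters
    },
    "profile": "P1",          # P1 — No Generative AI
    "assurance_level": "D1",  # D1 — Logged Human Process
    "declarations": [
        {
            "author": "Jane Doe",
            "statement": "No generative AI was used within the certified scope.",
        }
    ],
    "evidence": [
        {"type": "version_record", "ref": "drafts/"},
        {"type": "session_log", "ref": "logs/2025-01.jsonl"},
    ],
}


def basic_check(m: dict) -> bool:
    """Sketch of a structural sanity check: are the required top-level keys present?"""
    required = {"ohas_version", "scope", "profile", "assurance_level", "evidence"}
    return required <= m.keys()


print(basic_check(manifest))  # True: all required keys are present
```

A tooling implementation would validate such a document against the published schemas rather than a hand-rolled key check; the sketch only shows how the four core concepts could cohere in one record.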

Repository Contents

.
├─ OHAS_STANDARD.md or OHAS_STANDARD_v0.1.md
├─ README.md
├─ CONTRIBUTING.md
├─ CHANGELOG.md
├─ CODE_OF_CONDUCT.md
├─ LICENSE
├─ profiles/
├─ schemas/
├─ examples/
├─ governance/
├─ audit/
└─ docs/

Normative Materials

The following materials should be treated as the primary normative core of the project unless the repository governance model states otherwise:

  • OHAS_STANDARD.md or OHAS_STANDARD_v0.1.md
  • profiles/
  • schemas/ when formally stabilized

Informative Materials

The following materials are informative unless explicitly incorporated into the normative specification:

  • docs/
  • audit/
  • examples/
  • governance and contribution guidance files

Intended Users

OHAS may be useful for:

  • independent authors
  • publishers
  • literary prizes and competitions
  • schools and universities
  • archives and foundations
  • researchers studying authorship workflows
  • developers building tooling for provenance and certification support

Related Initiatives and How OHAS Differs

OHAS does not emerge in a vacuum. Several adjacent initiatives already address parts of the broader problem of authorship transparency, provenance, and disclosure in the age of generative AI. OHAS is intended to complement this landscape rather than ignore it.

Adjacent Initiatives

Human Authored (Authors Guild)

The Authors Guild has introduced Human Authored as a certification mark for books written by human authors rather than generated by AI. This is one of the closest public-facing initiatives in the publishing space, especially in its effort to provide readers with a recognizable signal of human authorship.

Books By People / Organic Literature

Books By People promotes certification concepts such as Organic Literature, focusing on human-created books and publisher-facing trust signals. This is also closely aligned with the cultural and editorial goals that motivate OHAS.

C2PA and Content Provenance Initiatives

Technical provenance initiatives such as C2PA and related content authenticity efforts provide important building blocks for recording origin, edits, and signed metadata about digital assets. These efforts are highly relevant to OHAS as infrastructure patterns, even though they are not specific to certifying human-only authorship of textual works.

How OHAS Differs

OHAS is designed as an open, GitHub-native standard, not merely as a badge, trademark, or centrally managed certification label.

Key differences include:

  • Open specification model
    The core standard is published as versioned Markdown and schema files in a public repository, allowing issues, pull requests, discussion, and transparent revision history.

  • Policy profiles
    OHAS distinguishes between different policy contexts such as no generative AI, no AI in certified text but AI outside scope, educational strict mode, and competition mode.

  • Assurance levels
OHAS separates the claim itself from the strength of its supporting evidence, using the assurance levels D0 through D4 rather than a single binary label.

  • Structured evidence model
    OHAS includes manifests, version records, example schemas, and audit-oriented evidence expectations.

  • Audit compatibility
    OHAS is designed so that issuers, institutions, publishers, and competition organizers can adopt lightweight or stronger review models without changing the underlying specification.

  • Physical environment compatibility
    OHAS explicitly contemplates higher-assurance controlled writing environments, including certified writing rooms, for use cases where stronger custody is required.

  • Interoperability potential
    OHAS is intended to remain compatible with future provenance, credential, and trust-layer integrations instead of being tied to one organization or closed certification program.

Positioning

OHAS should be understood as an open framework for verifiable human authorship workflows.

It is not intended to replace every adjacent initiative. A future ecosystem could reasonably include:

  • public-facing labels or marks
  • publisher-operated certification programs
  • credential issuers
  • provenance infrastructure
  • institutional audit workflows

OHAS aims to provide the shared open specification layer that such efforts could reference, extend, or implement.

Current Draft Profiles

The repository currently includes the following draft profiles:

  • P1 — No Generative AI
  • P2 — No AI in Text Creation, AI Allowed Outside Core Text
  • P3 — Human Authored with Declared AI Support Outside Core Composition
  • P4 — Educational Strict Mode
  • P5 — Prize / Competition Mode

Current Draft Assurance Levels

The repository currently includes the following draft assurance levels:

  • D0 — Self Declaration
  • D1 — Logged Human Process
  • D2 — Controlled Workflow
  • D3 — Audited Human Authorship
  • D4 — Certified Human-Only Secure Environment

These levels are intended to communicate the strength of process evidence, not to imply absolute certainty.
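Because the levels are ordered by evidentiary strength, an adopter such as a prize or institution could compare a claimed level against a required minimum. A minimal sketch of that comparison follows; the enum and function names are assumptions for illustration, not part of the draft.

```python
from enum import IntEnum


class AssuranceLevel(IntEnum):
    """Draft OHAS assurance levels, ordered from weakest to strongest evidence."""
    D0 = 0  # Self Declaration
    D1 = 1  # Logged Human Process
    D2 = 2  # Controlled Workflow
    D3 = 3  # Audited Human Authorship
    D4 = 4  # Certified Human-Only Secure Environment


def meets_requirement(claimed: AssuranceLevel, required: AssuranceLevel) -> bool:
    # A claim satisfies a policy when its level is at least the required minimum.
    return claimed >= required


# A competition requiring D2 would accept a D3 claim but reject a D1 claim.
print(meets_requirement(AssuranceLevel.D3, AssuranceLevel.D2))  # True
print(meets_requirement(AssuranceLevel.D1, AssuranceLevel.D2))  # False
```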

Certified Writing Environment Concept

OHAS explicitly allows for higher-assurance writing workflows that combine digital controls with a physical environment.

This may include a certified writing room or similar controlled setting with features such as:

  • managed workstation
  • access logging
  • device restrictions
  • controlled network configuration
  • session records
  • physical-material handling procedures

This concept is especially relevant to stronger assurance claims and to institutional, educational, prize, or archival use cases.
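In such an environment, a session record could be as simple as one timestamped entry per writing session. The shape below is a hypothetical sketch, not a structure prescribed by the draft; the field names are assumptions.

```python
from dataclasses import dataclass, asdict


@dataclass
class SessionRecord:
    """Hypothetical shape for one controlled-environment writing session.

    Field names are illustrative assumptions, not part of the OHAS draft.
    """
    author: str
    workstation_id: str
    started_at: str        # ISO 8601 timestamp
    ended_at: str          # ISO 8601 timestamp
    network_profile: str = "offline"  # e.g. offline, allowlist-only
    notes: str = ""


record = SessionRecord(
    author="Jane Doe",
    workstation_id="room-1-ws-02",
    started_at="2025-01-10T09:00:00+00:00",
    ended_at="2025-01-10T12:30:00+00:00",
)

# asdict() yields a JSON-ready dict, suitable for an append-only log file.
print(asdict(record)["network_profile"])  # offline
```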

Design Principles

The project currently follows these design principles:

  • Transparency over absolutism
  • Open revision over opaque control
  • Evidence over speculation
  • Interoperability over vendor lock-in
  • Proportionality of controls
  • Respect for accessibility and legitimate accommodations

Contributing

Contributions are welcome.

Please read:

  • CONTRIBUTING.md
  • CODE_OF_CONDUCT.md
  • governance/maintainers.md

Useful contribution types include:

  • editorial clarification
  • terminology refinement
  • schema improvements
  • profile expansion
  • audit model critique
  • use-case documentation
  • governance proposals
  • implementation notes

Suggested Review Questions

If you are reviewing OHAS for the first time, useful questions include:

  • Are the policy profiles sufficiently clear?
  • Are the assurance levels appropriately differentiated?
  • Is the evidence model realistic?
  • Are the limits of the standard stated honestly enough?
  • Are accessibility and accommodation concerns handled properly?
  • Should any informative material be promoted to normative status?
  • Is the repository structure clear and sustainable?

Versioning and Releases

This project is expected to evolve through draft releases such as:

  • v0.1.0-draft
  • v0.2.0-draft
  • v0.9.0-draft
  • v1.0.0

Breaking changes, profile changes, schema changes, and major governance changes should be recorded in CHANGELOG.md.
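Tooling built around the standard may want to distinguish draft releases from stable ones. A small sketch using a regular expression follows; the tag pattern is inferred from the release examples above and is not a normative OHAS rule.

```python
import re

# Matches tags like v0.1.0-draft or v1.0.0. The pattern is inferred from
# the example release names above, not defined by the OHAS draft.
TAG_RE = re.compile(r"^v(\d+)\.(\d+)\.(\d+)(-draft)?$")


def is_draft_release(tag: str) -> bool:
    """Return True for a -draft tag, False for a stable tag."""
    m = TAG_RE.match(tag)
    if m is None:
        raise ValueError(f"unrecognized release tag: {tag}")
    return m.group(4) is not None


print(is_draft_release("v0.9.0-draft"))  # True
print(is_draft_release("v1.0.0"))        # False
```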

Limitations

OHAS does not:

  • guarantee absolute proof of human-only creation
  • rely on AI detectors as primary proof
  • eliminate all possible off-workflow misconduct
  • replace legal advice or institutional policy
  • confer certification validity on its own, without an adopting issuer or governance process

OHAS is a framework for structuring claims and evidence. Its real-world value depends on how clearly it is implemented, governed, reviewed, and communicated.

Getting Started

A practical starting point for new readers is:

  1. read docs/introduction.md
  2. read OHAS_STANDARD.md or OHAS_STANDARD_v0.1.md
  3. review the profiles in profiles/
  4. inspect the schemas in schemas/
  5. look at examples/
  6. explore audit/ and docs/ for supporting material

License

This repository is currently released under the terms of the LICENSE file included in the root of the project.

Contact and Governance

For now, project contact and maintainer information should be maintained in:

  • governance/maintainers.md

A future version of the repository may introduce a more formal governance charter, working groups, and a release process.
