A twelve-session microcredential handbook —
objectives, playtest, evidence, credential.
What it is · In the handbook · Technical niches · Architecture · Quick start · Design notes
A self-contained, static handbook for The University of Alabama's AI-enhanced Educational Game Design microcredential (v2). Twelve weekly sessions, each pairing one learning objective with one designed artifact. Five non-compensatory deliverables, 25 rubric criteria, one Proficient floor.
The repository is also a reference implementation of a credential stack — Open Badges 3.0 / W3C Verifiable Credentials 2.0 / CLR 2.0 / xAPI — so the same scaffolding can be lifted into other competency-based programs without re-deriving the spec choices.
Specifications chosen on purpose. Each one does work the next one cannot.
| Spec | What it does |
|---|---|
| Open Badges 3.0 + W3C Verifiable Credentials 2.0 | Assertions conform to both. See `credential/assertion-example-v3.json`. |
| ESCO + Lightcast crosswalk | All 25 rubric criteria are mapped to ESCO v1.1.1 and Lightcast Open Skills URIs. See `credential/skills-crosswalk.json`. |
| CLR 2.0 | Whole-portfolio transcripts as a `ClrCredential`. See `clr.js`. |
| xAPI 1.0.3 | Cohort heatmap, completion funnel, Cohen's κ inter-rater reliability, and a pre/post eight-skill self-assessment stream with growth deltas — rendered against the same taxonomy the badge alignment uses. See `analytics.html`. |
| Non-compensatory rubric | One Developing on any criterion blocks the deliverable. |
| Endorsements | Third-party endorsers (external faculty, industry partners) sign their own endorsement credentials. See `credential/endorsement-template-v3.json`. |
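As an illustration of the crosswalk idea, a criterion-to-URI lookup can be as small as this. The entry shape, field names, and URIs below are placeholder assumptions; the authoritative data lives in `credential/skills-crosswalk.json`:

```javascript
// Hypothetical crosswalk entries. Field names and URIs are placeholders,
// not the repo's actual credential/skills-crosswalk.json shape.
const crosswalk = [
  {
    criterion: "D1.1",
    esco: "http://data.europa.eu/esco/skill/placeholder-1",
    lightcast: "https://lightcast.example/skills/placeholder-1",
  },
  {
    criterion: "D1.2",
    esco: "http://data.europa.eu/esco/skill/placeholder-2",
    lightcast: "https://lightcast.example/skills/placeholder-2",
  },
];

// Resolve one rubric criterion to the taxonomy URIs a badge alignment block needs.
function alignmentsFor(criterionId) {
  const entry = crosswalk.find((e) => e.criterion === criterionId);
  return entry ? [entry.esco, entry.lightcast] : [];
}
```

Because every criterion resolves to both taxonomies, the badge's alignment block and the analytics views can share a single source of truth.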
Design notes shipped alongside, not code. LTI 1.3 production wiring, Credential Engine / CTDL registry submission, AI-assisted skill tagging (prompt template v0.3 with negative-space signal), and a Kirkpatrick × CIPP program-evaluation framework — see `docs/`.
Deliberately boring stack. Static HTML / CSS / vanilla JS, no build step for the shell, no server component, no user analytics. Hosted on Cloudflare Pages at teachplay.dev. Two interactive labs are self-contained SCORM packages under minigames/.
```
Learner browser                                      Issuer (durable URL)
────────────────                                     ─────────────────────
index.html → shell.js → localStorage                 teachplay.dev/credential/
  │                                                    ├── issuer-v3.json
  ├── xapi.js ──────── xAPI 1.0.3 statements           ├── badge-class-v3.json
  │                    (local queue)                   ├── assertion-example-v3.json
  ├── analytics.html ── κ · funnel · heatmap           ├── endorsement-template-v3.json
  │                     · skills growth chart          └── skills-crosswalk.json
  ├── clr.js ───────── CLR 2.0 portfolio export           │
  └── role.js ──────── student / instructor surface       ▼
                                                     ESCO + Lightcast
                                                     (taxonomy refs)
```
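In this flow, `xapi.js` writes statements to a local queue before anything leaves the browser. A storage-agnostic sketch of such an emitter follows; `makeEmitter` and `enqueue` are illustrative names, not the repo's API, and only the `hb:xapi:*` keys come from the handbook:

```javascript
// Sketch of a local-first xAPI 1.0.3 emitter. `storage` is anything with
// getItem/setItem (window.localStorage in the browser). Names are illustrative.
const QUEUE_KEY = "hb:xapi:queue";
const ACTOR_KEY = "hb:xapi:actor";

function makeEmitter(storage) {
  return {
    enqueue(verb, activityId) {
      const queue = JSON.parse(storage.getItem(QUEUE_KEY) || "[]");
      queue.push({
        actor: {
          objectType: "Agent",
          account: {
            homePage: "https://teachplay.dev",
            name: storage.getItem(ACTOR_KEY) || "anonymous", // pseudonymous UUID
          },
        },
        verb: { id: "http://adlnet.gov/expapi/verbs/" + verb, display: { "en-US": verb } },
        object: { objectType: "Activity", id: activityId },
        timestamp: new Date().toISOString(),
      });
      storage.setItem(QUEUE_KEY, JSON.stringify(queue)); // survives reloads
      return queue.length;
    },
  };
}
```

A later flush step would POST the queue to an LRS; the reference implementation deliberately keeps it browser-local.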
```shell
# clone
git clone https://github.com/Educatian/TeachPlay.git
cd TeachPlay

# serve (any static server; python is zero-dep)
python -m http.server 8099
```

Open http://localhost:8099/. The preview launch config at `.claude/launch.json` starts the same server for in-editor preview.
To deploy your own instance: push to a fork, connect the fork to Cloudflare Pages (or any static host), and rewrite the canonical URL references in credential/*.json and the xAPI extension IRIs to your domain.
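The URL rewrite is a literal string substitution across those files. A minimal Node sketch of the core step; `rewriteDomain` is a hypothetical helper name, and the commented loop shows one way to apply it:

```javascript
// Replace every canonical-domain reference (credential URLs, xAPI extension
// IRIs) in one file's text. Literal split/join, not regex, so dots are safe.
function rewriteDomain(text, fromDomain, toDomain) {
  return text.split(fromDomain).join(toDomain);
}

// Applying it over the credential files might look like:
// const fs = require("fs");
// for (const f of fs.readdirSync("credential")) {
//   const p = "credential/" + f;
//   fs.writeFileSync(p, rewriteDomain(fs.readFileSync(p, "utf8"), "teachplay.dev", "your.domain"));
// }
```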
```
index.html            Landing: hero, sessions grid, deliverables
session-01.html …     12 session pages — one LO, one artifact each
session-12.html
rubrics.html          25-criterion rubric, non-compensatory, AI-tagging stub
alignment.html        Crosswalk: objectives ↔ deliverables ↔ skill chips
credential.html       Public credential page (OBv3, stacking, endorsement)
analytics.html        Instructor analytics — κ, funnel, skills growth
facilitator.html      Facilitator-only views
resources.html        Resource library + #labs section
read.html             Markdown viewer for companion handouts
handbook.css          Shared styles (inc. @media print)
shell.js              Sidebar, checklists, tickets, self-assessment wiring
xapi.js               xAPI 1.0.3 emitter
clr.js                CLR 2.0 export adapter
role.js               Student ⇄ Instructor role toggle
minigames.js          Inline minigame injector
credential/           Open Badges + VC + endorsement + crosswalk (JSON-LD)
docs/                 Design notes — CTDL, AI tagging, LTI, evaluation
resources/            12 markdown handouts (companion library)
minigames/            SCORM labs — orbit-sum-lab, electric-circuit-lab
screenshots/          Handbook screenshots (used in this README)
og-image.svg / .png   Social preview card (1200 × 630)
```
What the badge means and how it's shaped
- Achievement type: `Badge` with `creditsAvailable: 3`
- Alignment block: ISTE · UDL · NETP + ESCO + Lightcast — one per criterion (25 total)
- Evidence: `credentialSubject.result[]` carries one block per deliverable D1–D5 with rubric rollup
- Proof placeholder: `DataIntegrityProof` with `cryptosuite: eddsa-rdfc-2022` and `proofPurpose: assertionMethod`
- Dual publishing: OBv2 (`credential/badge-class.json`) and OBv3 (`credential/badge-class-v3.json`) live side-by-side until consumers migrate
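Put together, those pieces land in one credential roughly like this. The skeleton is abridged and illustrative: values and the exact context URL version are assumptions, and `credential/assertion-example-v3.json` is the authoritative shape:

```javascript
// Abridged OBv3 / VC 2.0 assertion skeleton. Illustrative values only.
const assertion = {
  "@context": [
    "https://www.w3.org/ns/credentials/v2",
    "https://purl.imsglobal.org/spec/ob/v3p0/context-3.0.3.json", // version may differ
  ],
  type: ["VerifiableCredential", "OpenBadgeCredential"],
  issuer: "https://teachplay.dev/credential/issuer-v3.json",
  credentialSubject: {
    type: ["AchievementSubject"],
    achievement: {
      type: ["Achievement"],
      achievementType: "Badge",
      creditsAvailable: 3,
      // alignment: one ISTE/UDL/NETP + ESCO + Lightcast entry per criterion (25 total)
    },
    result: [
      // one rollup block per deliverable D1–D5
      { type: ["Result"], resultDescription: "D1", status: "Completed" },
    ],
  },
  proof: {
    // placeholder until issuer keys are provisioned
    type: "DataIntegrityProof",
    cryptosuite: "eddsa-rdfc-2022",
    proofPurpose: "assertionMethod",
  },
};
```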
Analytics that instructors actually use
- Cohort heatmap — which rubric criteria get Developing calls most often
- Completion funnel — entered / attempted / completed / abandoned per session
- Cohen's κ — inter-rater reliability on double-rated deliverables, with calibration trigger at κ < 0.70
- Skills growth — pre/post 8-skill Likert delta per cohort, rendered as bars with Δ
- CLR export — whole-cohort or per-learner, as `ClrCredential` JSON
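The κ gate is mechanical: compare two raters' calls on the same deliverables, correct the raw agreement for chance, and flag anything under 0.70. A self-contained sketch of the statistic; `analytics.html` may compute it differently in detail:

```javascript
// Cohen's κ for two raters over the same items, plus the calibration gate.
function cohensKappa(ratesA, ratesB) {
  const n = ratesA.length;
  const labels = [...new Set([...ratesA, ...ratesB])];
  // Observed agreement: fraction of items both raters labeled identically.
  const po = ratesA.filter((r, i) => r === ratesB[i]).length / n;
  // Expected chance agreement from each rater's marginal label frequencies.
  const pe = labels.reduce((sum, label) => {
    const pA = ratesA.filter((r) => r === label).length / n;
    const pB = ratesB.filter((r) => r === label).length / n;
    return sum + pA * pB;
  }, 0);
  return (po - pe) / (1 - pe); // undefined when pe === 1 (degenerate cohort)
}

const kappa = cohensKappa(
  ["Proficient", "Proficient", "Developing", "Proficient"],
  ["Proficient", "Proficient", "Developing", "Developing"],
);
const needsCalibration = kappa < 0.7; // the trigger described above
```

On the toy data here κ comes out to 0.5, which would trip the calibration trigger.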
Persistence and privacy
All learner progress is stored in localStorage on the visitor's browser:
- `hb:s{n}:chk:{id}` — checklist item state per session
- `hb:s{n}:ticket:{id}` — exit-ticket textarea content
- `hb:done` — set of sessions marked complete
- `hb:self-assess:{pre|post}:{skill}` — 0–4 Likert self-ratings
- `hb:xapi:queue` — xAPI statement queue
- `hb:xapi:actor` — pseudonymous actor (UUID, browser-local)
No server component, no third-party analytics, no identifying data leaves the browser in the reference implementation. Clearing site data resets progress.
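Against those keys, progress writes are plain read-modify-write cycles. An illustrative sketch: the helper names are hypothetical, `storage` stands in for `window.localStorage`, and `shell.js` may serialize the set differently:

```javascript
// Read-modify-write against the hb:done key, the set of completed sessions.
function markSessionDone(storage, sessionNumber) {
  const done = new Set(JSON.parse(storage.getItem("hb:done") || "[]"));
  done.add(sessionNumber); // idempotent: re-marking a session is a no-op
  storage.setItem("hb:done", JSON.stringify([...done]));
}

function completedSessions(storage) {
  return JSON.parse(storage.getItem("hb:done") || "[]");
}
```

Clearing site data deletes every `hb:*` key, which is exactly the reset behavior described above.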
Under-building on purpose; each note names the decisions before any code ships.
| Note | What it covers |
|---|---|
| `docs/L1-ai-skill-tagging-design.md` | Prompt template (v0.3) for AI-assisted skill extraction. Evidence-linked tags, negative-space "claimed but not shown" signal, guardrails against over-reliance. |
| `docs/L1-credential-engine-registration.md` | CTDL resources to publish (`ceterms:MicroCredential`, `ceasn:CompetencyFramework`, …), field-by-field mapping, submission workflow. |
| `docs/L2-lti-1.3-design.md` | LTI 1.3 integration plan — OIDC launch, Assignment and Grade Services (AGS), Names and Role Provisioning Services (NRPS), role claim enforcement, estimated effort. |
| `docs/L3-evaluation-plan.md` | Program evaluation framework — Kirkpatrick × CIPP, EQ1–EQ5 (skill acquisition, inter-rater reliability, equity subgroups, transfer, external validity), IRB posture. |
| Session | Lab | Role |
|---|---|---|
| S3 · Objectives and crosswalk | Orbit Sum Lab (SCORM, React/Vite) | Worked crosswalk case |
| S8 · Interaction spec | Electric Circuit Lab (SCORM, Three.js) | Reference implementation |
| S10 · Audit | both labs | Audit subjects — accessibility, QA, data |
| `resources.html#labs` | both labs | Library entry |
Add a new handout
- Drop `resources/NN-slug.md` into the `resources/` folder.
- Add a `.resource` card to the relevant session's Companion reading section with:
  - `href="read.html?doc=NN-slug.md"` on the Read button
  - `href="resources/NN-slug.md" download` on the Download button
- Add a matching entry to the `resources.html` library grid.
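Assuming the card markup follows the pattern those steps describe, a new entry might look like the fragment below. Everything except the `.resource` class and the two `href` attributes is a guess at structure; copy an existing card from a session page for the real markup.

```html
<!-- Hypothetical card structure; mirror an existing .resource card in the repo. -->
<div class="resource">
  <span>NN · Handout title</span>
  <a href="read.html?doc=NN-slug.md">Read</a>
  <a href="resources/NN-slug.md" download>Download</a>
</div>
```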
Add a new lab
- Unpack the SCORM package into `minigames/<slug>/`.
- Pick the session whose learning objective the lab exemplifies.
- Add a `.resource` card inside that session's `#reading` block — use the `A` (green) or `B` (blue) badge convention established in S3/S8/S10.
- Also add a card to `resources.html#labs` so the library stays complete.
Each session page has a Print / PDF button. The print stylesheet at handbook.css:1886 hides sidebar / toolbar / nav, expands content to full width, and avoids page breaks inside blocks.
The University of Alabama, College of Education. Non-compensatory rubric design informed by the competency-based assessment literature. Credential specifications draw on the 1EdTech Open Badges 3.0 / CLR 2.0 working groups, W3C Verifiable Credentials 2.0, Credential Engine's CTDL vocabulary, and the Digital Credentials Consortium's guidance on decentralized issuer binding.
Built slowly, on purpose.



