A Claude Code / Cursor style agent skill that drives an LLM through an
8-stage scoping literature review pipeline. Given a Health Informatics
topic, the agent produces a Markdown draft covering exactly 8 peer-reviewed
papers.
This is a prompt-only skill. There is no Python, no API client, no
scraper. The skill is consumed by an agent harness (Claude Code, Cursor,
or any agent that supports `SKILL.md` plus its own `web_search` tool). The
agent reads `SKILL.md`, then loads each prompt from `prompts/` in sequence
and executes that stage with its native tools.
All you need is:
- An agent harness with:
  - Filesystem read / write tools.
  - A native `web_search` tool (or equivalent).
- This repository checked out somewhere the harness can read it.
That's it. No pip install, no .env, no API keys to manage from this
side — auth happens at the harness level.
Pick the path that matches your harness.
Claude Code auto-discovers skills under ~/.claude/skills/ (user-level,
available in every project) or <project>/.claude/skills/ (project-level,
scoped to one repo). Each skill is a folder containing SKILL.md plus its
supporting files.
User-level install (Linux / macOS):
```shell
git clone https://github.com/<you>/lit-review-hinf5016.git \
  ~/.claude/skills/lit-review-hinf5016
```

User-level install (Windows / PowerShell):

```shell
git clone https://github.com/<you>/lit-review-hinf5016.git `
  "$env:USERPROFILE\.claude\skills\lit-review-hinf5016"
```

Project-level install (skill only available inside one repo):

```shell
cd <your-project>
mkdir -p .claude/skills
git clone https://github.com/<you>/lit-review-hinf5016.git \
  .claude/skills/lit-review-hinf5016
```

After cloning, restart Claude Code (or open a fresh session). Verify the skill loaded with:

```
/skills
```
You should see `lit-review-hinf5016` in the list with its description.
If you do not want to clone, you can also copy the folder manually — the
only requirement is that SKILL.md sits at
<skills-dir>/lit-review-hinf5016/SKILL.md and the prompts/ folder is
beside it.
Cursor does not yet have first-class skill support. The closest workflow
is to keep this repo open in a Cursor workspace and reference SKILL.md
explicitly when you start a chat:
```
@SKILL.md draft a scoping review on "<topic>". Save outputs under runs/<name>/.
```
Cursor's agent will read SKILL.md and follow the 8-stage workflow using
its built-in tools.
Any harness that can (a) read local files, (b) write local files, and
(c) call a web_search tool can run this skill. Point the agent at
SKILL.md and tell it to follow the workflow.
Once installed, ask the agent something like:
```
Use the lit-review-hinf5016 skill to draft a scoping review on
"FHIR-based interoperability for EHR data exchange". Save outputs under runs/smoke/.
```
The agent will:
- Read `SKILL.md` for the workflow.
- Run stages 1–8 in order, calling `web_search` for stages 2, 4, and 5.
- Persist every stage's output under your chosen run directory.
- Produce the final draft at `runs/<your-dir>/08_literature_review.md`.
- Report a summary with PRISMA counts and any failed quote-span checks.
| # | Stage | Uses web_search |
|---|---|---|
| 1 | Query builder (topic → keywords + MeSH + eligibility) | no |
| 2 | Search (PubMed + Google Scholar) | yes |
| 3 | Title / abstract screening | no |
| 4 | One-round snowball (forward + backward citations) | yes |
| 5 | 4-field extraction with verbatim quote spans | yes |
| 6 | Ranking — exactly 8 papers, diversity-constrained | no |
| 7 | Thematic synthesis | no |
| 8 | Review writer — Vancouver citations by first appearance | no |
Deliberately out of scope:
- ❌ Generate the technical report — the student writes that themselves. The skill produces the audit bundle as raw material.
- ❌ Content review or hallucination correction (verification surfaces failing quote spans; humans fix them).
- ❌ AMIA template, fonts, page-count trimming, PDF export.
- ❌ Author lists, acknowledgements, cover pages.
Every run writes these files to `runs/<your-dir>/`:

| File | Content |
|---|---|
| `00_input.json` | Topic, title, output dir, start timestamp |
| `01_search_plan.json` | Keywords, MeSH, boolean query, eligibility criteria |
| `02_candidates.json` | Raw `web_search` hits + the queries used |
| `03_screening.json` | Per-candidate include/exclude/maybe decisions |
| `04_snowball.json` | Forward + backward citation expansion |
| `05_extractions.jsonl` | 4-field extractions with verbatim quote spans |
| `06_ranking.json` | Selected 8 + rejected with reasons + diversity notes |
| `07_synthesis.json` | Themes, similarity matrix, gaps |
| `08_literature_review.md` | Final deliverable draft |
| `prisma_flow.json` | PRISMA-ScR counts + quote-span verification rate |
| `search_log.json` | Every `web_search` query string with stage label |
The technical report is your writing, not the skill's. Use these files as raw material:
- §2.1 Dataset → `prisma_flow.json` → draw a PRISMA diagram.
- §2.2 Methods → `prompts/*.md` → quote or paraphrase the prompts.
- §2.3 Experimental settings → list the model the harness used and the search log from `search_log.json`.
- §3 Results → `06_ranking.json` + `05_extractions.jsonl` → report included / excluded counts, discuss error analysis. The `quote_span_verified / quote_span_total` ratio in `prisma_flow.json` is your verification rate; one minus that ratio is your hallucination rate.
- §4 Discussion → discuss limitations of `web_search` reach (paywalls, abstract-only fallback), LLM hallucinations, unnatural tone.
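The `quote_span_verified / quote_span_total` arithmetic can be sketched in a few lines of Python. The field names follow this README's description of `prisma_flow.json`; treat them as an assumption and verify against your actual file:

```python
# Stand-in for json.load(open("runs/smoke/prisma_flow.json")), with made-up
# counts, so the sketch runs without a completed pipeline run.
flow = {"quote_span_verified": 45, "quote_span_total": 50}

# Share of extracted quote spans that verified against the source text.
verification_rate = flow["quote_span_verified"] / flow["quote_span_total"]

# The complement is the hallucination rate to report in §3.
hallucination_rate = 1.0 - verification_rate

print(f"quote spans verified: {verification_rate:.0%}")
print(f"hallucination rate:   {hallucination_rate:.0%}")
```

Swap the inline dict for a real `json.load` of your run's `prisma_flow.json` when writing up results.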
```
skill/
├── SKILL.md          # Agent-facing skill spec (with YAML frontmatter)
├── README.md         # This file
├── prompts/          # 8 prompt templates, one per stage
│   ├── 01_query_builder.md
│   ├── 02_search.md
│   ├── 03_screen.md
│   ├── 04_snowball.md
│   ├── 05_extract.md
│   ├── 06_rank.md
│   ├── 07_synthesize.md
│   └── 08_write_review.md
└── runs/             # Outputs; one subfolder per invocation
```