# Literature Review Skill

A Claude Code / Cursor-style agent skill that drives an LLM through an 8-stage scoping literature review pipeline. Given a Health Informatics topic, the agent produces a Markdown draft covering exactly 8 peer-reviewed papers.

This is a prompt-only skill: there is no Python, no API client, and no scraper. The skill is consumed by an agent harness (Claude Code, Cursor, or any agent that supports `SKILL.md` plus its own `web_search` tool). The agent reads `SKILL.md`, then sequentially loads each prompt from `prompts/` and executes that stage using its native tools.

## What you need

- An agent harness with:
  - Filesystem read / write tools.
  - A native `web_search` tool (or equivalent).
- This repository checked out somewhere the harness can read it.

That's it: no `pip install`, no `.env`, no API keys to manage from this side. Auth happens at the harness level.

## Installation

Pick the path that matches your harness.

### Claude Code (recommended)

Claude Code auto-discovers skills under `~/.claude/skills/` (user-level, available in every project) or `<project>/.claude/skills/` (project-level, scoped to one repo). Each skill is a folder containing `SKILL.md` plus its supporting files.

User-level install (Linux / macOS):

```shell
git clone https://github.com/<you>/lit-review-hinf5016.git \
  ~/.claude/skills/lit-review-hinf5016
```

User-level install (Windows / PowerShell):

```powershell
git clone https://github.com/<you>/lit-review-hinf5016.git `
  "$env:USERPROFILE\.claude\skills\lit-review-hinf5016"
```

Project-level install (skill only available inside one repo):

```shell
cd <your-project>
mkdir -p .claude/skills
git clone https://github.com/<you>/lit-review-hinf5016.git \
  .claude/skills/lit-review-hinf5016
```

After cloning, restart Claude Code (or open a fresh session). Verify the skill loaded with:

```
/skills
```

You should see `lit-review-hinf5016` in the list with its description.

If you do not want to clone, you can also copy the folder manually. The only requirement is that `SKILL.md` sits at `<skills-dir>/lit-review-hinf5016/SKILL.md` and the `prompts/` folder is beside it.
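If you want to script that layout requirement, a minimal check might look like the following. This is a sketch: the function name is ours, not part of the skill, and it only verifies the two paths named above.

```python
from pathlib import Path

def skill_installed(skills_dir: str, name: str = "lit-review-hinf5016") -> bool:
    """Check the layout Claude Code expects: SKILL.md at the skill
    root, with the prompts/ folder beside it."""
    root = Path(skills_dir) / name
    return (root / "SKILL.md").is_file() and (root / "prompts").is_dir()
```

Run it against `~/.claude/skills` (or your project's `.claude/skills`) before restarting the session.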

### Cursor

Cursor does not yet have first-class skill support. The closest workflow is to keep this repo open in a Cursor workspace and reference `SKILL.md` explicitly when you start a chat:

```
@SKILL.md draft a scoping review on "<topic>". Save outputs under runs/<name>/.
```

Cursor's agent will read `SKILL.md` and follow the 8-stage workflow using its built-in tools.

### Other agent harnesses

Any harness that can (a) read local files, (b) write local files, and (c) call a `web_search` tool can run this skill. Point the agent at `SKILL.md` and tell it to follow the workflow.

## How to invoke

Once installed, ask the agent something like:

```
Use the lit-review-hinf5016 skill to draft a scoping review on "FHIR-based interoperability for EHR data exchange". Save outputs under runs/smoke/.
```

The agent will:

1. Read `SKILL.md` for the workflow.
2. Run stages 1–8 in order, calling `web_search` for stages 2, 4, and 5.
3. Persist every stage's output under your chosen run directory.
4. Produce the final draft at `runs/<your-dir>/08_literature_review.md`.
5. Report a summary with PRISMA counts and any failed quote-span checks.

## What the skill does

| # | Stage | Uses `web_search` |
|---|-------|-------------------|
| 1 | Query builder (topic → keywords + MeSH + eligibility) | no |
| 2 | Search (PubMed + Google Scholar) | yes |
| 3 | Title / abstract screening | no |
| 4 | One-round snowball (forward + backward citations) | yes |
| 5 | 4-field extraction with verbatim quote spans | yes |
| 6 | Ranking — exactly 8 papers, diversity-constrained | no |
| 7 | Thematic synthesis | no |
| 8 | Review writer — Vancouver citations by first appearance | no |

## What the skill does NOT do

- ❌ Generate the technical report — the student writes that themselves. The skill produces the audit bundle as raw material.
- ❌ Content review or hallucination correction (verification surfaces failing quote spans; humans fix them).
- ❌ AMIA template, fonts, page-count trimming, PDF export.
- ❌ Author lists, acknowledgements, cover pages.

## Audit bundle

Every run writes these files to `runs/<your-dir>/`:

| File | Content |
|------|---------|
| `00_input.json` | Topic, title, output dir, start timestamp |
| `01_search_plan.json` | Keywords, MeSH, boolean query, eligibility criteria |
| `02_candidates.json` | Raw `web_search` hits + the queries used |
| `03_screening.json` | Per-candidate include/exclude/maybe decisions |
| `04_snowball.json` | Forward + backward citation expansion |
| `05_extractions.jsonl` | 4-field extractions with verbatim quote spans |
| `06_ranking.json` | Selected 8 + rejected with reasons + diversity notes |
| `07_synthesis.json` | Themes, similarity matrix, gaps |
| `08_literature_review.md` | Final deliverable draft |
| `prisma_flow.json` | PRISMA-ScR counts + quote-span verification rate |
| `search_log.json` | Every `web_search` query string with stage label |
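Because every run writes this same fixed set of files, a finished run can be sanity-checked by listing which of them are missing. A minimal sketch (the helper name is ours, not part of the skill):

```python
from pathlib import Path

# The eleven files every run is expected to write (see the table above).
EXPECTED_OUTPUTS = [
    "00_input.json", "01_search_plan.json", "02_candidates.json",
    "03_screening.json", "04_snowball.json", "05_extractions.jsonl",
    "06_ranking.json", "07_synthesis.json", "08_literature_review.md",
    "prisma_flow.json", "search_log.json",
]

def missing_outputs(run_dir: str) -> list[str]:
    """Return the expected audit-bundle files a run directory lacks."""
    run = Path(run_dir)
    return [name for name in EXPECTED_OUTPUTS if not (run / name).exists()]
```

An empty return value means the run completed every stage's write; anything else tells you which stage to re-run.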

## Using the audit bundle for your technical report

The technical report is your writing, not the skill's. Use these files as raw material:

- **§2.1 Dataset** → `prisma_flow.json`: draw a PRISMA diagram.
- **§2.2 Methods** → `prompts/*.md`: quote or paraphrase the prompts.
- **§2.3 Experimental settings** → list the model the harness used and the search log from `search_log.json`.
- **§3 Results** → `06_ranking.json` + `05_extractions.jsonl`: report included / excluded counts, discuss error analysis. The `quote_span_verified` / `quote_span_total` ratio in `prisma_flow.json` is your hallucination rate.
- **§4 Discussion** → discuss limitations of `web_search` reach (paywalls, abstract-only fallback), LLM hallucinations, unnatural tone.
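The hallucination rate above is a one-line calculation. This sketch assumes `quote_span_verified` and `quote_span_total` are top-level integer fields of `prisma_flow.json` (the exact JSON shape may differ in your harness output):

```python
import json

def quote_span_failure_rate(prisma_path: str) -> float:
    """1 - verified/total: the fraction of extracted quote spans that
    could NOT be found verbatim in their source (0.0 = none failed)."""
    with open(prisma_path) as f:
        prisma = json.load(f)
    total = prisma["quote_span_total"]
    if total == 0:
        return 0.0
    return 1.0 - prisma["quote_span_verified"] / total
```

For example, 6 verified spans out of 8 would give a failure rate of 0.25, which is the number to report and discuss in §3.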

## Directory structure

```
skill/
├── SKILL.md              # Agent-facing skill spec (with YAML frontmatter)
├── README.md             # This file
├── prompts/              # 8 prompt templates, one per stage
│   ├── 01_query_builder.md
│   ├── 02_search.md
│   ├── 03_screen.md
│   ├── 04_snowball.md
│   ├── 05_extract.md
│   ├── 06_rank.md
│   ├── 07_synthesize.md
│   └── 08_write_review.md
└── runs/                 # Outputs; one subfolder per invocation
```
