AutoArtsResearch

Multi-Agent Academic Research System for Arts & Social Sciences

Automated research pipeline · Web-based topic scoping · LLM-scored peer review · PDF & Word export


中文版 (Chinese version)


DEMO NOTICE

This is a demonstration project showcasing how Claude Code plugins can automate academic research workflows. The papers it produces are AI-generated drafts for reference only — they have NOT been peer-reviewed by human experts.

Please verify all content, citations, and arguments before using any output in real academic work. AI-generated text may contain hallucinated references, unsupported claims, or factual errors. Always conduct your own research and consult domain experts before submission.


What is AutoArtsResearch?

AutoArtsResearch is an automated academic research pipeline built as a Claude Code plugin. It guides users through a structured workflow — from topic selection to a submission-ready manuscript — with a focus on arts and social sciences disciplines.

Key Features

  • Structured Research Pipeline: 9-stage workflow (bootstrap → scoping → corpus → framing → evidence → argument → drafting → review → export) with human approval gates
  • Three Research Tracks: Literature review, policy analysis, and comparative case study — cumulative by design (Track C includes A + B)
  • Arts & Social Sciences Voice: Writing style tuned for humanities disciplines — interpretive headings, discursive prose, thematic framing rather than STEM-style methodology
  • LLM Peer Review: 6-dimension scoring (rigor, evidence, citations, method fit, coherence, contribution) with iterative revision loop
  • Multi-Format Export: Generates both PDF and Word (.docx) with academic formatting
  • Web Viewer: Browse projects, read papers in the browser, and download exports via a local Flask server
  • Human-in-the-Loop: 5 mandatory approval gates ensure the researcher stays in control

Quick Start

Prerequisites

  • Claude Code CLI installed
  • Python 3.10+
  • macOS, Linux, or WSL

Install Claude Code

# Install Claude Code CLI (requires Node.js 18+)
npm install -g @anthropic-ai/claude-code

See the official installation guide for details and authentication setup.

Setup

# 1. Clone the repository
git clone https://github.com/YourUsername/AutoArtsResearch.git
cd AutoArtsResearch

# 2. Create the Python environment
source setup.sh

# 3. Launch Claude Code with the plugin
claude --plugin-dir ./plugin

Usage

/ar:init                          # Start a new research project (interactive)
/ar:run workspaces/{PROJECT_ID}   # Resume the pipeline from current state
/ar:status                        # Check all project statuses
/ar:web                           # Launch the web viewer (http://localhost:5050)

The /ar:init flow:

  1. Asks for your research topic
  2. Asks which track to use (literature review / policy analysis / comparative case study)
  3. Creates a workspace and auto-launches the pipeline
  4. Runs through scoping → writing → review → export with human gates
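Behind these steps, the workspace's status.json tracks which stage the pipeline is in. A minimal sketch of such a state tracker (the stage names follow the 9-stage workflow above, but the field names and schema here are illustrative assumptions, not the plugin's actual utils/schemas.py):

```python
import json
from enum import Enum
from pathlib import Path

class Stage(str, Enum):
    # The nine pipeline stages, in order (see Pipeline Architecture).
    BOOTSTRAP = "bootstrap"
    SCOPING = "scoping"
    CORPUS = "corpus"
    FRAMING = "framing"
    EVIDENCE = "evidence"
    ARGUMENT = "argument"
    DRAFTING = "drafting"
    REVIEW = "review"
    EXPORT = "export"

ORDER = list(Stage)

def advance(status_path: Path) -> Stage:
    """Read status.json, move to the next stage, and persist the change."""
    state = json.loads(status_path.read_text())
    current = Stage(state["stage"])
    nxt = ORDER[min(ORDER.index(current) + 1, len(ORDER) - 1)]
    state["stage"] = nxt.value
    status_path.write_text(json.dumps(state, indent=2))
    return nxt
```

Persisting the stage after every transition is what lets /ar:run resume a project from wherever it last stopped.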

Pipeline Architecture

/ar:init
    |
    v
  Ask research topic + select track
    |
    v
  Create workspace
    |
    v
  Stage 1: Scoping (WebSearch-based topic refinement)
    |
    v
  [GATE 1] User approves research question & scope
    |
    v
  Stage 2: Corpus Building (15-25 web searches, source audit)
    |
    v
  [GATE 2] User approves source corpus
    |
    v
  Stage 3: Framing (literature synthesis, theory, method card)
    |
    v
  [GATE 3] User approves theory & method
    |
    v
  Stage 4: Evidence (source reading, evidence unit extraction)
    |
    v
  Stage 5: Argument (claim construction, argument tree)
    |
    v
  [GATE 4] User approves argument structure
    |
    v
  Stage 6: Drafting (paper from claims + reference verification)
    |
    v
  Stage 7: Review (6-dimension scoring + revision loop)
    |
    v
  Stage 8: Export (PDF + Word)
    |
    v
  [GATE 5] Done — view in web viewer or download files

Research Tracks (Cumulative)

| Track | Includes | Best For |
| --- | --- | --- |
| A: Literature Review | Thematic synthesis, gap analysis | Emerging fields, interdisciplinary topics |
| B: Policy Analysis | A + policy document analysis, stakeholder mapping | Government reports, institutional statements |
| C: Comparative Case Study | A + B + cross-case comparison | 2-6 cases, regional analysis |

Review Dimensions

| Dimension | What It Measures |
| --- | --- |
| Rigor | Logical soundness, analytical depth |
| Evidence Coverage | Breadth and relevance of sources |
| Citation Quality | Proper attribution, source reliability |
| Methodological Fit | Alignment between method and question |
| Coherence | Structural flow, argument consistency |
| Contribution | Originality, significance to the field |

Pass criteria: overall >= 7.0, rigor >= 8.0, citations >= 8.0.
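These criteria reduce to a simple predicate. A sketch (assuming the overall score is the mean of the six dimension scores; the reviewer's actual aggregation may differ):

```python
def passes_review(scores: dict[str, float]) -> bool:
    """Apply the review gate: overall >= 7.0, rigor >= 8.0, citations >= 8.0."""
    # Assumption: "overall" is the unweighted mean of the six dimensions.
    overall = sum(scores.values()) / len(scores)
    return (
        overall >= 7.0
        and scores["rigor"] >= 8.0
        and scores["citations"] >= 8.0
    )
```

Note that a paper averaging above 7.0 overall can still fail on a weak rigor or citation score alone, which is what triggers another pass of the revision loop.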


Project Structure

AutoArtsResearch/
├── .claude/
│   ├── agents/                    # Model tier definitions
│   │   ├── ar-heavy.md            #   Opus — orchestration, debate, review
│   │   ├── ar-standard.md         #   Opus — scoping, writing, analysis
│   │   └── ar-light.md            #   Sonnet — retrieval, formatting
│   └── skills/                    # 18 agent skills
│       ├── ar-init/               #   Interactive project setup
│       ├── ar-orchestrator/       #   Pipeline driver (9 stages, 5 gates)
│       ├── ar-scoping/            #   Topic scoping & research question
│       ├── ar-retrieval/          #   Corpus retrieval (academic, policy, media)
│       ├── ar-source-auditor/     #   Source reliability scoring & audit
│       ├── ar-lit-synthesis/      #   Literature clustering & theme mapping
│       ├── ar-theory/             #   Theoretical framework proposal
│       ├── ar-method/             #   Method card generation
│       ├── ar-reader/             #   Per-source reading & note extraction
│       ├── ar-evidence/           #   Evidence unit construction
│       ├── ar-claim-builder/      #   Claim construction & argument tree
│       ├── ar-debate/             #   Multi-role structured debate
│       ├── ar-research-writer/    #   Academic paper drafting
│       ├── ar-ref-checker/        #   Reference verification
│       ├── ar-citation-verifier/  #   Citation faithfulness check
│       ├── ar-paper-reviewer/     #   6-dimension quality scoring
│       ├── ar-pdf-exporter/       #   PDF + Word export
│       └── ar-progress-monitor/   #   Background progress tracking
├── plugin/
│   ├── .claude-plugin/            # Plugin metadata
│   └── commands/                  # /ar:init, /ar:run, /ar:status, /ar:web
├── utils/
│   ├── schemas.py                 # Pydantic models, state machine, ID generation
│   ├── md_to_pdf.py               # Markdown → academic PDF (fpdf2)
│   └── md_to_docx.py              # Markdown → Word document (python-docx)
├── web/
│   ├── app.py                     # Flask web viewer (port 5050)
│   ├── templates/                 # HTML templates
│   └── static/                    # CSS styles
├── workspaces/                    # Research project workspaces (gitignored)
├── config.example.yaml            # Configuration template
├── setup.sh                       # Environment setup script
└── CLAUDE.md                      # System prompt & pipeline documentation

Workspace Structure

workspaces/{PROJECT_ID}/
├── config.yaml                    # Project config (topic, track)
├── status.json                    # Pipeline state tracker
├── sources/
│   ├── academic/                  # Academic source records
│   ├── policy/                    # Policy source records
│   ├── media/                     # Media source records
│   └── audit/                     # Source audit reports
├── analysis/
│   ├── scoping/                   # Scoping report & summary
│   ├── literature_map/            # Thematic clusters & theory proposals
│   ├── method_cards/              # Method card & summary
│   ├── evidence/                  # Evidence units & reading notes
│   ├── claims/                    # Claim nodes & argument tree
│   └── debate_logs/               # Debate transcripts
├── drafts/
│   └── research_paper.md          # Paper draft
├── reviews/
│   ├── review_summary.md          # Review scores & feedback
│   ├── ref_check_report.json      # Reference verification
│   └── citation_verify_report.json # Citation faithfulness
├── final/
│   └── research_paper.md          # Approved final paper
└── exports/
    ├── research_paper.pdf         # PDF export
    └── research_paper.docx        # Word export
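With this layout, a project's progress can be read straight off the filesystem. A sketch that reports which key artifacts exist in a workspace (the path list mirrors the tree above; it is an illustration, not an official API):

```python
from pathlib import Path

# Key artifacts, roughly in the order the pipeline produces them.
MILESTONES = [
    "config.yaml",
    "status.json",
    "drafts/research_paper.md",
    "reviews/review_summary.md",
    "final/research_paper.md",
    "exports/research_paper.pdf",
    "exports/research_paper.docx",
]

def workspace_progress(root: Path) -> dict[str, bool]:
    """Map each milestone file to whether it exists in the workspace."""
    return {rel: (root / rel).is_file() for rel in MILESTONES}
```

This is roughly what /ar:status needs to answer: the farthest milestone present tells you how far the pipeline has run.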

Web Viewer

Launch the built-in web viewer to browse research projects:

/ar:web

Opens at http://localhost:5050 with:

  • Project list with status and track badges
  • In-browser paper reading (rendered markdown)
  • PDF and Word download buttons

TODO

  • Bibliography Generation: Automated BibTeX/reference list from cited sources
  • LaTeX Export: Export to LaTeX format for journal submission
  • Automated Scheduling: Cron-based pipeline runs for ongoing research projects
  • Vector Store: Semantic search over evidence units and source corpus

Authors


License

MIT License


Built with Claude Code
