
research-rabbit

An MCP server that helps students do real academic research. It searches ArXiv, Semantic Scholar, OpenAlex, and PubMed simultaneously, analyzes the literature with Claude, finds research gaps, and builds argument scaffolds — all grounded in real papers with verifiable URLs.

No hallucinated citations. No made-up abstracts. Real sources, real analysis.

Quick Start

Install and build

npm install
npm run build

Configure with Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "research-rabbit": {
      "command": "node",
      "args": ["/path/to/research-rabbit/dist/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "your-api-key-here"
      }
    }
  }
}

Configure with Cursor

Add to your Cursor MCP settings:

{
  "research-rabbit": {
    "command": "node",
    "args": ["/path/to/research-rabbit/dist/index.js"],
    "env": {
      "ANTHROPIC_API_KEY": "your-api-key-here"
    }
  }
}

Run directly

ANTHROPIC_API_KEY=your-key node dist/index.js

Tools Reference

search_papers

Search multiple academic databases simultaneously and return real paper metadata.

Parameters:

  • query (string, required): Search query. Be specific — include populations, methods, outcomes.
  • sources (array, optional): Which databases to search. Options: "arxiv", "semantic_scholar", "openalex", "pubmed". Default: all four.
  • limit (number, optional): Papers to return. Range: 1–25. Default: 10.
  • yearFrom (number, optional): Earliest publication year.

Example:

{
  "query": "social media depression teenagers longitudinal",
  "sources": ["semantic_scholar", "pubmed"],
  "limit": 10,
  "yearFrom": 2018
}

Returns: Array of Paper objects with id, source, title, authors, year, abstract, citationCount, doi, url, openAccessPdfUrl.
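The returned shape can be sketched as a TypeScript interface. This is an illustrative sketch inferred from the field list above, not the server's actual type definitions; which fields are optional is an assumption, and the sample values (author, year, citation count, URL) are placeholders.

```typescript
// Illustrative sketch of the Paper shape returned by search_papers.
// Field names come from the documented list; optionality is assumed.
interface Paper {
  id: string;                // e.g. "pubmed:38293847"
  source: string;            // "arxiv" | "semantic_scholar" | "openalex" | "pubmed"
  title: string;
  authors: string[];
  year: number;
  abstract: string;
  citationCount: number;
  doi?: string;              // not every source provides a DOI
  url: string;               // verifiable link to the paper
  openAccessPdfUrl?: string; // present only for open-access papers
}

// Placeholder example (values are hypothetical, not real search results).
const example: Paper = {
  id: "pubmed:38293847",
  source: "pubmed",
  title: "Instagram use and adolescent depression: a longitudinal study",
  authors: ["Doe, J."],
  year: 2023,
  abstract: "Background: ...",
  citationCount: 12,
  url: "https://pubmed.ncbi.nlm.nih.gov/38293847/",
};
```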


summarize_paper

Generate a structured summary of a paper from its abstract using Claude. Identifies the main claim, methodology, key evidence, and limitations — and assesses how relevant it is to the student's topic.

Parameters:

  • paperId (string, required): Paper ID from search_papers results.
  • title (string, required): Paper title.
  • abstract (string, required): Full abstract text.
  • topic (string, required): The student's research topic (used to assess relevance).

Example:

{
  "paperId": "pubmed:38293847",
  "title": "Instagram use and adolescent depression: a longitudinal study",
  "abstract": "Background: ...",
  "topic": "mental health effects of social media on teenagers"
}

Returns: PaperSummary with mainClaim, methodology, keyEvidence[], limitations[], relevanceToTopic.


map_literature

Analyze a set of papers to produce a structured map of the field. Identifies what researchers agree on, where they disagree, methodological tensions, and how thinking has evolved over time.

Parameters:

  • papers (array, required): 2–20 Paper objects from search_papers.
  • topic (string, required): The research topic.

Returns: LiteratureMap with consensus[], debates[], methodologicalTensions[], temporalTrends[]. Each entry cites specific papers using [1], [2] notation.


find_gaps

Identify what the literature is NOT studying — where a student's original contribution can live. Categorizes gaps by type and marks confidence levels.

Parameters:

  • papers (array, required): 2–20 Paper objects.
  • topic (string, required): The research topic.

Returns:

  • gaps[]: Each gap has description, type (population/temporal/methodological/geographic/theoretical/other), confidence (high/medium/low), and rationale.
  • originalContributionAngles[]: Specific suggestions for what a student could add.

build_argument

Build a structured argument scaffold for a research paper. Maps available evidence to supporting arguments, anticipates counterarguments with rebuttals, and identifies missing evidence the student still needs.

Parameters:

  • topic (string, required): The research topic.
  • thesis (string, required): The student's thesis statement.
  • papers (array, required): 1–20 Paper objects to draw evidence from.

Returns:

  • argumentScaffold: thesisStatement, supportingArguments[] (each with claim, evidence[], paperIds[]), counterarguments[] (each with claim, rebuttal, paperIds[]), suggestedStructure[].
  • missingEvidence[]: What the student still needs to find.
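The result shape can be sketched as TypeScript types. The names mirror the documented fields above; the exact definitions are an assumption, not the server's actual types.

```typescript
// Sketch of the build_argument result shape, inferred from the field
// list above. Field names mirror the documentation; details are assumed.
interface SupportingArgument { claim: string; evidence: string[]; paperIds: string[]; }
interface Counterargument { claim: string; rebuttal: string; paperIds: string[]; }

interface ArgumentScaffold {
  thesisStatement: string;
  supportingArguments: SupportingArgument[];
  counterarguments: Counterargument[];
  suggestedStructure: string[]; // proposed section order for the paper
}

interface BuildArgumentResult {
  argumentScaffold: ArgumentScaffold;
  missingEvidence: string[]; // what the student still needs to find
}
```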

format_citation

Format a paper in APA, MLA, or Chicago style using real metadata. Only formats papers returned from search_papers — never invents citations.

Parameters:

  • paper (object, required): A Paper object from search_papers.
  • style (string, required): "apa", "mla", or "chicago".

Returns: Formatted citation string.

Example output (APA):

Twenge, J., & Haidt, J. (2019). This is our chance to stop social media from harming teen girls' mental health. Semantic Scholar. https://www.semanticscholar.org/paper/abc123
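The author-joining logic can be sketched as follows. This is a hypothetical illustration of APA-style formatting (APA joins the final two authors with "&"), not the server's actual implementation; PaperMeta is a simplified stand-in for the Paper object.

```typescript
// Hypothetical sketch of APA-style citation formatting from real metadata.
// PaperMeta is a simplified stand-in for the Paper object from search_papers.
interface PaperMeta {
  title: string;
  authors: string[]; // already in "Last, F." form
  year: number;
  source: string;
  url: string;
}

function formatApa(p: PaperMeta): string {
  // APA joins the final two authors with "&".
  const authors =
    p.authors.length > 1
      ? `${p.authors.slice(0, -1).join(", ")}, & ${p.authors[p.authors.length - 1]}`
      : p.authors[0];
  return `${authors} (${p.year}). ${p.title}. ${p.source}. ${p.url}`;
}
```

Called with the Twenge and Haidt metadata, this reproduces the example output above.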

search_and_summarize

All-in-one research assistant. Searches all databases, summarizes the top papers, maps the literature, and finds gaps — in a single tool call. Best for getting oriented on a new topic quickly.

Parameters:

  • query (string, required): Research topic or question.
  • depth ("quick" | "thorough", optional): "quick" searches 5 papers; "thorough" searches 10 with full gap analysis. Default: "quick".
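
Example (an illustrative call using the documented parameters):

```json
{
  "query": "social media mental health adolescents depression anxiety",
  "depth": "thorough"
}
```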

Returns: Combined object with papers[], summaries[], literatureMap, gaps.


Research Methodology Primer

What is a literature review?

A literature review is a systematic survey of what has already been written about your topic. It does three things:

  1. Maps the terrain — what has been studied, by whom, with what methods
  2. Shows the state of knowledge — what is settled, what is debated
  3. Identifies the gap — what hasn't been studied yet, where your work fits

A strong literature review is not a list of summaries. It is a synthesis that builds a case for why your research question matters and hasn't been answered yet.

How to find research gaps

A research gap is a question the literature hasn't answered. Gaps come in several types:

  • Population gaps: Studies have only examined one demographic group (e.g., only adults, only Americans)
  • Temporal gaps: The phenomenon hasn't been studied recently, or long-term effects are unknown
  • Methodological gaps: Most studies use surveys; no one has done qualitative interviews, or vice versa
  • Geographic gaps: Findings from Western countries may not apply elsewhere
  • Theoretical gaps: Existing frameworks don't explain a phenomenon well
  • Interaction gaps: Two variables have been studied separately but never together

Use find_gaps to identify these automatically, then verify by checking whether the gap is actually present in the literature.

How to build a research argument

A research argument has four parts:

  1. Thesis: Your central claim (what you are arguing is true)
  2. Warrant: Why the claim matters (why anyone should care)
  3. Evidence: Papers that support your claim
  4. Counterargument: The strongest objection to your thesis, and your response

Use build_argument to generate a scaffold, then fill it in with your own analysis. The tool will tell you what evidence is missing so you know what to search for next.


Example Workflow: Mental Health and Social Media

Step 1: Get oriented

search_and_summarize(
  query="social media mental health adolescents depression anxiety",
  depth="thorough"
)

This returns 10 papers, summarizes each, maps the literature, and identifies gaps. You now know the main debates (correlational vs. causal evidence, passive vs. active use) and can see which populations are understudied.

Step 2: Go deeper on a specific angle

The gap analysis suggests that "effects on boys vs. girls" is understudied. Search specifically:

search_papers(
  query="social media depression gender differences adolescents boys girls",
  sources=["pubmed", "semantic_scholar"],
  limit=10,
  yearFrom=2015
)

Step 3: Summarize key papers

For each paper with a substantial abstract:

summarize_paper(
  paperId="pubmed:38293847",
  title="...",
  abstract="...",
  topic="gender differences in social media effects on adolescent mental health"
)

Step 4: Map the gender-specific literature

map_literature(
  papers=[...the papers from Step 2...],
  topic="gender differences in social media and adolescent mental health"
)

Step 5: Draft your thesis and build an argument

build_argument(
  topic="social media and adolescent mental health",
  thesis="Passive consumption of Instagram harms girls' self-esteem more than boys' due to appearance-related social comparison, while active social use affects both genders equally",
  papers=[...all collected papers...]
)

The output shows which papers support each sub-claim and which evidence is still missing (e.g., "experimental studies manipulating active vs. passive use by gender").

Step 6: Format citations for your reference list

format_citation(paper={...}, style="apa")

Responsible Use

What this tool guarantees

  • All papers are fetched from real academic APIs in real time
  • Every paper includes a verifiable URL
  • Gap confidence levels are explicit (high/medium/low)
  • Analysis is labeled as AI interpretation, not ground truth

What you should always do

  • Click the url of any paper you cite to verify it exists and says what the summary claims
  • For important arguments, read the full paper, not just the abstract
  • Cross-check citations against official style guides before final submission
  • Treat confidence: "low" gaps as hypotheses to investigate, not established facts

Academic integrity

This tool helps you find and analyze real literature faster. It does not write your paper for you, and it never should. The synthesis, argument, and original analysis must be yours. Using AI to fabricate sources is academic dishonesty; using AI to find and understand real sources is good research practice.


API Sources

| Database         | Coverage                               | Best for                     |
| ---------------- | -------------------------------------- | ---------------------------- |
| ArXiv            | CS, Math, Physics, Economics preprints | STEM, cutting-edge research  |
| Semantic Scholar | Cross-disciplinary, 200M+ papers       | Citation counts, open access |
| OpenAlex         | Cross-disciplinary, 250M+ works        | Social sciences, humanities  |
| PubMed           | Biomedical and life sciences           | Medicine, clinical research  |

All APIs are free and do not require authentication for basic usage. Semantic Scholar may rate-limit heavy usage; the server handles this gracefully by returning results from other sources.
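That degradation pattern can be sketched with Promise.allSettled: query every source concurrently and keep whichever succeed. This is an illustrative sketch of the pattern, not the server's actual code.

```typescript
// Illustrative sketch: query all sources concurrently and keep whichever
// succeed, so one rate-limited API does not sink the whole search.
type Fetcher<T> = () => Promise<T[]>;

async function searchAllSources<T>(fetchers: Fetcher<T>[]): Promise<T[]> {
  const settled = await Promise.allSettled(fetchers.map((f) => f()));
  return settled
    .filter((r): r is PromiseFulfilledResult<T[]> => r.status === "fulfilled")
    .flatMap((r) => r.value); // failed sources contribute nothing
}
```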


Development

# Install dependencies
npm install

# Run in development (no build step)
npm run dev

# Build
npm run build

# Run built version
npm start

Requirements: Node.js 18+, ANTHROPIC_API_KEY environment variable.
