An MCP server that helps students do real academic research. It searches ArXiv, Semantic Scholar, OpenAlex, and PubMed simultaneously, analyzes the literature with Claude, finds research gaps, and builds argument scaffolds — all grounded in real papers with verifiable URLs.
No hallucinated citations. No made-up abstracts. Real sources, real analysis.
```bash
npm install
npm run build
```

Add to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "research-rabbit": {
      "command": "node",
      "args": ["/path/to/research-rabbit/dist/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

Add to your Cursor MCP settings:
```json
{
  "research-rabbit": {
    "command": "node",
    "args": ["/path/to/research-rabbit/dist/index.js"],
    "env": {
      "ANTHROPIC_API_KEY": "your-api-key-here"
    }
  }
}
```

To run the server standalone:

```bash
ANTHROPIC_API_KEY=your-key node dist/index.js
```

### search_papers

Search multiple academic databases simultaneously and return real paper metadata.
Parameters:
- `query` (string, required): Search query. Be specific — include populations, methods, outcomes.
- `sources` (array, optional): Which databases to search. Options: `"arxiv"`, `"semantic_scholar"`, `"openalex"`, `"pubmed"`. Default: all four.
- `limit` (number, optional): Papers to return. Range: 1–25. Default: 10.
- `yearFrom` (number, optional): Earliest publication year.
Example:
```json
{
  "query": "social media depression teenagers longitudinal",
  "sources": ["semantic_scholar", "pubmed"],
  "limit": 10,
  "yearFrom": 2018
}
```

Returns: Array of `Paper` objects with `id`, `source`, `title`, `authors`, `year`, `abstract`, `citationCount`, `doi`, `url`, `openAccessPdfUrl`.
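An individual `Paper` object might look like this (the field values below are illustrative, not real API output; the title and ID reuse the documentation's own example):

```json
{
  "id": "pubmed:38293847",
  "source": "pubmed",
  "title": "Instagram use and adolescent depression: a longitudinal study",
  "authors": ["..."],
  "year": 2024,
  "abstract": "Background: ...",
  "citationCount": 42,
  "doi": "10.1000/...",
  "url": "https://pubmed.ncbi.nlm.nih.gov/38293847/",
  "openAccessPdfUrl": null
}
```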
### summarize_paper

Generate a structured summary of a paper from its abstract using Claude. Identifies the main claim, methodology, key evidence, and limitations — and assesses how relevant it is to the student's topic.
Parameters:
- `paperId` (string): Paper ID from `search_papers` results.
- `title` (string): Paper title.
- `abstract` (string): Full abstract text.
- `topic` (string): The student's research topic (used to assess relevance).
Example:
```json
{
  "paperId": "pubmed:38293847",
  "title": "Instagram use and adolescent depression: a longitudinal study",
  "abstract": "Background: ...",
  "topic": "mental health effects of social media on teenagers"
}
```

Returns: `PaperSummary` with `mainClaim`, `methodology`, `keyEvidence[]`, `limitations[]`, `relevanceToTopic`.
### map_literature

Analyze a set of papers to produce a structured map of the field. Identifies what researchers agree on, where they disagree, methodological tensions, and how thinking has evolved over time.
Parameters:
- `papers` (array, required): 2–20 `Paper` objects from `search_papers`.
- `topic` (string, required): The research topic.
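A call passes the papers collected earlier along with the topic. The paper objects are abbreviated here and the IDs are illustrative; in practice you pass the full `Paper` objects returned by `search_papers`:

```json
{
  "papers": [
    { "id": "pubmed:38293847", "title": "...", "abstract": "..." },
    { "id": "semantic_scholar:abc123", "title": "...", "abstract": "..." }
  ],
  "topic": "mental health effects of social media on teenagers"
}
```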
Returns: `LiteratureMap` with `consensus[]`, `debates[]`, `methodologicalTensions[]`, `temporalTrends[]`. Each entry cites specific papers using [1], [2] notation.
### find_gaps

Identify what the literature is NOT studying — where a student's original contribution can live. Categorizes gaps by type and marks confidence levels.
Parameters:
- `papers` (array, required): 2–20 `Paper` objects.
- `topic` (string, required): The research topic.
Returns:

- `gaps[]`: Each gap has `description`, `type` (population/temporal/methodological/geographic/theoretical/other), `confidence` (high/medium/low), and `rationale`.
- `originalContributionAngles[]`: Specific suggestions for what a student could add.
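A single entry in `gaps[]` might be shaped like this (the description and rationale are illustrative, not actual tool output):

```json
{
  "description": "Few studies follow the same adolescents for more than two years",
  "type": "temporal",
  "confidence": "medium",
  "rationale": "The analyzed papers all use follow-up periods of 24 months or less"
}
```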
### build_argument

Build a structured argument scaffold for a research paper. Maps available evidence to supporting arguments, anticipates counterarguments with rebuttals, and identifies missing evidence the student still needs.
Parameters:
- `topic` (string): The research topic.
- `thesis` (string): The student's thesis statement.
- `papers` (array, required): 1–20 `Paper` objects to draw evidence from.
Returns:

- `argumentScaffold`: `thesisStatement`, `supportingArguments[]` (each with `claim`, `evidence[]`, `paperIds[]`), `counterarguments[]` (each with `claim`, `rebuttal`, `paperIds[]`), `suggestedStructure[]`.
- `missingEvidence[]`: What the student still needs to find.
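A single entry in `supportingArguments[]` might be shaped like this (the claim text, evidence string, and paper ID are illustrative):

```json
{
  "claim": "Passive scrolling is more strongly linked to low self-esteem than active posting",
  "evidence": ["Longitudinal survey data reported in [2]"],
  "paperIds": ["pubmed:38293847"]
}
```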
### format_citation

Format a paper in APA, MLA, or Chicago style using real metadata. Only formats papers returned from `search_papers` — never invents citations.
Parameters:
- `paper`: A `Paper` object from `search_papers`.
- `style`: `"apa"`, `"mla"`, or `"chicago"`.
Returns: Formatted citation string.
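Example call (pass the full `Paper` object returned by `search_papers`; it is abbreviated here):

```json
{
  "paper": { "title": "...", "authors": ["Twenge, J.", "Haidt, J."], "year": 2019 },
  "style": "apa"
}
```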
Example output (APA):
```
Twenge, J., & Haidt, J. (2019). This is our chance to stop social media from harming teen girls' mental health. Semantic Scholar. https://www.semanticscholar.org/paper/abc123
```
### search_and_summarize

All-in-one research assistant. Searches all databases, summarizes the top papers, maps the literature, and finds gaps — in a single tool call. Best for getting oriented on a new topic quickly.
Parameters:
- `query` (string, required): Research topic or question.
- `depth` (`"quick"` | `"thorough"`, optional): `"quick"` searches 5 papers; `"thorough"` searches 10 with full gap analysis. Default: `"quick"`.
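Example (the query string is illustrative; any research topic works):

```json
{
  "query": "social media mental health adolescents",
  "depth": "thorough"
}
```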
Returns: Combined object with `papers[]`, `summaries[]`, `literatureMap`, `gaps`.
### What is a literature review?

A literature review is a systematic survey of what has already been written about your topic. It does three things:
- Maps the terrain — what has been studied, by whom, with what methods
- Shows the state of knowledge — what is settled, what is debated
- Identifies the gap — what hasn't been studied yet, where your work fits
A strong literature review is not a list of summaries. It is a synthesis that builds a case for why your research question matters and hasn't been answered yet.
### What is a research gap?

A research gap is a question the literature hasn't answered. Gaps come in several types:
- Population gaps: Studies have only examined one demographic group (e.g., only adults, only Americans)
- Temporal gaps: The phenomenon hasn't been studied recently, or long-term effects are unknown
- Methodological gaps: Most studies use surveys; no one has done qualitative interviews, or vice versa
- Geographic gaps: Findings from Western countries may not apply elsewhere
- Theoretical gaps: Existing frameworks don't explain a phenomenon well
- Interaction gaps: Two variables have been studied separately but never together
Use `find_gaps` to identify these automatically, then verify by checking whether the gap is actually present in the literature.
### What makes a research argument?

A research argument has four parts:
- Thesis: Your central claim (what you are arguing is true)
- Warrant: Why the claim matters (why anyone should care)
- Evidence: Papers that support your claim
- Counterargument: The strongest objection to your thesis, and your response
Use `build_argument` to generate a scaffold, then fill it in with your own analysis. The tool will tell you what evidence is missing so you know what to search for next.
**Step 1: Get oriented**

```
search_and_summarize(
  query="social media mental health adolescents depression anxiety",
  depth="thorough"
)
```
This returns 10 papers, summarizes each, maps the literature, and identifies gaps. You now know the main debates (correlational vs. causal evidence, passive vs. active use) and can see which populations are understudied.
**Step 2: Search the gap directly**

The gap analysis suggests that "effects on boys vs. girls" is understudied. Search specifically:

```
search_papers(
  query="social media depression gender differences adolescents boys girls",
  sources=["pubmed", "semantic_scholar"],
  limit=10,
  yearFrom=2015
)
```
**Step 3: Summarize the key papers**

For each paper with a substantial abstract:

```
summarize_paper(
  paperId="pubmed:38293847",
  title="...",
  abstract="...",
  topic="gender differences in social media effects on adolescent mental health"
)
```
**Step 4: Map the literature**

```
map_literature(
  papers=[...the papers from Step 2...],
  topic="gender differences in social media and adolescent mental health"
)
```
**Step 5: Build the argument**

```
build_argument(
  topic="social media and adolescent mental health",
  thesis="Passive consumption of Instagram harms girls' self-esteem more than boys' due to appearance-related social comparison, while active social use affects both genders equally",
  papers=[...all collected papers...]
)
```
The output shows which papers support each sub-claim and which evidence is still missing (e.g., "experimental studies manipulating active vs. passive use by gender").
**Step 6: Format your citations**

```
format_citation(paper={...}, style="apa")
```
- All papers are fetched from real academic APIs in real time
- Every paper includes a verifiable URL
- Gap confidence levels are explicit (high/medium/low)
- Analysis is labeled as AI interpretation, not ground truth
- Click the `url` of any paper you cite to verify it exists and says what the summary claims
- For important arguments, read the full paper, not just the abstract
- Cross-check citations against official style guides before final submission
- Treat `confidence: "low"` gaps as hypotheses to investigate, not established facts
This tool helps you find and analyze real literature faster. It does not write your paper for you, and it never should. The synthesis, argument, and original analysis must be yours. Using AI to fabricate sources is academic dishonesty; using AI to find and understand real sources is good research practice.
| Database | Coverage | Best for |
|---|---|---|
| ArXiv | CS, Math, Physics, Economics preprints | STEM, cutting-edge research |
| Semantic Scholar | Cross-disciplinary, 200M+ papers | Citation counts, open access |
| OpenAlex | Cross-disciplinary, 250M+ works | Social sciences, humanities |
| PubMed | Biomedical and life sciences | Medicine, clinical research |
All APIs are free and do not require authentication for basic usage. Semantic Scholar may rate-limit heavy usage; the server handles this gracefully by returning results from other sources.
```bash
# Install dependencies
npm install

# Run in development (no build step)
npm run dev

# Build
npm run build

# Run built version
npm start
```

Requirements: Node.js 18+, `ANTHROPIC_API_KEY` environment variable.