FastMCP server for universal document format conversion. Convert PDFs, Word documents, HTML, Markdown, LaTeX, and more to any supported output format.
- Universal conversion - Convert between 20+ document formats
- PDF support - Uses pdf2docx for accurate PDF extraction, then pandoc for the final conversion
- OCR support - Extract text from scanned PDFs with layout preservation
- GROBID integration - Extract metadata and references from academic PDFs
- Batch processing - Convert entire directories with mixed formats
- Parallel processing - Subprocess isolation for true PDF parallelism
- Recursive mode - Process nested folder structures
- Format filtering - Convert only specific file types (e.g., just PDFs)
Claude Code
│
└── DocConvert MCP Server (this)
    │
    ├── pdf2docx (PDF → DOCX extraction)
    ├── PyMuPDF (OCR with layout preservation)
    ├── GROBID (metadata + references extraction)
    │
    └── pandoc (all other conversions)
Input formats: pdf, docx, odt, html, md, tex, latex, rst, epub, rtf, txt, org, mediawiki, textile, asciidoc
Output formats: odt, docx, html, markdown, latex, pdf, epub, rst, asciidoc, rtf, txt, org, mediawiki
# Install pandoc
sudo apt install pandoc
# Install pdf2docx
pip install pdf2docx

# Set up the server
cd /path/to/pdf2odt-mcp
uv venv
source .venv/bin/activate
uv pip install fastmcp pdf2docx

Add to .mcp.json:
{
"mcpServers": {
"docconvert": {
"command": "/path/to/pdf2odt-mcp/.venv/bin/fastmcp",
"args": ["run", "/path/to/pdf2odt-mcp/pdf2odt_mcp_server.py"]
}
}
}

Convert document(s) to a target format.
# Single file: PDF to Markdown
convert(
input="/path/to/document.pdf",
output="/path/to/document.md",
format="markdown"
)
# Single file: PDF to ODT
convert(
input="/path/to/paper.pdf",
output="/path/to/paper.odt",
format="odt"
)
# Directory: Convert all supported files to markdown
convert(
input="/path/to/docs/",
output="/path/to/output/",
format="markdown",
recursive=True
)
# Directory: Only convert PDFs to ODT
convert(
input="/path/to/mixed_docs/",
output="/path/to/odt_output/",
format="odt",
filter="pdf",
recursive=True
)

Parameters:
| Parameter | Type | Description |
|---|---|---|
| `input` | string | Input file or directory path |
| `output` | string | Output file or directory path |
| `format` | string | Target format (odt, docx, markdown, html, etc.) |
| `filter` | string | Optional - only convert files with this extension |
| `recursive` | bool | If True, process subdirectories |
| `parallel` | int | Number of parallel workers (default 1 = sequential, max 16) |
| `overwrite` | bool | If True (default), overwrite existing files. If False, skip existing. |
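For reference, a single call that exercises every parameter in the table (paths are placeholders):

convert(
    input="/path/to/docs/",      # file or directory
    output="/path/to/output/",   # file or directory
    format="odt",                # target format
    filter="docx",               # only convert .docx inputs
    recursive=True,              # include subdirectories
    parallel=4,                  # up to 16 workers
    overwrite=False              # skip outputs that already exist
)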
Parallel Processing:
Parallel processing is available for all file types. PDF files use subprocess isolation to bypass pdf2docx's internal locking and achieve true parallelism. Requires adequate CPU and RAM.
# Parallel PDF conversion (4 workers)
convert(
input="/path/to/pdfs/",
output="/path/to/output/",
format="markdown",
filter="pdf",
recursive=True,
parallel=4 # Each PDF runs in isolated subprocess
)
# Response shows parallel PDF info:
# {"total": 10, "converted": 10, "pdf_files": 10, "pdf_parallel": true, "pdf_workers": 4}
# Convert markdown files in parallel (4 workers)
convert(
input="/path/to/markdown_docs/",
output="/path/to/output/",
format="html",
filter="md",
recursive=True,
parallel=4
)
# Mixed formats: all parallel
convert(
input="/path/to/mixed_docs/", # Contains PDFs and markdown
output="/path/to/output/",
format="html",
recursive=True,
parallel=4
)
# Response shows breakdown:
# {"total": 50, "converted": 50, "pdf_files": 10, "pdf_parallel": true, "pdf_workers": 4, "parallel_workers": 4, "parallel_files": 40}Note: Parallel PDF processing requires sufficient CPU and RAM. On resource-constrained systems, use parallel=1 (default) for reliable sequential processing.
Skip Existing Files:
Use overwrite=False to skip files that already exist (useful for resuming interrupted batches):
# Resume an interrupted batch - only convert files not yet done
convert(
input="/path/to/docs/",
output="/path/to/output/",
format="markdown",
recursive=True,
overwrite=False # Skip existing output files
)
# Response includes skipped count:
# {"total": 100, "converted": 25, "skipped": 75, "failed": 0}List all supported input and output formats.
formats()
# Returns:
# {
# "input_formats": ["asciidoc", "docx", "epub", "htm", "html", ...],
# "output_formats": ["asciidoc", "docx", "epub", "html", "latex", ...],
# "note": "PDF input uses pdf2docx then pandoc..."
# }

List convertible files in a path, grouped by format.
list_convertible("/path/to/docs/", recursive=True)
# Returns:
# {
# "success": True,
# "count": 15,
# "by_format": {
# ".pdf": ["file1.pdf", "file2.pdf"],
# ".docx": ["report.docx"],
# ".md": ["notes.md", "readme.md"]
# }
# }

Convert journal guidelines and example papers for easy reference:
# Convert journal author guidelines
convert(
input="/path/to/author_guidelines.pdf",
output="/path/to/guidelines.md",
format="markdown"
)
# Convert example papers to study structure
convert(
input="/path/to/example_paper.pdf",
output="/path/to/example.md",
format="markdown"
)

Batch convert legacy documents:
# Convert all Word docs to ODT
convert(
input="/path/to/word_docs/",
output="/path/to/odt_docs/",
format="odt",
filter="docx",
recursive=True
)

Extract text from PDFs for analysis:
# Convert PDFs to plain text
convert(
input="/path/to/pdfs/",
output="/path/to/text/",
format="txt",
filter="pdf"
)

Convert documents to HTML:
# Markdown to HTML
convert(
input="/path/to/docs/",
output="/path/to/html/",
format="html",
filter="md",
recursive=True
)

Works alongside other MCP servers:
- LibreOffice MCP - Edit converted documents
- Zotero MCP - Add citations to converted content
# 1. Convert journal guidelines
convert(input="guidelines.pdf", output="guidelines.md", format="markdown")
# 2. Read guidelines
Read("guidelines.md")
# 3. Create document in LibreOffice
document(action="create", doc_type="writer")
# 4. Write content following guidelines
text(action="insert", content="...")
# 5. Add citations from Zotero
search_zotero("topic")
text(action="insert", content="{ | (Author, 2020) | | |zu:LIB:KEY}")
# 6. Save
save(action="save", file_path="paper.odt")PDFs are converted in two stages:
- pdf2docx extracts content to DOCX (preserves layout, tables, images)
- pandoc converts DOCX to target format
This produces better results than direct PDF parsing for most documents.
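The same two stages can be reproduced by hand; a minimal sketch using the pdf2docx Python API and the pandoc CLI (file names are placeholders, not part of the server code):

import subprocess
from pdf2docx import Converter

# Stage 1: pdf2docx extracts the PDF's layout, tables, and images into a DOCX
cv = Converter("paper.pdf")
cv.convert("paper.docx")  # all pages by default
cv.close()

# Stage 2: pandoc converts the intermediate DOCX to the target format
subprocess.run(["pandoc", "paper.docx", "-o", "paper.md"], check=True)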
All non-PDF conversions use pandoc directly, which handles:
- Document structure (headings, lists, tables)
- Basic formatting
- Cross-references
- Citations (in some formats)
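For non-PDF inputs the file goes straight to pandoc; a sketch of such a call (the flags the server actually passes may differ):

import subprocess

def pandoc_convert(src: str, dst: str, to_format: str) -> None:
    # --standalone makes pandoc emit a complete document rather than a fragment
    subprocess.run(
        ["pandoc", src, "--standalone", "-t", to_format, "-o", dst],
        check=True,
    )

pandoc_convert("notes.md", "notes.html", "html")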
PDF parallelism uses subprocess isolation: each PDF conversion runs in a completely separate Python process. This bypasses pdf2docx's internal locking that prevents thread-based parallelism.
- Each subprocess: ~100-200MB RAM + CPU usage
- 4 parallel workers: ~400-800MB RAM, 4 CPU cores
- 8 parallel workers: ~800MB-1.6GB RAM, 8 CPU cores
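A rough stand-in for this pattern using the standard library (not the server's actual implementation; paths are placeholders):

from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from pdf2docx import Converter

def convert_one(pdf_path: str) -> str:
    # Each call runs in its own worker process, so pdf2docx's
    # internal locking cannot serialize the batch.
    docx_path = str(Path(pdf_path).with_suffix(".docx"))
    cv = Converter(pdf_path)
    cv.convert(docx_path)
    cv.close()
    return docx_path

if __name__ == "__main__":
    pdfs = [str(p) for p in Path("/path/to/pdfs").glob("*.pdf")]
    with ProcessPoolExecutor(max_workers=4) as pool:  # ~400-800MB RAM, 4 CPU cores
        converted = list(pool.map(convert_one, pdfs))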
Standard OCR (ocr=True) - Preserves tables and layout (recommended):
# Best quality - preserves tables and structure
convert(
input="/path/to/scanned.pdf",
output="/path/to/output.md",
format="markdown",
ocr=True
)
# Batch OCR conversion
convert(
input="/path/to/scanned_docs/",
output="/path/to/output/",
format="markdown",
filter="pdf",
ocr=True,
recursive=True
)

Fast OCR (ocr=True, ocr_fast=True) - Simpler and faster, but loses layout:
convert(
input="/path/to/scanned.pdf",
output="/path/to/output.txt",
format="txt",
ocr=True,
ocr_fast=True
)

Standalone OCR - Create searchable PDF:
ocr_document("/path/to/scanned.pdf", "/path/to/searchable.pdf")

OCR Requirements:
# Standard OCR (recommended)
pip install pymupdf4llm
# Fast OCR (optional)
pip install ocrmypdf
sudo apt install tesseract-ocr

Extract structured metadata and references from scholarly documents using GROBID:
# Extract metadata (title, authors, abstract, DOI, etc.)
extract_metadata("/path/to/paper.pdf")
# Returns: {title, authors, abstract, keywords, date, doi, affiliations}
# Extract bibliography/references
extract_references("/path/to/paper.pdf")
# Returns: [{title, authors, year, journal, volume, pages, doi}, ...]
# Extract full document structure
extract_fulltext("/path/to/paper.pdf")
# Returns: metadata + all references
# Save as TEI XML
extract_fulltext("/path/to/paper.pdf", "/path/to/paper.tei.xml")

GROBID Requirements:
# Start GROBID server (Docker)
docker run -d --name grobid -p 8070:8070 lfoppiano/grobid:0.8.0
# Install client
pip install grobid-client-python
# Optional: Set custom server URL
export GROBID_SERVER=http://your-server:8070

- Complex PDF layouts may not convert perfectly
- Some formatting may be lost in conversion
- Parallel PDF processing requires adequate system resources
- OCR quality depends on scan quality and Tesseract's capabilities
- GROBID requires a running server (local or remote)
pdf2odt-mcp/
├── pdf2odt_mcp_server.py # MCP server
├── .venv/ # Virtual environment
├── .gitignore
└── README.md
MIT