Nom the web. Feed your agents.
NomFeed converts any URL, YouTube video, or file into clean, structured markdown, and keeps it in a searchable local library. One CLI command to save. One command to search. Optional LLM extraction turns content into structured intelligence.
```shell
nomfeed add https://example.com/article       # URL → markdown
nomfeed add https://youtube.com/watch?v=xyz   # YouTube transcript
nomfeed add ./report.pdf                      # File → markdown
nomfeed search "transformers"                 # Full-text search
nomfeed read AMxTn2tS5_ --extract             # Read LLM extraction
```

No database. No Docker. No daemon. Just `.md` files in `~/.nomfeed/`.
NomFeed sits in the same space as markdown.new and keep.md: tools that convert web content to markdown for AI consumption. We love what they've built. NomFeed takes a different approach:
|  | markdown.new | keep.md | NomFeed |
|---|---|---|---|
| What it does | URL → markdown (single conversion) | Bookmark + markdown API | CLI library + extraction engine |
| Storage | None (stateless) | Cloud | Local flat files (`~/.nomfeed/`) |
| YouTube | ❌ | ❌ | ✅ Full transcript via yt-dlp |
| File conversion | ✅ 20+ formats | ❌ | ✅ PDF, DOCX, XLSX, code, images |
| LLM extraction | ❌ | ❌ | ✅ 6 patterns (wisdom, claims, chapters...) |
| Search | ❌ | ✅ API-based | ✅ Local full-text search |
| MCP server | ❌ | ❌ | ✅ 7 tools |
| Chrome extension | ❌ | ❌ | ✅ Popup + context menu |
| Offline | ❌ | ❌ | ✅ Everything local after save |
| Data ownership | N/A | Their servers | Your filesystem |
| Cost | Free | Free tier + paid | Free forever (BYO LLM key) |
| URL strategies | Cloudflare only | Unknown | 3-strategy cascade |
markdown.new is perfect when you need a quick one-off conversion. keep.md is great if you want a hosted API. NomFeed is for developers who want a local library they own, with search, extraction, and agent integration, all on their own machine.
```shell
git clone https://github.com/ameno-/nomfeed
cd nomfeed
bun install
bun link

# Optional: file conversion (PDF, DOCX, XLSX, images)
pip install 'markitdown[all]'

# Optional: YouTube transcript extraction
brew install yt-dlp

# Optional: LLM extraction (--extract flag)
export OPENROUTER_API_KEY=your-key   # get one at openrouter.ai/keys
```

```shell
# Save content
nomfeed add <url>             # URL → markdown
nomfeed add <url> --extract   # URL → markdown + LLM extraction
nomfeed add <file>            # File → markdown
nomfeed note "some text"      # Quick note

# Retrieve
nomfeed list                  # List everything (✦ = has extraction)
nomfeed read <id>             # Print markdown content
nomfeed read <id> --extract   # Print extraction only
nomfeed read <id> --full      # Print both
nomfeed search "query"        # Full-text search

# Extract
nomfeed extract <id>          # Run extraction on existing item
nomfeed extract <id> --patterns summarize,analyze_claims
nomfeed patterns              # List available patterns

# Manage
nomfeed delete <id>
nomfeed status
```

Every command supports `--json` for machine-readable output.
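Because every command accepts `--json`, NomFeed composes with ordinary shell tooling. A small sketch, assuming a hypothetical output shape for `nomfeed list --json` (the real field names may differ):

```shell
# Hypothetical sample of what `nomfeed list --json` might emit (field names assumed)
cat > /tmp/nomfeed-list.json <<'EOF'
[
  {"id": "abc123", "title": "Attention Is All You Need", "type": "url"},
  {"id": "def456", "title": "Quarterly report", "type": "file"}
]
EOF

# Pull out just the ids, e.g. to feed back into `nomfeed read`
python3 - <<'EOF'
import json
with open("/tmp/nomfeed-list.json") as f:
    for item in json.load(f):
        print(item["id"])
EOF
```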
NomFeed uses a 3-strategy cascade; the first success wins:

- Cloudflare `text/markdown`: native edge conversion, fastest
- Jina Reader: renders JavaScript SPAs, returns clean markdown
- Readability: universal fallback, extracts article content from raw HTML
This means NomFeed works on sites that break simpler tools: JS-heavy SPAs, paywalled articles (where possible), and pages that return empty HTML without rendering.
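The cascade is simple to picture: each strategy either yields markdown or fails, and the first success short-circuits the rest. A minimal control-flow sketch with stubbed strategies (not NomFeed's actual implementation):

```shell
# Each strategy prints markdown on success and returns non-zero on failure.
# These are stubs; the real strategies call Cloudflare, Jina Reader, and Readability.
strategy_cloudflare()  { return 1; }                         # simulate an edge-conversion miss
strategy_jina()        { echo "# Rendered by Jina (stub)"; }
strategy_readability() { echo "# Extracted by Readability (stub)"; }

convert() {
  url="$1"
  for strategy in strategy_cloudflare strategy_jina strategy_readability; do
    # Capture the strategy's output; the assignment's exit status is the strategy's
    if out=$("$strategy" "$url"); then
      printf '%s\n' "$out"
      return 0
    fi
  done
  echo "all strategies failed for $url" >&2
  return 1
}

convert "https://example.com/article"   # falls through to the Jina stub
```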
When you add with --extract, NomFeed runs Fabric-inspired patterns against the content via LLM. Each pattern is a single focused API call: no agents, no orchestration, just reliable structured output.
| Pattern | What It Extracts |
|---|---|
| `extract_wisdom` | Ideas, insights, quotes, habits, facts, references, recommendations |
| `video_chapters` | Timestamped chapter outline |
| `analyze_claims` | Truth claims with evidence ratings (A–F) and logical fallacies |
| `extract_references` | Books, papers, tools, people mentioned |
| `summarize` | One-sentence summary + main points + takeaways |
| `rate_content` | Quality scores: surprise, novelty, insight, value, wisdom (0–10) |
```shell
# Default patterns: extract_wisdom + video_chapters
nomfeed add https://youtube.com/watch?v=xyz --extract

# Choose specific patterns
nomfeed extract <id> --patterns extract_wisdom,analyze_claims,rate_content

# Add your own
# Create ~/.nomfeed/patterns/<name>/system.md with your prompt
nomfeed patterns   # will show custom patterns
```

Provider: OpenRouter, one API key with access to all models. Default model chain: Claude Sonnet 4.5 → Sonnet 4 → Haiku 4.5 (automatic fallback on rate limits/errors).
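Creating a custom pattern is just a directory and a prompt file. For example (the pattern name `extract_action_items` and its prompt are illustrative, not shipped with NomFeed):

```shell
# Honor NOMFEED_DIR if set, otherwise use the default library location
PATTERN_DIR="${NOMFEED_DIR:-$HOME/.nomfeed}/patterns/extract_action_items"
mkdir -p "$PATTERN_DIR"

# system.md holds the system prompt the pattern runs against the content
cat > "$PATTERN_DIR/system.md" <<'EOF'
You extract actionable tasks from the provided content.
Return a markdown list of action items, each with an owner if one is mentioned.
EOF
```

After this, `nomfeed patterns` should list `extract_action_items` alongside the built-ins, and `nomfeed extract <id> --patterns extract_action_items` will run it.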
```shell
nomfeed mcp   # starts stdio MCP server
```

Tools: `nomfeed_add`, `nomfeed_list`, `nomfeed_read` (content/extract/full), `nomfeed_search`, `nomfeed_extract`, `nomfeed_delete`, `nomfeed_status`
```json
{
  "mcpServers": {
    "nomfeed": {
      "command": "bun",
      "args": ["run", "/path/to/nomfeed/src/cli.ts", "mcp"]
    }
  }
}
```

- Run `nomfeed serve` (starts a local HTTP server on port 24242)
- Open `chrome://extensions` → Developer Mode → Load unpacked → select `extension/`
- Click the NomFeed icon or right-click any page → "Save to NomFeed"
The extension supports tagging and optional extraction on save.
```shell
nomfeed serve               # default port 24242
nomfeed serve --port 8080
```

| Method | Path | Description |
|---|---|---|
| `POST` | `/add` | Save URL, file, or note. `{ url, title?, tags?, extract? }` |
| `GET` | `/items` | List items. `?query=&tag=&type=&limit=` |
| `GET` | `/items/:id` | Get item metadata + content |
| `DELETE` | `/items/:id` | Delete item |
| `GET` | `/search?q=` | Full-text search |
| `GET` | `/health` | Health check |
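As a sketch of driving the API from the shell (the request body fields come from the table above; the block only talks to the server if one is actually running):

```shell
BASE=http://localhost:24242

if curl -fsS "$BASE/health" >/dev/null 2>&1; then
  # Save a URL, tagged, without LLM extraction
  curl -s -X POST "$BASE/add" \
    -H 'Content-Type: application/json' \
    -d '{"url": "https://example.com/article", "tags": ["reading"], "extract": false}'
  # Then find it again via full-text search
  curl -s "$BASE/search?q=article"
else
  echo "NomFeed server not running; start it with: nomfeed serve"
fi
```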
Everything lives in `~/.nomfeed/` (override with `NOMFEED_DIR`):

```
~/.nomfeed/
├── items.json                # metadata index (array of items)
└── content/
    ├── abc123.md             # converted markdown
    ├── abc123.extraction.md  # LLM extraction (if extracted)
    └── def456.md
```
Each `.md` file includes YAML frontmatter with source, title, and timestamp. Files are self-contained: you can copy them anywhere and they still make sense.
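A saved file might start like this (the exact frontmatter key names are an assumption; the document only promises source, title, and timestamp fields):

```markdown
---
source: https://example.com/article
title: Example Article
saved: 2025-01-15T10:30:00Z
---

# Example Article

Converted markdown content follows...
```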
| Variable | Description | Required |
|---|---|---|
| `OPENROUTER_API_KEY` | LLM access for `--extract` | Only for extraction |
| `NOMFEED_MODEL` | Override default model | No |
| `NOMFEED_DIR` | Override data directory | No (default: `~/.nomfeed`) |
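For example, in a shell profile (the `NOMFEED_DIR` value is just an illustration; `NOMFEED_MODEL` takes an OpenRouter model id, left as a placeholder here):

```shell
# Required only when using --extract
export OPENROUTER_API_KEY=your-key

# Optional: keep the library somewhere else, e.g. inside a synced folder
export NOMFEED_DIR="$HOME/Sync/nomfeed"

# Optional: override the default model chain with any OpenRouter model id
# export NOMFEED_MODEL=<model-id>
```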
NomFeed ships with a Pi / Agent Skills-compatible skill in `skills/nomfeed/`. Install it so your coding agent can save, search, and extract content on your behalf.
```shell
# Copy to your global skills directory
cp -r skills/nomfeed ~/.pi/agent/skills/nomfeed

# Or symlink it (stays in sync with repo updates)
ln -sf "$(pwd)/skills/nomfeed" ~/.pi/agent/skills/nomfeed
```

The skill is also compatible with other harnesses that support the Agent Skills standard:

```shell
# Claude Code
cp -r skills/nomfeed ~/.claude/skills/nomfeed

# Codex
cp -r skills/nomfeed ~/.codex/skills/nomfeed
```

Once installed, your agent will automatically trigger NomFeed on phrases like "save this URL", "bookmark this", "search my saved content", or "extract insights from this video".
See DESIGN.md for the full system architecture, conversion strategies, extraction pipeline, and design decisions.