reading-routine

Turn your 12,000-item reading backlog into a daily triage digest — for Claude Code Routines, local cron, or manual review.



The problem

You have too many saved articles. Every morning you open the reading app, get overwhelmed, and close it. Six months in, you'll never touch 90% of that backlog.

The problem isn't storage — you already have Raindrop, Readwise, Burn, or a Pocket CSV rotting in ~/Downloads. The problem is triage: which 5 of the 12,000 are worth your next 30 minutes?

The fix

reading-routine is a backend-agnostic CLI that:

  1. Eats your backlog (JSON in — from any source)
  2. Scores each item (freshness × depth × tag signal)
  3. Spits out a ranked markdown digest (or the Claude Code Routine prompt that wraps all of it)

It doesn't know or care where your reading list lives. It does one thing: rank.

Install

npm install -g reading-routine
# or run without installing:
npx reading-routine help

Quick start

# 1. Try with the sample backlog
curl -s https://raw.githubusercontent.com/Fisher521/reading-routine/main/examples/backlog.sample.json \
  | reading-routine digest --top 5

# 2. Print the Claude Code Routine prompt (paste into claude.ai/code/routines)
reading-routine routine-prompt

# 3. Or run locally on your own backlog JSON
reading-routine digest --input ~/my-backlog.json --top 10

Three ways to run it

1. As a Claude Code Routine (cloud — hands off)

Claude Code Routines (announced April 15, 2026) let Claude run your workflows on a schedule, even with your laptop closed. reading-routine ships a ready prompt:

reading-routine routine-prompt | pbcopy

Then open claude.ai/code/routines, paste, set schedule (e.g. daily 9 AM), pick the MCP connectors (Burn / Raindrop / Readwise) and the delivery channel (Slack / Email).

Morning arrives → digest in your inbox. You didn't open your laptop.

2. As a local cron (laptop open — private)

# crontab -e
0 9 * * * /usr/local/bin/reading-routine digest \
  --input ~/Library/reading-routine/backlog.json --top 10 \
  | tee ~/Library/reading-routine/today.md \
  | /usr/local/bin/terminal-notifier -title "Today's reading"

You handle the data pull (any script that outputs JSON); reading-routine handles the ranking.

3. As a manual one-off

# Paste your backlog URL list into a file → parse to JSON → digest.
reading-routine digest --input ~/Downloads/backlog.json

Backlog format

JSON array of objects:

[
  {
    "url": "https://example.com/article",
    "title": "Why X matters",
    "saved_at": "2026-04-10T14:00:00Z",
    "tags": ["rust", "performance"],
    "word_count": 2800,
    "snippet": "optional preview text"
  }
]

Required: url, title. Everything else improves ranking but is optional.
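
That contract is easy to pre-flight before piping a file in. A sketch of such a check (a hypothetical helper, not shipped with the CLI): enforce the two required fields and normalize the optional ones to safe defaults.

```javascript
// Hypothetical pre-flight check for a backlog file, matching the format above.
function validateBacklog(items) {
  if (!Array.isArray(items)) throw new TypeError("backlog must be a JSON array");
  return items.map((item, i) => {
    // url and title are the only required fields.
    if (!item.url || !item.title) {
      throw new Error(`item ${i}: "url" and "title" are required`);
    }
    // Everything else improves ranking but is optional; normalize defaults.
    return {
      url: item.url,
      title: item.title,
      saved_at: item.saved_at ?? null,
      tags: Array.isArray(item.tags) ? item.tags : [],
      word_count: item.word_count ?? null,
      snippet: item.snippet ?? "",
    };
  });
}
```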

Adapters (bring your own)

Because reading-routine just wants JSON, any source works. Examples:

From Burn (MCP or CLI):

burn-mcp export-queue --status unread --limit 200 | reading-routine digest

From Raindrop.io:

curl -sH "Authorization: Bearer $RAINDROP_TOKEN" \
  "https://api.raindrop.io/rest/v1/raindrops/0?perpage=100" \
  | jq '[.items[] | {url: .link, title, saved_at: .created, tags, word_count: .word_count}]' \
  | reading-routine digest

From Readwise Reader:

curl -sH "Authorization: Token $READWISE_TOKEN" \
  "https://readwise.io/api/v3/list/?location=new" \
  | jq '[.results[] | {url: .source_url, title, saved_at: .created_at, tags: (.tags // [] | keys), word_count}]' \
  | reading-routine digest

From a Pocket CSV (for folks who exported before the Nov 2025 deadline):

# One-liner CSV → JSON conversion. Tags are pipe-separated in Pocket, hence the split.
# (Naive comma split: titles that contain commas will be mangled.)
node -e "const csv=require('fs').readFileSync('pocket.csv','utf8');const lines=csv.split('\n').slice(1);console.log(JSON.stringify(lines.filter(Boolean).map(l=>{const[title,url,ts,cursor,tags]=l.split(',');return{title:title.replace(/\"/g,''),url,saved_at:new Date(Number(ts)*1000).toISOString(),tags:(tags||'').split('|').filter(Boolean)}})))" \
  | reading-routine digest

How the score works

freshness × 0.6 + depth × 0.3 + tag_bonus
  • Freshness: peaks around day 3 (out of the hot-take zone, still relevant), decays over ~60 days
  • Depth: 2k–6k-word pieces score highest (long-form payoff zone)
  • Tag bonus: +0.1 if curated with tags (signal of intent, not drive-by save)

Simple on purpose. The score is a first pass — if you plug it into the Routine prompt, Claude re-ranks against your current priorities in Step 3.
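
The weights are spelled out above, but the curve shapes aren't. A sketch of what such a scorer could look like, with assumed linear ramps for the day-3 freshness peak, the ~60-day decay, and the 2k–6k-word sweet spot (the actual implementation may shape these differently):

```javascript
// Illustrative scorer for the formula above. The 0.6 / 0.3 / +0.1 weights,
// the day-3 peak, ~60-day decay, and 2k-6k word zone come from the README;
// the linear ramps between those anchors are assumptions.

function freshness(savedAt, now = Date.now()) {
  const days = (now - new Date(savedAt).getTime()) / 86400000;
  if (days < 0) return 0;
  if (days <= 3) return days / 3;              // ramp up to the day-3 peak
  return Math.max(0, 1 - (days - 3) / 57);     // decay to zero by ~day 60
}

function depth(wordCount) {
  if (!wordCount) return 0;
  if (wordCount >= 2000 && wordCount <= 6000) return 1; // long-form payoff zone
  if (wordCount < 2000) return wordCount / 2000;        // too short: ramp up
  return Math.max(0, 1 - (wordCount - 6000) / 6000);    // too long: taper off
}

function score(item, now = Date.now()) {
  const tagBonus = item.tags && item.tags.length ? 0.1 : 0; // signal of intent
  return freshness(item.saved_at, now) * 0.6 + depth(item.word_count) * 0.3 + tagBonus;
}

// Rank a backlog: highest score first.
const rank = (items, now = Date.now()) =>
  [...items].sort((a, b) => score(b, now) - score(a, now));
```

A tagged 2,800-word piece saved three days ago lands at the top of the scale (0.6 + 0.3 + 0.1); a months-old, 300-word, untagged save sinks to near zero.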

Part of the Burn ecosystem

  • burn-mcp-server: MCP server (26 tools) for Burn reading triage
  • burn451: terminal CLI for your reading queue
  • morning-brief: daily briefing CLI
  • reading-digest: weekly digest from your bookmarks
  • reading-routine (this repo): backend-agnostic daily triage CLI

Roadmap

  • v0.1 — Core CLI, JSON in, markdown out, Claude Routine prompt generator
  • v0.2 — Bundled source adapters (reading-routine pull burn, reading-routine pull raindrop)
  • v0.3 — Config file (~/.reading-routine.yaml) with priorities so the score learns your focus
  • v0.4 — Built-in delivery: Telegram / email / Slack webhooks

Why this exists

Routines just landed (HN #5, 700 pts on April 15, 2026). The reading-triage use case is obvious — everyone has a backlog, nobody wants another web UI. What's missing is a boring CLI that does ranking well, so the Routine prompt stays short and composable.

Building publicly at @hawking520. Feedback, PRs, adapters for other sources — all welcome.

Contributing

PRs welcome. Most useful contributions right now: adapter examples for sources I don't have (Instapaper, Wallabag, Matter, Obsidian Web Clipper).

About

Built by @hawking520 — exploring AI-era content & attention workflows in public.

Pairs naturally with Burn — the 24h-countdown reading queue that forces you to absorb, not hoard.

License

MIT — see LICENSE
