A privacy-first RSS reader with AI-powered summaries, inspired by Hacker News design. Built with React, TypeScript, and Ollama integration for local, private AI processing.
- Privacy & Security
- Features
- Getting Started
- Docker Deployment
- Usage
- Technology Stack
- Privacy Features
- Ollama Setup & Model Guide
- Production Deployment
- Feed Loading & Proxy Configuration
- Project Structure
- Contributing
This RSS reader is designed with privacy as the top priority:
- No Data Collection: No analytics, tracking, or data collection
- Local Storage Only: All your feeds, prompts, and settings are stored locally in your browser
- Anonymous AI Requests: AI summaries use anonymous requests with no user identification
- Self-Hosted: Runs entirely on your own infrastructure, so no data leaves your control
- Open Source: Full source code available for transparency and audit
- No Third-Party Tracking: No Google Analytics, Facebook pixels, or other tracking scripts
- 📰 RSS Feed Parsing - Read RSS feeds with clean, Hacker News-style interface
- 🤖 AI Summaries - Generate custom AI summaries using local Ollama models (100% private, no API keys!)
- ⚙️ Customizable Prompts - Create and manage custom AI summary prompts
- 🎨 Hacker News UI - Clean, minimalist design inspired by Hacker News
- 📱 Responsive Design - Works on desktop, tablet, and mobile devices
- ⚡ Real-time Updates - Refresh feeds and generate summaries on demand
- Node.js 18+
- npm or yarn
- Ollama installed and running locally (Download Ollama)
- Clone the repository:

```bash
git clone <your-repo-url>
cd rss-ai-reader
```

- Install dependencies:

```bash
npm install
```

- Set up environment variables:

```bash
cp env.example .env
```

- Install and start Ollama:

```bash
# Install Ollama from https://ollama.ai
# Then pull a model (recommended: phi3:mini for small PCs)
ollama pull phi3:mini
```

- Configure Ollama in `.env` (optional, the defaults are fine):

```bash
VITE_OLLAMA_API_URL="http://localhost:11434"
VITE_OLLAMA_MODEL="phi3:mini"
```

- Start the development server:

```bash
npm run dev
```

- Open your browser and navigate to http://localhost:3000
The app starts by loading the Hacker News RSS feed. You can:
- Click on any article to view details
- Use the refresh button to reload the feed
- View article summaries and full content
- Select an article from the feed
- Choose a summary prompt from the available options
- Click "Generate AI Summary" to create a custom summary
- Add your own custom prompts using the "Add Custom" button
Create custom AI summary prompts for different use cases:
- Technical summaries for developers
- Business summaries for executives
- Casual summaries for general readers
- Or any other specific format you prefer
- Frontend: React 18 + TypeScript
- Build Tool: Vite
- Styling: Tailwind CSS
- RSS Parsing: fast-xml-parser
- AI Integration: Ollama (local, privacy-focused)
- Icons: Lucide React
This app implements comprehensive privacy features to protect your data and browsing habits.
- Location: `src/utils/htmlSanitizer.ts`
- Automatically detects and removes tracking pixels from RSS feed content
- Removes 1x1 images, very small images (≤3x3), and images from common tracking domains
- Domains blocked: tracking, analytics, doubleclick, googlesyndication, facebook.com/tr, beacon
- Location: `src/utils/privacy.ts` → `stripTrackingParams()`
- Automatically removes tracking parameters from all URLs in feeds and links
- Parameters stripped: `utm_source`, `utm_medium`, `utm_campaign`, `utm_term`, `utm_content`, `fbclid`, `gclid`, `ref`, `referrer`, `source`, `campaign_id`, `affiliate_id`, `_ga`, `_gid`, `mc_cid`, `mc_eid`, and many more
- Applied to: All RSS feed item links automatically
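The parameter stripping described above can be sketched as follows. This is a hypothetical version of `stripTrackingParams()` for illustration only; the real `src/utils/privacy.ts` may differ in its parameter list and edge-case handling:

```typescript
// Illustrative sketch of a tracking-parameter stripper (not the actual utility).
const TRACKING_PARAMS = new Set([
  "utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content",
  "fbclid", "gclid", "ref", "referrer", "source", "campaign_id",
  "affiliate_id", "_ga", "_gid", "mc_cid", "mc_eid",
]);

function stripTrackingParams(rawUrl: string): string {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    return rawUrl; // leave relative or malformed URLs untouched
  }
  // Copy keys first: deleting while iterating searchParams skips entries.
  for (const key of [...url.searchParams.keys()]) {
    if (TRACKING_PARAMS.has(key.toLowerCase())) {
      url.searchParams.delete(key);
    }
  }
  return url.toString();
}
```

Because the function falls back to returning the input unchanged, it is safe to apply to every link in a feed without a separate validity check.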
- All external links include:
  - `rel="noopener noreferrer"` - Prevents `window.opener` security issues
  - `referrerPolicy="no-referrer"` - Prevents referrer leakage
  - `target="_blank"` - Opens in a new tab
- Location:
  - `index.html` - Meta tag: `<meta name="referrer" content="no-referrer" />`
  - `nginx.conf` - HTTP header: `add_header Referrer-Policy "no-referrer" always;`
- Both the client-side meta tag and the server-side HTTP header ensure no referrer is sent
- Status: ✅ Fully implemented and automatically applied
- Location: `src/utils/privacy.ts` → `convertYouTubeToNoCookie()`
- Automatically converts all YouTube URLs (`youtube.com`, `youtu.be`, embed URLs) to `youtube-nocookie.com`
- Handles all YouTube URL formats: watch URLs, short links, embed URLs
- Privacy Benefit: The youtube-nocookie.com domain doesn't set cookies unless the user interacts with the player, preventing tracking
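A minimal sketch of this conversion, assuming the three URL shapes listed above (watch URLs, `youtu.be` short links, and embed URLs). The function name mirrors the utility described here, but this is illustrative code, not the actual implementation:

```typescript
// Illustrative sketch: rewrite YouTube URLs to the youtube-nocookie.com embed form.
function convertYouTubeToNoCookie(rawUrl: string): string {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    return rawUrl;
  }
  const host = url.hostname.replace(/^www\./, "");
  let videoId: string | null = null;
  if (host === "youtube.com" || host === "m.youtube.com") {
    // Both watch URLs (?v=ID) and embed URLs (/embed/ID) carry the video id.
    videoId = url.pathname.startsWith("/embed/")
      ? url.pathname.slice("/embed/".length)
      : url.searchParams.get("v");
  } else if (host === "youtu.be") {
    videoId = url.pathname.slice(1) || null; // short links: youtu.be/ID
  }
  return videoId
    ? `https://www.youtube-nocookie.com/embed/${videoId}`
    : rawUrl; // non-YouTube links pass through unchanged
}
```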
- Location: `src/utils/htmlSanitizer.ts`
- Removes `<script>` tags completely
- Removes dangerous elements: `object`, `embed`, `iframe`, `form`, `input`, `button`
- Removes event handlers: `onclick`, `onload`, `onerror`, `onmouseover`
- Security: Prevents XSS attacks and third-party tracking scripts
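The rules above can be sketched with a small regex-based filter. This is illustrative only — the real `src/utils/htmlSanitizer.ts` may use a proper HTML parser, which is the more robust approach for untrusted feed content:

```typescript
// Illustrative regex-based sanitizer sketch (not the actual htmlSanitizer.ts).
const DANGEROUS_TAGS = ["script", "object", "embed", "iframe", "form", "input", "button"];

function sanitizeHtml(html: string): string {
  let out = html;
  for (const tag of DANGEROUS_TAGS) {
    // Remove paired tags together with their content...
    out = out.replace(new RegExp(`<${tag}\\b[^>]*>[\\s\\S]*?</${tag}>`, "gi"), "");
    // ...then any stray self-closing or unclosed occurrences.
    out = out.replace(new RegExp(`<${tag}\\b[^>]*/?>`, "gi"), "");
  }
  // Strip inline event handlers such as onclick= / onload= / onerror=.
  out = out.replace(/\son\w+\s*=\s*("[^"]*"|'[^']*'|[^\s>]+)/gi, "");
  return out;
}
```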
- ✅ Removes pixel trackers
- ✅ Strips tracking parameters from URLs
- ⚠️ Retrieves original links when feeds are sourced from FeedBurner (needs server-side support)
- ✅ Opens external links with `rel="noopener noreferrer"` and `referrerPolicy="no-referrer"`
- ✅ Implements `Referrer-Policy: no-referrer` (both meta tag and HTTP header)
- ⚠️ Provides a media proxy (not implemented - requires server-side support)
- ✅ Plays YouTube videos via youtube-nocookie.com (automatically applied to all YouTube links)
- ✅ Supports alternative YouTube video players such as Invidious (utility available, configurable)
- ✅ Blocks external JavaScript to prevent tracking and enhance security
- FeedBurner Unwrapping: Add server-side redirect following to `server.js`
- Media Proxy: Add a media proxy endpoint to `server.js` or the nginx config
- Invidious Support: Add a user preference to choose the YouTube player (nocookie vs Invidious)
- Install Ollama: https://ollama.ai
- Pull your model:

```bash
ollama pull phi3:mini
```

- Start Ollama (usually runs automatically):

```bash
ollama serve
```

- Configure in `.env`:

```bash
VITE_OLLAMA_API_URL="http://localhost:11434"
VITE_OLLAMA_MODEL="phi3:mini"
```
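Under the hood, the integration boils down to a POST to Ollama's `/api/generate` endpoint. The sketch below shows how such a request might be built and sent; the function names are hypothetical, not the actual `aiService.ts` API:

```typescript
// Hypothetical sketch of a summary request to Ollama's /api/generate endpoint.
function buildGenerateRequest(model: string, promptTemplate: string, articleText: string) {
  return {
    model,
    prompt: `${promptTemplate}\n\n${articleText}`,
    stream: false, // ask Ollama for a single JSON object instead of a token stream
  };
}

async function summarize(apiUrl: string, model: string, promptTemplate: string, articleText: string): Promise<string> {
  const res = await fetch(`${apiUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGenerateRequest(model, promptTemplate, articleText)),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.response; // non-streaming responses carry the generated text here
}
```

With `stream: false`, Ollama returns one JSON object whose `response` field contains the full generated text, which keeps client code simple at the cost of no incremental display.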
1. Phi-3 Mini ⭐ BEST FOR LOW-END PCs
```bash
ollama pull phi3:mini
```

- Size: ~2.3GB
- RAM needed: 4GB+ system RAM
- Speed: ⚡⚡⚡⚡ Very fast
- Quality: ⭐⭐⭐ Good for summaries
2. TinyLlama ⭐ SMALLEST OPTION
```bash
ollama pull tinyllama
```

- Size: ~637MB!
- RAM needed: 2GB+ system RAM
- Speed: ⚡⚡⚡⚡⚡ Extremely fast
- Quality: ⭐⭐ Basic but usable
3. Gemma 2:2B ⭐ BALANCED SMALL MODEL
```bash
ollama pull gemma2:2b
```

- Size: ~1.4GB
- RAM needed: 3GB+ system RAM
- Speed: ⚡⚡⚡⚡ Very fast
- Quality: ⭐⭐⭐ Good quality
1. Mistral ⭐ BEST OVERALL
```bash
ollama pull mistral
```

- Excellent at following instructions (perfect for "20 words or less")
- Fast inference speed
- Good balance of quality and speed
- Memory efficient (~4GB)
2. Llama 2 ⭐ MOST RELIABLE
```bash
ollama pull llama2
ollama pull llama2:13b   # For better quality
```

- Very reliable and consistent
- Good instruction following
- Multiple sizes available (7b, 13b, 70b)
3. Llama 3 ⭐ NEWEST & FAST
```bash
ollama pull llama3
```

- Latest and most capable
- Very fast inference
- Excellent instruction following
- Better at handling long contexts
| Model | Size | RAM Needed | Speed | Quality | Best For |
|---|---|---|---|---|---|
| tinyllama | 637MB | 2GB+ | ⚡⚡⚡⚡⚡ | ⭐⭐ | Very old PCs |
| phi3:mini | 2.3GB | 4GB+ | ⚡⚡⚡⚡ | ⭐⭐⭐ | Recommended! |
| gemma2:2b | 1.4GB | 3GB+ | ⚡⚡⚡⚡ | ⭐⭐⭐ | Balanced option |
| mistral | 4GB | 4GB+ | ⚡⚡⚡ | ⭐⭐⭐⭐ | Best overall |
| llama2 | 4GB | 4GB+ | ⚡⚡ | ⭐⭐⭐⭐ | Most reliable |
| llama3 | 5GB | 5GB+ | ⚡⚡⚡ | ⭐⭐⭐⭐⭐ | Latest & best |
1. Ollama Not Running

```bash
# Check if Ollama is running
ollama serve

# Or test the API
curl http://localhost:11434/api/tags
```

2. Model Not Found

```bash
# List available models
ollama list

# Pull the model you need
ollama pull phi3:mini
```

3. Empty Response
- Model is still loading (first use)
- Restart Ollama: `pkill ollama && ollama serve`
- Verify the model works:

```bash
ollama run phi3:mini "Summarize this in 10 words: AI is changing everything"
```
4. Content Too Short
- The extracted article content is less than 50 characters
- Try articles with more content
- Some RSS feeds only provide titles, not full content
- For Speed & Quality Balance: `mistral`
- For Best Quality: `llama2:13b` or `llama3:8b`
- For Limited Memory: `phi3:mini`
- For Technical Articles: `codellama`
- Install Dependencies

```bash
npm install
```

- Build Frontend

```bash
npm run build
```

- Start Proxy Server

```bash
npm run proxy
```

- Start Frontend (Production)

```bash
npm run start
```

```bash
# Ollama Configuration
VITE_OLLAMA_API_URL="http://localhost:11434"
VITE_OLLAMA_MODEL="phi3:mini"

# Server Configuration
PORT=3001
NODE_ENV=production

# Performance Tuning
MAX_RESPONSE_SIZE=10485760   # 10MB
REQUEST_TIMEOUT=10000        # 10 seconds

# Rate Limiting
RATE_LIMIT_WINDOW=60000          # 1 minute
RATE_LIMIT_MAX_REQUESTS=100      # 100 requests per window

# CORS
CORS_ORIGIN=https://yourdomain.com   # Use '*' for development only

# Trusted Proxy IPs
TRUSTED_PROXY=127.0.0.1,::1

# Health Check
HEALTH_CHECK_FEED=https://rss.cnn.com/rss/edition.rss

# Logging
LOG_LEVEL=INFO   # DEBUG, INFO, WARN, ERROR
```

- Docker and Docker Compose installed
- Ollama installed and running on your host PC (not in Docker)
- At least one Ollama model pulled (e.g., `ollama pull gemma3:1b`)
- Clone the repository:

```bash
git clone <your-repo-url>
cd rss-ai-reader
```

- Set up environment variables:

```bash
cp env.example .env
# Edit .env if needed (defaults work for most setups)
```

- Ensure Ollama is running on your host:

```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# If not running, start it
ollama serve
```

- Pull your preferred model (if not already done):

```bash
ollama pull gemma3:1b   # or phi3:mini, mistral, etc.
```

- Build and start the containers:

```bash
docker-compose up -d --build
```

- Access the application:
  - Frontend: http://localhost:3000
  - Proxy health: http://localhost:3001/health
The `docker-compose.yml` includes two services:
- Frontend: React app served by nginx (port 3000)
- Proxy: Express.js CORS proxy for RSS feeds (port 3001)
Key variables for Docker deployment:
```bash
# AI Configuration
VITE_AI_PROVIDER=ollama
VITE_OLLAMA_API_URL=http://localhost:11434   # Ollama on host PC
VITE_OLLAMA_MODEL=gemma3:1b                  # Your preferred model

# Proxy Configuration
VITE_PROXY_URL=http://localhost:3001/api/proxy

# Docker Ports
FRONTEND_PORT=3000
PROXY_PORT=3001

# Proxy Server Settings
CORS_ORIGIN=*
LOG_LEVEL=INFO
```

- Ollama runs on the host: The frontend (running in your browser) connects directly to Ollama on `localhost:11434`
- VITE_* variables are build-time: Changes require rebuilding the frontend container
- Proxy runs in Docker: Handles RSS feed requests to avoid CORS issues
```bash
# Start services
docker-compose up -d

# View logs
docker-compose logs -f

# View logs for a specific service
docker-compose logs -f frontend
docker-compose logs -f proxy

# Stop services
docker-compose down

# Rebuild after .env changes
docker-compose build frontend
docker-compose up -d

# Rebuild everything
docker-compose up -d --build

# Check container status
docker-compose ps

# View resource usage
docker stats
```

You can also build and run the containers individually:

```bash
# Build frontend
docker build -f Dockerfile.frontend -t rss-ai-reader-frontend .

# Run frontend
docker run -d \
  -p 3000:80 \
  --name rss-ai-frontend \
  rss-ai-reader-frontend

# Build proxy
docker build -f Dockerfile.proxy -t rss-ai-reader-proxy .

# Run proxy
docker run -d \
  -p 3001:3001 \
  --name rss-ai-proxy \
  -e CORS_ORIGIN=* \
  rss-ai-reader-proxy
```

- Verify Ollama is running:

```bash
curl http://localhost:11434/api/tags
```

- Check the model exists:

```bash
ollama list
```

- Verify the `.env` configuration:

```bash
cat .env | grep VITE_OLLAMA
```

- Rebuild the frontend (VITE_* vars are build-time):

```bash
docker-compose build frontend
docker-compose up -d
```
If you see CSP errors in the browser console:

- The `nginx.conf` includes proxy URLs in the CSP
- Rebuild the frontend after any CSP changes:

```bash
docker-compose build frontend
```
- Check the proxy is running:

```bash
curl http://localhost:3001/health
```

- Check the proxy logs:

```bash
docker-compose logs proxy
```

- Verify the proxy port mapping:

```bash
docker-compose ps
```

- Ensure the model is pulled:

```bash
ollama pull gemma3:1b   # or your model name
```

- Verify the model name matches `.env`:

```bash
# Check .env
grep VITE_OLLAMA_MODEL .env

# Check available models
ollama list
```

- Rebuild the frontend (the model name is embedded at build time):

```bash
docker-compose build frontend
docker-compose up -d
```
For production deployments:
- Set a specific CORS origin (not `*`):

```bash
CORS_ORIGIN=https://yourdomain.com
```

- Use HTTPS with a reverse proxy (Traefik, nginx, etc.)
- Configure Traefik labels in `docker-compose.yml` for your domain
- Set appropriate rate limits based on expected traffic
- Monitor container health:

```bash
docker-compose ps
docker stats
```
```
┌─────────────────────────────────────────┐
│ Host PC                                 │
│  ┌──────────────┐  ┌─────────────────┐  │
│  │   Browser    │  │     Ollama      │  │
│  │  (Frontend)  │──│  (Port 11434)   │  │
│  └──────────────┘  └─────────────────┘  │
│         │                               │
│         │ HTTP                          │
│         ▼                               │
│  ┌─────────────────────────────────┐    │
│  │       Docker Containers         │    │
│  │  ┌──────────┐  ┌──────────────┐ │    │
│  │  │Frontend  │  │    Proxy     │ │    │
│  │  │(nginx)   │  │  (Express)   │ │    │
│  │  │:3000     │  │  :3001       │ │    │
│  │  └──────────┘  └──────────────┘ │    │
│  └─────────────────────────────────┘    │
└─────────────────────────────────────────┘
```
Key Points:
- Frontend runs in browser (on host) → connects to Ollama (on host)
- Proxy runs in Docker → handles RSS feed requests
- All services communicate via exposed ports
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rss-ai-reader
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rss-ai-reader
  template:
    metadata:
      labels:
        app: rss-ai-reader
    spec:
      containers:
        - name: frontend
          image: rss-ai-reader:latest
          ports:
            - containerPort: 3000
          env:
            - name: VITE_OLLAMA_API_URL
              value: "http://localhost:11434"
            - name: PORT
              value: "3000"
```

- Rate Limiting: Default 100 requests/minute per IP
- SSRF Protection: Private IP ranges blocked, internal hostnames blocked
- Response Size Limits: Default 10MB to prevent memory exhaustion
- CORS Configuration: Set a specific domain (not `*`) in production
- Error Handling: Errors are sanitized (no internal details leaked)
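The SSRF protection above can be sketched as a pre-fetch guard: reject non-HTTP schemes and hostnames in common private ranges before the proxy fetches a feed. This is hypothetical code, not the actual proxy implementation (which may also resolve DNS before checking):

```typescript
// Illustrative SSRF guard sketch: block URLs pointing at private/internal targets.
const PRIVATE_PATTERNS: RegExp[] = [
  /^localhost$/i,
  /^127\./,                  // IPv4 loopback
  /^10\./,                   // RFC 1918
  /^192\.168\./,             // RFC 1918
  /^172\.(1[6-9]|2\d|3[01])\./, // RFC 1918: 172.16.0.0–172.31.255.255
  /^169\.254\./,             // link-local
  /^::1$/,                   // IPv6 loopback
];

function isBlockedTarget(rawUrl: string): boolean {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    return true; // malformed input: refuse to fetch
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return true;
  return PRIVATE_PATTERNS.some((p) => p.test(url.hostname));
}
```

Checking the literal hostname alone is not a complete defense (a public DNS name can resolve to a private address), which is why production guards typically also validate the resolved IP.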
- Environment variables validated and configured
- HTTPS enabled (reverse proxy with TLS)
- CORS origin set to specific domain (not '*')
- Trusted proxy IPs configured
- Rate limiting configured appropriately
- Security headers set (HSTS, CSP, etc.)
- Health check endpoint accessible and tested
- Graceful shutdown tested
- Memory limits configured
- Request timeout configured
- Error tracking configured
- Logging configured (LOG_LEVEL=INFO or WARN for production)
- Structured logs being collected
- Monitoring set up
- Alerting configured
- Rate Limiting Memory Leak: TTL-based cleanup with periodic backup cleanup
- Secure Request ID Generation: Uses `crypto.randomUUID()` instead of `Math.random()`
- IP Spoofing Protection: Validates trusted proxies, uses the rightmost entry in the chain
- CORS Credentials Issue: Only enable credentials when origin is not wildcard
- Health Check Improvements: Tests actual RSS feed with proper validation
- Environment Variable Validation: Validates all env vars with proper defaults
- Graceful Shutdown: Tracks connections, waits for completion, max 10s timeout
- Memory Monitoring: Warns when memory > 80% threshold
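The rate-limiter fix above (TTL-based cleanup with a periodic backup sweep) can be sketched as a fixed-window counter keyed by IP. This is an illustrative TypeScript sketch, not the actual `server.js` code:

```typescript
// Illustrative fixed-window rate limiter with TTL-based cleanup.
interface WindowEntry {
  count: number;
  resetAt: number; // timestamp (ms) when this window expires
}

class RateLimiter {
  private windows = new Map<string, WindowEntry>();
  private windowMs: number;
  private maxRequests: number;

  constructor(windowMs = 60_000, maxRequests = 100) {
    this.windowMs = windowMs;     // cf. RATE_LIMIT_WINDOW
    this.maxRequests = maxRequests; // cf. RATE_LIMIT_MAX_REQUESTS
  }

  // Returns true if the request is allowed, false if the window is exhausted.
  allow(ip: string, now = Date.now()): boolean {
    const entry = this.windows.get(ip);
    if (!entry || now >= entry.resetAt) {
      // Expired entries are replaced on access (the TTL part).
      this.windows.set(ip, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.maxRequests;
  }

  // Periodic backup sweep: drop expired windows so the Map cannot grow forever
  // from IPs that never return (run e.g. on a setInterval).
  cleanup(now = Date.now()): void {
    for (const [ip, entry] of this.windows) {
      if (now >= entry.resetAt) this.windows.delete(ip);
    }
  }
}
```

The on-access expiry keeps the hot path O(1), while the sweep bounds memory for one-off clients — the leak the fix list refers to.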
```
src/
├── components/           # React components
│   ├── Header.tsx        # App header with refresh
│   ├── FeedList.tsx      # RSS feed list display
│   ├── ArticleDetail.tsx # Article detail view
│   └── PromptSelector.tsx # AI prompt management
├── services/             # API services
│   ├── rssService.ts     # RSS feed parsing
│   └── aiService.ts      # Ollama AI integration
├── types/                # TypeScript type definitions
│   └── index.ts
├── App.tsx               # Main app component
├── main.tsx              # App entry point
└── index.css             # Global styles
```
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request