RSS AI Reader

A privacy-first RSS reader with AI-powered summaries, inspired by Hacker News design. Built with React, TypeScript, and Ollama integration for local, private AI processing.

🔒 Privacy & Security

This RSS reader is designed with privacy as the top priority:

  • No Data Collection: No analytics, tracking, or data collection
  • Local Storage Only: All your feeds, prompts, and settings are stored locally in your browser
  • Anonymous AI Requests: AI summaries use anonymous requests with no user identification
  • Self-Hosted: Runs entirely on your own infrastructure, ensuring no data leaves your control
  • Open Source: Full source code available for transparency and audit
  • No Third-Party Tracking: No Google Analytics, Facebook pixels, or other tracking scripts

Features

  • 📰 RSS Feed Parsing - Read RSS feeds with clean, Hacker News-style interface
  • 🤖 AI Summaries - Generate custom AI summaries using local Ollama models (100% private, no API keys!)
  • ⚙️ Customizable Prompts - Create and manage custom AI summary prompts
  • 🎨 Hacker News UI - Clean, minimalist design inspired by Hacker News
  • 📱 Responsive Design - Works on desktop, tablet, and mobile devices
  • 🔄 Real-time Updates - Refresh feeds and generate summaries on demand

Getting Started

Prerequisites

  • Node.js 18+
  • npm or yarn
  • Ollama installed and running locally (Download Ollama)

Installation

  1. Clone the repository:
git clone <your-repo-url>
cd rss-ai-reader
  2. Install dependencies:
npm install
  3. Set up environment variables:
cp env.example .env
  4. Install and start Ollama:
# Install Ollama from https://ollama.ai
# Then pull a model (recommended: phi3:mini for small PCs)
ollama pull phi3:mini
  5. Configure Ollama in .env (optional, defaults are fine):
VITE_OLLAMA_API_URL="http://localhost:11434"
VITE_OLLAMA_MODEL="phi3:mini"
  6. Start the development server:
npm run dev
  7. Open your browser and navigate to http://localhost:3000

Usage

Reading RSS Feeds

The app starts by loading the Hacker News RSS feed. You can:

  • Click on any article to view details
  • Use the refresh button to reload the feed
  • View article summaries and full content

AI Summaries

  1. Select an article from the feed
  2. Choose a summary prompt from the available options
  3. Click "Generate AI Summary" to create a custom summary
  4. Add your own custom prompts using the "Add Custom" button
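
Internally, each summary is a single HTTP request to the local Ollama API. A minimal sketch of such a call, assuming Ollama's standard /api/generate endpoint (the project's aiService.ts may structure this differently):

// Sketch of a summary request against a local Ollama instance.
// Endpoint and payload follow Ollama's documented /api/generate API.
export async function summarize(prompt: string, article: string): Promise<string> {
  const baseUrl = import.meta.env.VITE_OLLAMA_API_URL ?? 'http://localhost:11434';
  const response = await fetch(`${baseUrl}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: import.meta.env.VITE_OLLAMA_MODEL ?? 'phi3:mini',
      prompt: `${prompt}\n\n${article}`,  // selected prompt + article text
      stream: false,                      // one JSON object instead of a stream
    }),
  });
  if (!response.ok) throw new Error(`Ollama request failed: ${response.status}`);
  const data = await response.json();
  return data.response;                   // the generated summary text
}

Because the request targets localhost:11434, the article text never leaves your machine.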

Custom Prompts

Create custom AI summary prompts for different use cases:

  • Technical summaries for developers
  • Business summaries for executives
  • Casual summaries for general readers
  • Or any other specific format you prefer

Technology Stack

  • Frontend: React 18 + TypeScript
  • Build Tool: Vite
  • Styling: Tailwind CSS
  • RSS Parsing: fast-xml-parser
  • AI Integration: Ollama (local, privacy-focused)
  • Icons: Lucide React
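
As an illustration of the parsing layer, reading a plain RSS 2.0 document with fast-xml-parser takes only a few lines. A sketch (rssService.ts likely handles more feed variants than this):

import { XMLParser } from 'fast-xml-parser';

// Sketch: turn an RSS 2.0 document into a list of {title, link} items.
const parser = new XMLParser({ ignoreAttributes: false });

export function parseRss(xml: string): { title: string; link: string }[] {
  const doc = parser.parse(xml);
  const items = doc?.rss?.channel?.item ?? [];
  // A channel with a single item parses as an object, not an array.
  return (Array.isArray(items) ? items : [items]).map((item: any) => ({
    title: item.title ?? '',
    link: item.link ?? '',
  }));
}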

Privacy Features

This app implements comprehensive privacy features to protect your data and browsing habits.

✅ Implemented Features

1. Removes Pixel Trackers

  • Location: src/utils/htmlSanitizer.ts
  • Automatically detects and removes tracking pixels from RSS feed content
  • Removes 1x1 images, very small images (≤3x3), and images from common tracking domains
  • Domains blocked: tracking, analytics, doubleclick, googlesyndication, facebook.com/tr, beacon
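
A minimal sketch of this kind of filter, assuming the feed HTML is parsed with the browser's DOMParser (the actual rules in htmlSanitizer.ts may differ):

// Sketch: drop tracking pixels from feed HTML.
const TRACKING_HOSTS = ['tracking', 'analytics', 'doubleclick', 'googlesyndication', 'facebook.com/tr', 'beacon'];

export function removeTrackingPixels(html: string): string {
  const doc = new DOMParser().parseFromString(html, 'text/html');
  doc.querySelectorAll('img').forEach((img) => {
    const w = Number(img.getAttribute('width'));
    const h = Number(img.getAttribute('height'));
    const tiny = w > 0 && w <= 3 && h > 0 && h <= 3;  // 1x1 up to 3x3
    const tracker = TRACKING_HOSTS.some((host) => img.src.includes(host));
    if (tiny || tracker) img.remove();
  });
  return doc.body.innerHTML;
}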

2. Strips Tracking Parameters from URLs

  • Location: src/utils/privacy.ts → stripTrackingParams()
  • Automatically removes tracking parameters from all URLs in feeds and links
  • Parameters stripped: utm_source, utm_medium, utm_campaign, utm_term, utm_content, fbclid, gclid, ref, referrer, source, campaign_id, affiliate_id, _ga, _gid, mc_cid, mc_eid, and many more
  • Applied to: All RSS feed item links automatically
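
Conceptually, stripTrackingParams() parses each URL and deletes the known parameters. A sketch with a shortened parameter list:

// Sketch of stripTrackingParams(); the project's parameter list is longer (see above).
const TRACKING_PARAMS = [
  'utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content',
  'fbclid', 'gclid', 'ref', 'referrer', 'source',
  'campaign_id', 'affiliate_id', '_ga', '_gid', 'mc_cid', 'mc_eid',
];

export function stripTrackingParams(rawUrl: string): string {
  try {
    const url = new URL(rawUrl);
    TRACKING_PARAMS.forEach((param) => url.searchParams.delete(param));
    return url.toString();
  } catch {
    return rawUrl;  // not an absolute URL; leave it untouched
  }
}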

3. External Links Privacy Attributes

  • All external links include:
    • rel="noopener noreferrer" - Prevents window.opener security issues
    • referrerPolicy="no-referrer" - Prevents referrer leakage
    • target="_blank" - Opens in new tab
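
In JSX, a link carrying these attributes looks like this (a representative snippet, not a specific component from the codebase):

<a
  href={article.link}
  target="_blank"               // open in a new tab
  rel="noopener noreferrer"     // no window.opener access, no Referer header
  referrerPolicy="no-referrer"  // belt-and-braces referrer suppression
>
  {article.title}
</a>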

4. HTTP Header Referrer-Policy

  • Location:
    • index.html - Meta tag: <meta name="referrer" content="no-referrer" />
    • nginx.conf - HTTP header: add_header Referrer-Policy "no-referrer" always;
  • Both the client-side meta tag and the server-side HTTP header ensure that no referrer is sent

5. YouTube nocookie.com

  • Status: ✅ Fully implemented and automatically applied
  • Location: src/utils/privacy.ts → convertYouTubeToNoCookie()
  • Automatically converts all YouTube URLs (youtube.com, youtu.be, embed URLs) to youtube-nocookie.com
  • Handles all YouTube URL formats: watch URLs, short links, embed URLs
  • Privacy Benefit: YouTube nocookie domain doesn't set cookies unless user interacts, preventing tracking
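
A sketch of what convertYouTubeToNoCookie() does for the common URL shapes (the real implementation may handle more cases):

// Sketch of convertYouTubeToNoCookie().
export function convertYouTubeToNoCookie(rawUrl: string): string {
  try {
    const url = new URL(rawUrl);
    if (url.hostname === 'youtu.be') {
      // short link -> embed on the nocookie domain
      return `https://www.youtube-nocookie.com/embed${url.pathname}`;
    }
    if (url.hostname === 'youtube.com' || url.hostname.endsWith('.youtube.com')) {
      const id = url.searchParams.get('v');
      if (id) return `https://www.youtube-nocookie.com/embed/${id}`;   // watch URL
      return rawUrl.replace(url.hostname, 'www.youtube-nocookie.com'); // embed URL
    }
    return rawUrl;  // not a YouTube link
  } catch {
    return rawUrl;
  }
}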

6. Blocks External JavaScript

  • Location: src/utils/htmlSanitizer.ts
  • Removes <script> tags completely
  • Removes dangerous elements: object, embed, iframe, form, input, button
  • Removes event handlers: onclick, onload, onerror, onmouseover
  • Security: Prevents XSS attacks and third-party tracking scripts
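
The same DOMParser pass can implement these removals. A sketch:

// Sketch of the script/element/handler stripping described above.
const BANNED_TAGS = ['script', 'object', 'embed', 'iframe', 'form', 'input', 'button'];

export function stripActiveContent(html: string): string {
  const doc = new DOMParser().parseFromString(html, 'text/html');
  BANNED_TAGS.forEach((tag) => doc.querySelectorAll(tag).forEach((el) => el.remove()));
  // Drop inline event handlers such as onclick, onload, onerror, onmouseover.
  doc.querySelectorAll('*').forEach((el) => {
    Array.from(el.attributes)
      .filter((attr) => attr.name.toLowerCase().startsWith('on'))
      .forEach((attr) => el.removeAttribute(attr.name));
  });
  return doc.body.innerHTML;
}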

📋 Privacy Checklist

  • ✅ Removes pixel trackers
  • ✅ Strips tracking parameters from URLs
  • ⚠️ Retrieves original links when feeds are sourced from FeedBurner (needs server-side)
  • ✅ Opens external links with rel="noopener noreferrer" and referrerPolicy="no-referrer"
  • ✅ Implements Referrer-Policy: no-referrer (both meta tag and HTTP header)
  • ⚠️ Provides a media proxy (not implemented - requires server-side)
  • ✅ Plays YouTube videos via youtube-nocookie.com (automatically applied to all YouTube links)
  • ✅ Supports alternative YouTube video players such as Invidious (utility available, configurable)
  • ✅ Blocks external JavaScript to prevent tracking and enhance security

🔧 Optional Enhancements

  1. FeedBurner Unwrapping: Add server-side redirect following to server.js
  2. Media Proxy: Add a media proxy endpoint to server.js or nginx config (see the sketch after this list)
  3. Invidious Support: Add a user preference to choose YouTube player (nocookie vs Invidious)
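
For the media proxy (item 2), a hypothetical Express endpoint might look roughly like this; the route name /api/media is illustrative, and Node 18+ global fetch is assumed:

import express from 'express';

const app = express();

// Hypothetical media-proxy sketch. A real endpoint would also apply the
// SSRF checks and size limits described under Security Best Practices.
app.get('/api/media', async (req, res) => {
  try {
    const target = String(req.query.url ?? '');
    if (!/^https?:\/\//i.test(target)) return res.status(400).send('invalid url');
    const upstream = await fetch(target);  // Node 18+ global fetch
    res.set('Content-Type', upstream.headers.get('content-type') ?? 'application/octet-stream');
    res.send(Buffer.from(await upstream.arrayBuffer()));  // media now served same-origin
  } catch {
    res.status(502).send('upstream fetch failed');
  }
});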

Ollama Setup & Model Guide

Quick Start

  1. Install Ollama: https://ollama.ai
  2. Pull your model:
    ollama pull phi3:mini
  3. Start Ollama (usually runs automatically):
    ollama serve
  4. Configure in .env:
    VITE_OLLAMA_API_URL="http://localhost:11434"
    VITE_OLLAMA_MODEL="phi3:mini"
    

Small Models for Any PC

🏆 Top 3 Small Models (Run on Any PC)

1. Phi-3 Mini ⭐ BEST FOR LOW-END PCs

ollama pull phi3:mini
  • Size: ~2.3GB
  • RAM needed: 4GB+ system RAM
  • Speed: ⚡⚡⚡⚡ Very fast
  • Quality: ⭐⭐⭐ Good for summaries

2. TinyLlama ⭐ SMALLEST OPTION

ollama pull tinyllama
  • Size: ~637MB!
  • RAM needed: 2GB+ system RAM
  • Speed: ⚡⚡⚡⚡⚡ Extremely fast
  • Quality: ⭐⭐ Basic but usable

3. Gemma 2:2B ⭐ BALANCED SMALL MODEL

ollama pull gemma2:2b
  • Size: ~1.4GB
  • RAM needed: 3GB+ system RAM
  • Speed: ⚡⚡⚡⚡ Very fast
  • Quality: ⭐⭐⭐ Good quality

Best Overall Models

1. Mistral ⭐ BEST OVERALL

ollama pull mistral
  • Excellent at following instructions (perfect for "20 words or less")
  • Fast inference speed
  • Good balance of quality and speed
  • Memory efficient (~4GB)

2. Llama 2 ⭐ MOST RELIABLE

ollama pull llama2
ollama pull llama2:13b  # For better quality
  • Very reliable and consistent
  • Good instruction following
  • Multiple sizes available (7b, 13b, 70b)

3. Llama 3 ⭐ NEWEST & FAST

ollama pull llama3
  • Latest and most capable
  • Very fast inference
  • Excellent instruction following
  • Better at handling long contexts

📊 Model Comparison

Model       Size    RAM Needed   Speed    Quality   Best For
tinyllama   637MB   2GB+         ⚡⚡⚡⚡⚡    ⭐⭐        Very old PCs
phi3:mini   2.3GB   4GB+         ⚡⚡⚡⚡     ⭐⭐⭐       Recommended!
gemma2:2b   1.4GB   3GB+         ⚡⚡⚡⚡     ⭐⭐⭐       Balanced option
mistral     4GB     4GB+         ⚡⚡⚡      ⭐⭐⭐⭐      Best overall
llama2      4GB     4GB+         ⚡⚡       ⭐⭐⭐⭐      Most reliable
llama3      5GB     5GB+         ⚡⚡⚡      ⭐⭐⭐⭐⭐     Latest & best

Troubleshooting

Common Issues

1. Ollama Not Running

# Check if Ollama is running
ollama serve

# Or test the API
curl http://localhost:11434/api/tags

2. Model Not Found

# List available models
ollama list

# Pull the model you need
ollama pull phi3:mini

3. Empty Response

  • Model is still loading (first use)
  • Restart Ollama: pkill ollama && ollama serve
  • Verify model works: ollama run phi3:mini "Summarize this in 10 words: AI is changing everything"

4. Content Too Short

  • The extracted article content is shorter than 50 characters
  • Try articles with more content
  • Some RSS feeds only provide titles, not full content

Model Recommendations by Use Case

  • For Speed & Quality Balance: mistral
  • For Best Quality: llama2:13b or llama3:8b
  • For Limited Memory: phi3:mini
  • For Technical Articles: codellama

Production Deployment

Quick Start

  1. Install Dependencies
npm install
  2. Build Frontend
npm run build
  3. Start Proxy Server
npm run proxy
  4. Start Frontend (Production)
npm run start

Environment Variables

Frontend (.env)

# Ollama Configuration
VITE_OLLAMA_API_URL="http://localhost:11434"
VITE_OLLAMA_MODEL="phi3:mini"

Backend (Proxy Server)

# Server Configuration
PORT=3001
NODE_ENV=production

# Performance Tuning
MAX_RESPONSE_SIZE=10485760  # 10MB
REQUEST_TIMEOUT=10000       # 10 seconds

# Rate Limiting
RATE_LIMIT_WINDOW=60000          # 1 minute
RATE_LIMIT_MAX_REQUESTS=100      # 100 requests per window

# CORS
CORS_ORIGIN=https://yourdomain.com  # Use '*' for development only

# Trusted Proxy IPs
TRUSTED_PROXY=127.0.0.1,::1

# Health Check
HEALTH_CHECK_FEED=https://rss.cnn.com/rss/edition.rss

# Logging
LOG_LEVEL=INFO  # DEBUG, INFO, WARN, ERROR
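
The proxy validates these values and falls back to the defaults above. A minimal sketch of that pattern (the helper name toInt is illustrative):

// Sketch: read and validate proxy env vars with sane defaults.
function toInt(value: string | undefined, fallback: number): number {
  const n = Number(value);
  return Number.isFinite(n) && n > 0 ? n : fallback;
}

export const config = {
  port: toInt(process.env.PORT, 3001),
  maxResponseSize: toInt(process.env.MAX_RESPONSE_SIZE, 10 * 1024 * 1024),  // 10MB
  requestTimeout: toInt(process.env.REQUEST_TIMEOUT, 10_000),               // 10 seconds
  rateLimitWindow: toInt(process.env.RATE_LIMIT_WINDOW, 60_000),            // 1 minute
  rateLimitMax: toInt(process.env.RATE_LIMIT_MAX_REQUESTS, 100),
  corsOrigin: process.env.CORS_ORIGIN ?? '*',                               // tighten in production
};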

Docker Deployment

Prerequisites

  • Docker and Docker Compose installed
  • Ollama installed and running on your host PC (not in Docker)
  • At least one Ollama model pulled (e.g., ollama pull gemma3:1b)

Quick Start with Docker Compose

  1. Clone the repository:

    git clone <your-repo-url>
    cd rss-ai-reader
  2. Set up environment variables:

    cp env.example .env
    # Edit .env if needed (defaults work for most setups)
  3. Ensure Ollama is running on your host:

    # Check if Ollama is running
    curl http://localhost:11434/api/tags
    
    # If not running, start it
    ollama serve
  4. Pull your preferred model (if not already done):

    ollama pull gemma3:1b  # or phi3:mini, mistral, etc.
  5. Build and start containers:

    docker-compose up -d --build
  6. Access the application: open http://localhost:3000 in your browser (the proxy listens on port 3001)

Docker Compose Services

The docker-compose.yml includes two services:

  • Frontend: React app served by nginx (port 3000)
  • Proxy: Express.js CORS proxy for RSS feeds (port 3001)

Configuration

Environment Variables (.env)

Key variables for Docker deployment:

# AI Configuration
VITE_AI_PROVIDER=ollama
VITE_OLLAMA_API_URL=http://localhost:11434  # Ollama on host PC
VITE_OLLAMA_MODEL=gemma3:1b  # Your preferred model

# Proxy Configuration
VITE_PROXY_URL=http://localhost:3001/api/proxy

# Docker Ports
FRONTEND_PORT=3000
PROXY_PORT=3001

# Proxy Server Settings
CORS_ORIGIN=*
LOG_LEVEL=INFO

Important Notes

  • Ollama runs on host: The frontend (running in your browser) connects directly to Ollama on localhost:11434
  • VITE_* variables are build-time: Changes require rebuilding the frontend container
  • Proxy runs in Docker: Handles RSS feed requests to avoid CORS issues
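
Because Vite inlines VITE_* values into the bundle at build time, the frontend reads them through import.meta.env; this is why a rebuild, not a restart, is required after changing them:

// These values are baked into the JavaScript bundle by Vite at build time.
const OLLAMA_URL = import.meta.env.VITE_OLLAMA_API_URL ?? 'http://localhost:11434';
const OLLAMA_MODEL = import.meta.env.VITE_OLLAMA_MODEL ?? 'phi3:mini';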

Docker Commands

# Start services
docker-compose up -d

# View logs
docker-compose logs -f

# View logs for specific service
docker-compose logs -f frontend
docker-compose logs -f proxy

# Stop services
docker-compose down

# Rebuild after .env changes
docker-compose build frontend
docker-compose up -d

# Rebuild everything
docker-compose up -d --build

# Check container status
docker-compose ps

# View resource usage
docker stats

Individual Docker Builds

You can also build and run containers individually:

Frontend Only

# Build frontend
docker build -f Dockerfile.frontend -t rss-ai-reader-frontend .

# Run frontend
docker run -d \
  -p 3000:80 \
  --name rss-ai-frontend \
  rss-ai-reader-frontend

Proxy Only

# Build proxy
docker build -f Dockerfile.proxy -t rss-ai-reader-proxy .

# Run proxy
docker run -d \
  -p 3001:3001 \
  --name rss-ai-proxy \
  -e CORS_ORIGIN=* \
  rss-ai-reader-proxy

Troubleshooting Docker Deployment

Frontend shows "Cannot connect to Ollama"

  1. Verify Ollama is running:

    curl http://localhost:11434/api/tags
  2. Check model exists:

    ollama list
  3. Verify .env configuration:

    cat .env | grep VITE_OLLAMA
  4. Rebuild frontend (VITE_* vars are build-time):

    docker-compose build frontend
    docker-compose up -d

CSP (Content Security Policy) Errors

If you see CSP errors in browser console:

  • The nginx.conf includes proxy URLs in CSP
  • Rebuild frontend after any CSP changes: docker-compose build frontend

Proxy Connection Errors

  1. Check proxy is running:

    curl http://localhost:3001/health
  2. Check proxy logs:

    docker-compose logs proxy
  3. Verify proxy port mapping:

    docker-compose ps

Model Not Found Errors

  1. Ensure model is pulled:

    ollama pull gemma3:1b  # or your model name
  2. Verify model name matches .env:

    # Check .env
    grep VITE_OLLAMA_MODEL .env
    
    # Check available models
    ollama list
  3. Rebuild frontend (model name is embedded at build time):

    docker-compose build frontend
    docker-compose up -d

Production Considerations

For production deployments:

  1. Set specific CORS origin (not *):

    CORS_ORIGIN=https://yourdomain.com
  2. Use HTTPS with reverse proxy (Traefik, nginx, etc.)

  3. Configure Traefik labels in docker-compose.yml for your domain

  4. Set appropriate rate limits based on expected traffic

  5. Monitor container health:

    docker-compose ps
    docker stats

Architecture

┌─────────────────────────────────────────┐
│  Host PC                                 │
│  ┌──────────────┐  ┌─────────────────┐ │
│  │   Browser    │  │    Ollama       │ │
│  │  (Frontend)  │──│  (Port 11434)   │ │
│  └──────────────┘  └─────────────────┘ │
│         │                                │
│         │ HTTP                           │
│         ▼                                │
│  ┌─────────────────────────────────┐   │
│  │  Docker Containers               │   │
│  │  ┌──────────┐  ┌──────────────┐ │   │
│  │  │Frontend  │  │    Proxy     │ │   │
│  │  │(nginx)   │  │  (Express)   │ │   │
│  │  │:3000     │  │    :3001     │ │   │
│  │  └──────────┘  └──────────────┘ │   │
│  └─────────────────────────────────┘   │
└─────────────────────────────────────────┘

Key Points:

  • Frontend runs in browser (on host) → connects to Ollama (on host)
  • Proxy runs in Docker → handles RSS feed requests
  • All services communicate via exposed ports

Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rss-ai-reader
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rss-ai-reader
  template:
    metadata:
      labels:
        app: rss-ai-reader
    spec:
      containers:
      - name: frontend
        image: rss-ai-reader:latest
        ports:
        - containerPort: 3000
        env:
        - name: VITE_OLLAMA_API_URL
          value: "http://localhost:11434"
        - name: PORT
          value: "3000"

Security Best Practices

  1. Rate Limiting: Default 100 requests/minute per IP
  2. SSRF Protection: Private IP ranges blocked, internal hostnames blocked
  3. Response Size Limits: Default 10MB to prevent memory exhaustion
  4. CORS Configuration: Set specific domain (not '*') in production
  5. Error Handling: Errors sanitized (no internal details leaked)

Production Checklist

Security

  • Environment variables validated and configured
  • HTTPS enabled (reverse proxy with TLS)
  • CORS origin set to specific domain (not '*')
  • Trusted proxy IPs configured
  • Rate limiting configured appropriately
  • Security headers set (HSTS, CSP, etc.)

Reliability

  • Health check endpoint accessible and tested
  • Graceful shutdown tested
  • Memory limits configured
  • Request timeout configured
  • Error tracking configured

Observability

  • Logging configured (LOG_LEVEL=INFO or WARN for production)
  • Structured logs being collected
  • Monitoring set up
  • Alerting configured

Production Fixes Applied

Critical Issues Fixed

  1. Rate Limiting Memory Leak: TTL-based cleanup with periodic backup cleanup
  2. Secure Request ID Generation: Using crypto.randomUUID() instead of Math.random()
  3. IP Spoofing Protection: Validates trusted proxies, uses rightmost entry in chain
  4. CORS Credentials Issue: Only enable credentials when origin is not wildcard
  5. Health Check Improvements: Tests actual RSS feed with proper validation
  6. Environment Variable Validation: Validates all env vars with proper defaults
  7. Graceful Shutdown: Tracks connections, waits for completion, max 10s timeout
  8. Memory Monitoring: Warns when memory > 80% threshold
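
To illustrate fixes 1 and 2, a TTL-based rate-limit store with a periodic backup sweep and crypto.randomUUID() request IDs could look like this (a sketch, not the exact server.js code):

import { randomUUID } from 'node:crypto';

const WINDOW_MS = 60_000;  // one rate-limit window
const hits = new Map<string, { count: number; expires: number }>();

export function allowRequest(ip: string, limit = 100): boolean {
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || entry.expires <= now) {
    hits.set(ip, { count: 1, expires: now + WINDOW_MS });  // fresh window (TTL-based)
    return true;
  }
  return ++entry.count <= limit;
}

// Backup sweep so stale entries cannot accumulate into a memory leak.
setInterval(() => {
  const now = Date.now();
  for (const [ip, entry] of hits) if (entry.expires <= now) hits.delete(ip);
}, WINDOW_MS).unref();

// Unpredictable request IDs instead of Math.random().
export const newRequestId = (): string => randomUUID();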

Project Structure

src/
├── components/          # React components
│   ├── Header.tsx      # App header with refresh
│   ├── FeedList.tsx    # RSS feed list display
│   ├── ArticleDetail.tsx # Article detail view
│   └── PromptSelector.tsx # AI prompt management
├── services/           # API services
│   ├── rssService.ts   # RSS feed parsing
│   └── aiService.ts    # Ollama AI integration
├── types/              # TypeScript type definitions
│   └── index.ts
├── App.tsx            # Main app component
├── main.tsx           # App entry point
└── index.css          # Global styles

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

About

This project is built with privacy, data sovereignty, and EU compliance in mind.
