# Instagram Recipe Extractor

Extract recipes from Instagram Reels and save them as organized Markdown files.

## Features

- 📥 Paste any public Instagram Reel URL with a recipe
- 🤖 AI extracts ingredients (with checkboxes) and instructions
- ⚡ Real-time streaming as the recipe is generated
- 💾 Save recipes as Markdown files
- 📚 Browse your saved recipe collection
## Prerequisites

Before running this project, you need:

```bash
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the llama3.2 model
ollama pull llama3.2

# Start Ollama (if not running)
ollama serve

# Install yt-dlp
pip install yt-dlp
```

- Node.js 18+
- Python 3.10+
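To confirm Ollama is reachable and the model is installed, you can query Ollama's `/api/tags` endpoint. This is a stdlib-only sketch; the `check_ollama` and `has_model` helpers are illustrative, not part of this repo:

```python
import json
import urllib.request


def check_ollama(base_url: str = "http://localhost:11434") -> dict:
    """Fetch the list of installed models from a running Ollama server."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return json.load(resp)


def has_model(tags_json: dict, name: str) -> bool:
    """Return True if any installed model's name starts with `name` (e.g. 'llama3.2')."""
    return any(m["name"].startswith(name) for m in tags_json.get("models", []))
```

For example, `has_model(check_ollama(), "llama3.2")` should return `True` after the `ollama pull` above.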
## Quick Start (Docker)

The fastest way to get running:

```bash
# Make sure Ollama is running with llama3.2
ollama serve &
ollama pull llama3.2

# Start the app
docker-compose up --build
```

Then visit http://localhost:3000.

**Note:** Ollama must run on the host machine (not in Docker) for GPU access.
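One common way to let a containerized backend reach host-only Ollama is Docker's `host-gateway` alias. The service name and `OLLAMA_HOST` variable below are assumptions about this project's `docker-compose.yml`, shown only as a sketch:

```yaml
services:
  backend:
    build: ./backend
    extra_hosts:
      - "host.docker.internal:host-gateway"   # make host.docker.internal resolve on Linux too
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434
```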
## Manual Setup

```bash
git clone https://github.com/yourusername/InstagramRecipeExtractor.git
cd InstagramRecipeExtractor
```

### Backend

```bash
cd backend

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Start the server
uvicorn main:app --reload --port 8000
```

### Frontend

```bash
cd frontend

# Install dependencies
npm install

# Start the dev server
npm run dev
```

Visit http://localhost:5173 in your browser.
## Usage

1. Find an Instagram Reel with a recipe
2. Copy the URL (e.g., `https://www.instagram.com/reel/ABC123/`)
3. Paste it into the input field
4. Click **Analyze** and watch the recipe stream in
5. Click **Save as Markdown** to keep it
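As a rough sketch of what "ingredients with checkboxes" means in the saved file, a recipe could be rendered to Markdown like this (the `recipe_to_markdown` helper is hypothetical, not this repo's actual code):

```python
def recipe_to_markdown(title: str, ingredients: list[str], steps: list[str]) -> str:
    """Render a recipe as Markdown: checkbox ingredient list, numbered instructions."""
    lines = [f"# {title}", "", "## Ingredients", ""]
    lines += [f"- [ ] {item}" for item in ingredients]
    lines += ["", "## Instructions", ""]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    return "\n".join(lines)
```

For example, `recipe_to_markdown("Pasta", ["200 g spaghetti"], ["Boil the pasta."])` produces a document starting with `# Pasta` and an unchecked `- [ ] 200 g spaghetti` item.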
## Project Structure

```
InstagramRecipeExtractor/
├── backend/              # FastAPI Python server
│   ├── main.py           # API endpoints
│   ├── scraper.py        # yt-dlp wrapper
│   └── requirements.txt
├── frontend/             # React + Vite + TypeScript
│   └── src/
│       ├── App.tsx       # Main extraction page
│       ├── RecipesPage.tsx
│       └── api.ts        # API client
└── recipes/              # Saved recipes (auto-created)
```
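The real-time streaming works by reading Ollama's `/api/generate` endpoint line by line (`stream: true` returns one JSON object per line). A stdlib-only sketch of that loop — the prompt wording and function names are assumptions, not this repo's actual `main.py`:

```python
import json
import urllib.request


def build_prompt(caption: str) -> str:
    """Wrap the Reel caption in an extraction instruction for the LLM."""
    return (
        "Extract the recipe from this Instagram caption as Markdown with a "
        "checkbox ingredient list and numbered instructions:\n\n" + caption
    )


def stream_recipe(caption: str, base_url: str = "http://localhost:11434"):
    """Yield text chunks from Ollama as they are generated."""
    body = json.dumps(
        {"model": "llama3.2", "prompt": build_prompt(caption), "stream": True}
    ).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:  # NDJSON: one chunk per line
            chunk = json.loads(line)
            if not chunk.get("done"):
                yield chunk.get("response", "")
```

A FastAPI endpoint could wrap this generator in a `StreamingResponse` so the frontend sees tokens as they arrive.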
## Troubleshooting

| Issue | Solution |
|---|---|
| "yt-dlp not found" | Install with `pip install yt-dlp` and ensure it's in `PATH` |
| "Could not extract description" | Make sure the Instagram Reel is public |
| Ollama connection error | Run `ollama serve` to start the Ollama server |
| CORS errors | Ensure the backend is running on port 8000 |
## License

MIT