πŸ” AI search engine - self-host with local or cloud LLMs


Farfalle - Videos

This fork swaps out the Images section for Videos.

Adds a Dockerfile build for using your own SearXNG container.

Open-source AI-powered search engine (a Perplexity clone).

Run local LLMs (llama3, gemma, mistral, phi3), custom LLMs through LiteLLM, or use cloud models (Groq/Llama3, OpenAI/gpt-4o).

(Demo video: farfalle-expert-search.mp4)

Please feel free to contact me on Twitter or create an issue if you have any questions.

πŸ’» Live Demo

farfalle.dev (Cloud models only)

πŸ“– Overview

πŸ›£οΈ Roadmap

  • Add support for local LLMs through Ollama
  • Docker deployment setup
  • Add support for SearXNG, eliminating the need for external search dependencies
  • Create a pre-built Docker Image
  • Add support for custom LLMs through LiteLLM
  • Chat History
  • Expert Search
  • Chat with local files

πŸ› οΈ Tech Stack

Features

  • Search with multiple search providers (Tavily, SearXNG, Serper, Bing)
  • Answer questions with cloud models (OpenAI/gpt-4o, OpenAI/gpt-3.5-turbo, Groq/Llama3)
  • Answer questions with local models (llama3, mistral, gemma, phi3)
  • Answer questions with any custom LLMs through LiteLLM
  • Search with an agent that plans and executes the search for better results

πŸƒπŸΏβ€β™‚οΈ Getting Started Locally

Prerequisites

  • Docker
  • Ollama (if running local models)
    • Download any of the supported models: llama3, mistral, gemma, phi3
    • Start the Ollama server: ollama serve
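
For example, with Ollama installed, pulling one of the supported models and starting the server looks like this (llama3 here is just one of the options listed above):

```shell
# Pull one of the supported models (llama3, mistral, gemma, phi3)
ollama pull llama3
# Start the Ollama server (listens on http://localhost:11434 by default)
ollama serve
```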

Get API Keys

Quick Start:

git clone https://github.com/rashadphz/farfalle.git
cd farfalle && cp .env-template .env

Modify .env with your API keys (optional; not required if using Ollama)
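
A minimal sketch of what .env might contain. The exact key names come from .env-template; the names below are assumptions based on the supported providers (Tavily, OpenAI, Groq), and the values are placeholders:

```
# Placeholders only -- check .env-template for the actual key names.
# Leave these unset if you are using Ollama locally.
TAVILY_API_KEY=tvly-xxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxx
GROQ_API_KEY=gsk_xxxxxxxx
```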

Start the app:

docker-compose -f docker-compose.dev.yaml up -d

Wait for the app to start, then visit http://localhost:3000.
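
If the page doesn't load, the standard docker-compose commands (not specific to this repo) can show what's happening:

```shell
# List the status of the services in this compose project
docker-compose -f docker-compose.dev.yaml ps
# Follow the logs to watch startup progress
docker-compose -f docker-compose.dev.yaml logs -f
```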

For custom setup instructions, see custom-setup-instructions.md

πŸš€ Deploy

Backend

Deploy to Render

After the backend is deployed, copy the web service URL to your clipboard. It should look something like: https://some-service-name.onrender.com.

Frontend

Use the copied backend URL in the NEXT_PUBLIC_API_URL environment variable when deploying with Vercel.
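
One way to set this is via the Vercel CLI (an assumption; the Vercel dashboard's environment-variable settings work just as well). The Render URL below is the placeholder from above, not a real deployment:

```shell
# Prompts for the value; paste the backend URL, e.g.
# https://some-service-name.onrender.com
vercel env add NEXT_PUBLIC_API_URL production
```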

Deploy with Vercel

And you're done! πŸ₯³

Use Farfalle as a Search Engine

To use Farfalle as your default search engine, follow these steps:

  1. Open your browser's settings
  2. Go to 'Search Engines'
  3. Create a new search engine entry using this URL: http://localhost:3000/?q=%s
  4. Save the new entry
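
The %s placeholder is where the browser substitutes your URL-encoded query. A quick sketch of the resulting URL, assuming the default localhost:3000 deployment (the snippet only encodes spaces; browsers do full URL encoding):

```shell
query="local llms"
# Encode spaces as %20 (a simplification of full URL encoding)
encoded=$(printf '%s' "$query" | sed 's/ /%20/g')
echo "http://localhost:3000/?q=${encoded}"
# prints: http://localhost:3000/?q=local%20llms
```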
