Open-source AI-powered search engine. (Perplexity Clone)

Run local LLMs (llama3, gemma, mistral, phi3), custom LLMs through LiteLLM, or use cloud models (Groq/Llama3, OpenAI/gpt-4o).

Demo answering questions with phi3 on my M1 MacBook Pro:


Please feel free to contact me on Twitter or create an issue if you have any questions.

💻 Live Demo (Cloud models only)

📖 Overview

πŸ›£οΈ Roadmap

  • Add support for local LLMs through Ollama
  • Docker deployment setup
  • Add support for SearXNG, eliminating the need for external search API dependencies
  • Create a pre-built Docker Image
  • Add support for custom LLMs through LiteLLM
  • Chat History
  • Chat with local files

πŸ› οΈ Tech Stack


  • Search with multiple search providers (Tavily, SearXNG, Serper, Bing)
  • Answer questions with cloud models (OpenAI/gpt-4o, OpenAI/gpt-3.5-turbo, Groq/Llama3)
  • Answer questions with local models (llama3, mistral, gemma, phi3)
  • Answer questions with any custom LLM through LiteLLM

πŸƒπŸΏβ€β™‚οΈ Getting Started Locally


  • Docker
  • Ollama (if running local models)
    • Download any of the supported models: llama3, mistral, gemma, phi3
    • Start the Ollama server: ollama serve
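The Ollama steps above can be sketched as shell commands (phi3 is just one of the supported models; port 11434 is Ollama's default):

```shell
# Pull one of the supported models (phi3 chosen here as an example)
ollama pull phi3

# Start the Ollama server; by default it listens on http://localhost:11434
ollama serve
```

These commands require a local Ollama install, so run them on the host before starting the app.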

Get API Keys

Quick Start:

docker run \
    -p 8000:8000 -p 3000:3000 -p 8080:8080 \
    --add-host=host.docker.internal:host-gateway \
    ghcr.io/rashadphz/farfalle:latest


  • SEARCH_PROVIDER: The search provider to use. Can be tavily, serper, bing, or searxng.
  • OPENAI_API_KEY: Your OpenAI API key. Not required if you are using Ollama.
  • TAVILY_API_KEY: Your Tavily API key. Required if SEARCH_PROVIDER is tavily.
  • SERPER_API_KEY: Your Serper API key. Required if SEARCH_PROVIDER is serper.
  • BING_API_KEY: Your Bing API key. Required if SEARCH_PROVIDER is bing.
  • GROQ_API_KEY: Your Groq API key. Required for Groq cloud models.
  • SEARXNG_BASE_URL: The base URL of your SearXNG instance. Required if SEARCH_PROVIDER is searxng.

Add any env variable to the docker run command like so:

docker run \
    -e ENV_VAR_1=value1 -e ENV_VAR_2=value2 \
    -p 8000:8000 -p 3000:3000 -p 8080:8080 \
    --add-host=host.docker.internal:host-gateway \
    ghcr.io/rashadphz/farfalle:latest

Wait for the app to start then visit http://localhost:3000.

Or follow the instructions below to clone the repo and run the app locally.

1. Clone the Repo

git clone https://github.com/rashadphz/farfalle.git
cd farfalle

2. Add Environment Variables

touch .env

Add the following variables to the .env file:

Search Provider

You can use Tavily, SearXNG, Serper, or Bing as the search provider.

SearXNG (No API Key Required)

SEARCH_PROVIDER=searxng


Tavily (Requires API Key)

TAVILY_API_KEY=...
SEARCH_PROVIDER=tavily


Serper (Requires API Key)

SERPER_API_KEY=...
SEARCH_PROVIDER=serper


Bing (Requires API Key)

BING_API_KEY=...
SEARCH_PROVIDER=bing


# Cloud Models
OPENAI_API_KEY=...
GROQ_API_KEY=...
# See the documentation for the full list of supported models
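Putting the pieces together, a minimal .env for the SearXNG provider (no API key needed) might look like this; the SEARXNG_BASE_URL value is an assumption matching the port 8080 exposed in the docker commands above:

```shell
# Write a minimal .env; SEARXNG_BASE_URL assumes the bundled SearXNG on port 8080
cat > .env <<'EOF'
SEARCH_PROVIDER=searxng
SEARXNG_BASE_URL=http://localhost:8080
EOF

# Confirm the provider was set
grep 'SEARCH_PROVIDER' .env
```

Swap in the Tavily/Serper/Bing variables shown above if you prefer a hosted search API.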

3. Run Containers

This requires Docker Compose version 2.22.0 or later.

docker compose -f docker-compose.dev.yaml up -d

Visit http://localhost:3000 to view the app.
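If the app doesn't come up, these standard Docker Compose commands can help diagnose (service names will match whatever the compose file defines):

```shell
# List the compose services and their current state
docker compose ps

# Tail recent logs if nothing responds on port 3000
docker compose logs --tail=50
```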

For custom setup instructions, see the custom setup guide in the repository.

🚀 Deploy


Deploy to Render

After the backend is deployed, copy the web service URL to your clipboard. It should look something like: https://<your-service-name>.onrender.com

Use the copied backend URL in the NEXT_PUBLIC_API_URL environment variable when deploying with Vercel.
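As a sketch, the variable can also be set through the Vercel CLI (the CLI prompts for the value; production is the target environment):

```shell
# Add the backend URL as an environment variable for production deploys
vercel env add NEXT_PUBLIC_API_URL production
```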

Deploy with Vercel

And you're done! 🥳

Use Farfalle as a Search Engine

To use Farfalle as your default search engine, follow these steps:

  1. Visit the settings of your browser
  2. Go to 'Search Engines'
  3. Create a new search engine entry using this URL: http://localhost:3000/?q=%s.
  4. Add the search engine.
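The %s placeholder is filled in by the browser with the URL-encoded search query; the equivalent request from a terminal (assuming the app is running locally) looks like:

```shell
# -G sends the data as a query string; --data-urlencode handles the encoding %s would receive
curl -G "http://localhost:3000/" --data-urlencode "q=what is farfalle"
```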