
Local RAG

Chat with your documents, fully local.

Running locally on macOS

  1. Install ollama
  2. ollama pull nomic-embed-text
  3. ollama pull llama3
  4. git clone https://github.com/AD1616/rag
  5. cd rag/local_scripts
  6. pip install -r requirements.txt
  7. chmod +x kill.sh
  8. ./kill.sh
  9. chmod +x start.sh
  10. ./start.sh

Note that if it says "address already in use", that likely means ollama is already running. You can always quit ollama from the Mac menu bar so that you can start and stop it from the command line. Make sure to run step 10 (./start.sh) again if you do this.


Your current terminal is now running ollama and will show any requests made to it. To continue with the next steps, keep this terminal running and open a new terminal window. Navigate to the directory where the cloned repository is located.

  1. Upload PDFs to the data directory.
  2. python dense_embeddings.py
  3. python sparse_embedding.py

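The two scripts above presumably precompute the dense (nomic-embed-text) and sparse indexes that retrieval draws from at query time. The repo's internals aren't shown here, but the core retrieval idea can be sketched in plain Python; all names, the toy sparse score, and the fusion weight below are illustrative, not the repo's actual code:

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def sparse_score(query, chunk):
    # Toy sparse score: fraction of query terms that appear in the chunk.
    q_terms = set(query.lower().split())
    c_terms = set(chunk.lower().split())
    return len(q_terms & c_terms) / len(q_terms) if q_terms else 0.0

def retrieve(query_vec, query_text, chunks, k=2, alpha=0.5):
    # chunks: list of (text, dense_vector) pairs, roughly what the two
    # embedding scripts might store. Dense and sparse scores are fused
    # with weight alpha, and the top-k chunk texts are returned.
    scored = [
        (alpha * cosine(query_vec, vec)
         + (1 - alpha) * sparse_score(query_text, text), text)
        for text, vec in chunks
    ]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

chunks = [
    ("ollama serves local language models", [0.9, 0.1]),
    ("pdf files live in the data directory", [0.1, 0.9]),
]
print(retrieve([0.85, 0.15], "local language models", chunks, k=1))
# → ['ollama serves local language models']
```

Real embedding vectors from nomic-embed-text have hundreds of dimensions, and the repo's sparse scoring may be BM25-style rather than this term-overlap toy, but the rank-and-fuse shape is the same.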
If all of the above was done properly, you can now run:

python query.py <query>

where <query> is your question, for example:

python query.py "Some query relevant to your documents."

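Under the hood, a RAG query script like this typically stuffs the retrieved chunks into the prompt it sends to llama3 through ollama. A minimal, hypothetical sketch of that assembly step (the repo's actual prompt template may differ):

```python
def build_prompt(question, retrieved_chunks):
    # Join the retrieved document chunks into a numbered context block,
    # then ask the model to answer using only that context.
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "Where do the PDFs go?",
    ["Upload PDFs to the data directory.", "Embeddings are rebuilt by two scripts."],
)
print(prompt)
```

The assembled prompt would then be posted to ollama's local HTTP endpoint (http://localhost:11434 by default), which is why start.sh must be running first.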
When finished, run:

./local_scripts/kill.sh

Running after initial setup

  1. Upload PDFs to the data directory.
  2. ./local_scripts/start.sh
  3. python query.py "Some query relevant to your documents."
  4. ./local_scripts/kill.sh
