- Chase Jamieson
  - Question 2 code implementation
  - BioASQ dataset study
- Chia Wei Hsu
  - Question 1 code implementation
  - Ollama backend implementation
  - Project status tracking
  - Text UI, merged code environment maintenance
- Satvik Mudgal
  - Question 3 code implementation
  - Streamlit user interface implementation
- This bundle requires conda.
- The conda environment is listed in the file environment.yml.
- Pip dependencies are listed in the file requirements.txt.
- If you wish to use Ollama as the LM backend, you must install Ollama first:
curl -fsSL https://ollama.com/install.sh | sh   # Linux
brew install ollama                             # macOS
- Then start the server:
ollama serve
You can download prebuilt embeddings from the releases page to avoid computing them on your machine:
https://github.com/planetaska/rag6/releases/tag/embeddings
After downloading, unzip the file and place the ragqa_bge_index directory (and any additional embeddings) under embeddings/.
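As a quick sanity check after unzipping, a small sketch like the following (assuming the embeddings/ layout described above) can confirm the index directory landed where the program expects it:

```python
from pathlib import Path


def find_indexes(embed_dir: str = "embeddings") -> list[str]:
    """Return the names of index directories found under embeddings/,
    e.g. ['ragqa_bge_index'] after unzipping the release asset."""
    root = Path(embed_dir)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if p.is_dir())


if __name__ == "__main__":
    found = find_indexes()
    if "ragqa_bge_index" in found:
        print("ragqa_bge_index is in place")
    else:
        print(f"ragqa_bge_index not found; directories present: {found}")
```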
- Create a local .env file from the example and add your credentials
cp .env.example .env
# Then update the .env file with your own keys
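If you prefer not to add a dependency for this, a minimal .env loader can be sketched in plain Python. The exact key names depend on your .env.example; the parsing below only assumes the common KEY=VALUE format with optional # comments:

```python
import os


def load_env(path: str = ".env") -> dict[str, str]:
    """Minimal .env loader: parse KEY=VALUE lines (ignoring blanks and
    # comments) and export them into os.environ without overwriting
    variables that are already set."""
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip().strip('"').strip("'")
            loaded[key] = value
            os.environ.setdefault(key, value)
    return loaded
```

Libraries such as python-dotenv do the same job more robustly; this is only a dependency-free sketch.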
- Rebuild the environment:
# rebuild env
conda env create -f environment.yml
# activate
conda activate rag6
# run the program
python main.py
- (Optional) If you wish to use Ollama locally:
ollama serve
# Remember to pull the llama model in another terminal:
ollama pull llama3.2:3b
- Navigate to the root directory /rag6
- Use conda to activate the virtual environment and install dependencies
conda activate rag6
- Use the following command to run the UI:
streamlit run streamlit_app.py
- For each dataset, the model will take time to initialize and embed the vectors.
- After the initialization, ask your question.