This example provides an interface for asking questions of a PDF document. It is a ready-to-use, 100% local setup and requires:
- Ollama for macOS installed
- Docker Desktop with 12 GB of RAM allocated
# Alternative: install dependencies directly with pip
# pip install -r requirements.txt
# or import an existing requirements.txt into pipenv
# pipenv install -r requirements.txt
# (regenerate requirements.txt from the Pipfile: pipenv requirements > requirements.txt)
# Set up and activate the virtualenv
pipenv shell
# Install from Pipfile
pipenv install
# Pull the models used by this example
ollama pull mistral
ollama pull llama2
# Verify that both models are available
ollama list
# (Optional) Chat with mistral interactively; type /bye to exit
ollama run mistral
# Generate a response (streamed as JSON lines by default)
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt":"Why is the sky blue?"
}'
# Or request a single, non-streamed response
curl -X POST http://localhost:11434/api/generate -d '{
"model": "mistral",
"prompt": "Why is the sky blue?",
"stream": false
}'
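The same non-streaming request can be made from Python. A minimal sketch, assuming only the requests library is installed:

import requests

# One-shot completion against the local Ollama server (non-streaming).
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
response.raise_for_status()
# With "stream": false the reply is one JSON object; "response" holds the text.
print(response.json()["response"])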
# (Or) Chat with a model
curl http://localhost:11434/api/chat -d '{
"model": "mistral",
"messages": [
{ "role": "user", "content": "why is the sky blue?" }
]
}'
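Because the chat endpoint takes a list of messages, multi-turn conversations are straightforward. A sketch in Python (again assuming requests; with "stream": false the reply arrives as one JSON object whose "message" field is the assistant turn):

import requests

messages = [{"role": "user", "content": "why is the sky blue?"}]
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={"model": "mistral", "messages": messages, "stream": False},
    timeout=120,
)
resp.raise_for_status()
reply = resp.json()["message"]  # {"role": "assistant", "content": "..."}
print(reply["content"])
# Append the assistant turn so a follow-up question keeps its context.
messages.append(reply)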
Start Ollama via Docker if you are not running it via the CLI:
docker compose up
# The page should respond with "Ollama is running"
open http://localhost:11434/
Now you can run a model such as mistral inside the container:
docker exec -it ollama ollama run mistral
Test whether the base model responds:
curl -X POST http://localhost:11434/api/generate -d '{
"model": "mistral",
"prompt": "Why is the sky blue?",
"stream": false
}'
Run the app:
pipenv run python main.py
# (Or) Activate the virtual environment, then run the file
pipenv shell
python main.py
A prompt will appear where you can ask questions about the PDF:
Query: How many locations does WeWork have?
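How main.py answers such a query is not shown here; the following is a minimal sketch of the idea, assuming pypdf and requests as dependencies (document.pdf, the prompt template, and the helper names are hypothetical, and a real implementation may embed chunks into a vector store instead of passing the whole PDF as context):

import requests
from pypdf import PdfReader

OLLAMA_URL = "http://localhost:11434/api/generate"  # local Ollama server
MODEL = "mistral"

def load_pdf_text(path: str) -> str:
    """Extract plain text from every page of the PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def ask(question: str, context: str) -> str:
    """Send the question plus the document context to the local model."""
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    context = load_pdf_text("document.pdf")  # hypothetical input PDF
    while True:
        question = input("Query: ")
        if not question:
            break
        print(ask(question, context))

Stuffing the full text into the prompt only works for small PDFs; for larger documents, chunking plus embedding-based retrieval keeps the prompt within the model's context window.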