
LangChain Document QA

This example provides a question-answering interface over a PDF document.

A ready-to-use, 100% local setup.

Prerequisites

  1. Ollama for macOS installed.
  2. Docker Desktop with 12 GB of RAM allocated.

Setup

# (optional) alternatives: install with pip or regenerate requirements.txt
# pip install -r requirements.txt
# pipenv install -r requirements.txt
# pipenv requirements > requirements.txt

# create and activate the virtualenv
pipenv shell
# install dependencies from the Pipfile
pipenv install

Pull some models

ollama pull mistral
ollama pull llama2
# verify
ollama list

Run a model

ollama run mistral

Call the REST API

# Generate a response (streamed by default)
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'

# Generate a single non-streamed response
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

# (OR) Chat with a model
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
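
The same endpoints can also be called from Python. The sketch below is only an illustration, assuming the requests package is available (it is not necessarily in this repo's Pipfile):

import requests

# Ask the local Ollama API for a single, non-streamed completion
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text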

Run

Start Ollama

Start Ollama via Docker if you are not already running it via the CLI.

docker compose up
open http://localhost:11434/

Run a model

Now you can run a model like mistral inside the container.

docker exec -it ollama ollama run mistral

Verify

Test whether the base model responds:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Start RAG

pipenv run python main.py
# (or) activate the virtual environment, then run the script
pipenv shell
python main.py

A prompt will appear where you can ask questions about the document:

Query: How many locations does WeWork have?
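
main.py itself is not reproduced in this README. As a rough orientation, a LangChain PDF-QA script against a local Ollama model could be wired up along the lines below; the package imports, PDF path, and model choices are assumptions for illustration, not the repo's actual code:

# Hypothetical sketch only; the real main.py may differ.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load and chunk the PDF (placeholder path, not necessarily the repo's file)
docs = PyPDFLoader("data/wework.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks locally and index them in an in-memory Chroma store
vectordb = Chroma.from_documents(chunks, OllamaEmbeddings(model="mistral"))

# Retrieval-augmented QA chain on top of the local mistral model
qa = RetrievalQA.from_chain_type(llm=Ollama(model="mistral"), retriever=vectordb.as_retriever())

# Simple interactive loop: empty input exits
while True:
    query = input("Query: ")
    if not query:
        break
    print(qa.invoke(query)["result"])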
