
🎭 VectorVerse

Ⓒ Unveiling Vector Databases & LLM Models

VectorVerse is an exploratory platform that serves as a hub for comparing the output of multiple Vector Databases. It also lets you compare the output generated by multiple Language Model (LLM) models, including private models, so you can gain insights and make informed decisions based on a side-by-side analysis of different data sources and models.



Key Features 🎯

  • Multiple Vector Databases: VectorVerse lets you explore multiple Vector Databases and compare/observe their results.
  • Multiple LLM Models: VectorVerse lets you explore the output of multiple LLM models such as GPT-3, GPT-4, and GPT4All.
  • Chat History: maintained using SQLite
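As a rough illustration of the chat-history feature, the sketch below stores question/answer pairs in SQLite using only the Python standard library. The table name and columns here are assumptions for the example, not VectorVerse's actual schema.

```python
import sqlite3

def init_history(conn):
    # Hypothetical schema; VectorVerse's actual table layout may differ.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS chat_history (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               model TEXT NOT NULL,
               question TEXT NOT NULL,
               answer TEXT NOT NULL
           )"""
    )

def save_turn(conn, model, question, answer):
    # Parameterized insert avoids SQL injection from user-typed questions.
    conn.execute(
        "INSERT INTO chat_history (model, question, answer) VALUES (?, ?, ?)",
        (model, question, answer),
    )
    conn.commit()

def load_history(conn, model):
    # Return (question, answer) pairs for one model, oldest first.
    rows = conn.execute(
        "SELECT question, answer FROM chat_history WHERE model = ? ORDER BY id",
        (model,),
    )
    return rows.fetchall()

conn = sqlite3.connect(":memory:")
init_history(conn)
save_turn(conn, "GPT4All", "What is a vector database?", "A store for embeddings.")
print(load_history(conn, "GPT4All"))
```

Keeping history per model makes it easy to show the same question answered by different LLMs side by side.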

Current Support

Vector Databases Support

  1. Qdrant
  2. Chroma DB
  3. Elasticsearch
  4. Redis
  5. FAISS
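All of the databases above answer the same core query: "which stored embeddings are closest to this one?" A minimal pure-Python sketch of that idea (conceptually what a flat, brute-force index like FAISS's does, without any of the real indexing tricks) looks like this; the documents and embeddings are made up for the example.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    # docs: list of (text, embedding) pairs; rank by similarity to the query.
    scored = sorted(docs, key=lambda d: cosine(query, d[1]), reverse=True)
    return [text for text, _ in scored[:k]]

docs = [
    ("cats", [1.0, 0.0, 0.0]),
    ("dogs", [0.9, 0.1, 0.0]),
    ("finance", [0.0, 0.0, 1.0]),
]
print(top_k([1.0, 0.05, 0.0], docs, k=2))  # the two animal documents rank first
```

Real vector databases replace the brute-force scan with approximate indexes (HNSW, IVF, etc.) so search stays fast at millions of vectors.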

Current LLM Models Support

  1. GPT3
  2. GPT4
  3. GPT4All
  4. LLaMA

🌵 Environment Setup

Create a .env file (a template is provided as example.env) and update the following variables.

Then, download the LLM model and place it in a directory of your choice:

  • LLM: defaults to ggml-gpt4all-j-v1.3-groovy.bin. If you prefer a different GPT4All-J compatible model, just download it and reference it in your .env file.
OPENAI_API_KEY=*****
OPENAI_API_BASE=****
OPENAI_API_TYPE=azure
OPENAI_API_VERSION=2023-03-15-preview
MODEL_TYPE=GPT4All # supports LlamaCpp or GPT4All
LLAMA_EMBEDDINGS_MODEL=/path/to/ggml-model-q4_0.bin
MODEL_PATH=/path/to/ggml-gpt4all-j-v1.3-groovy.bin
db_persistent_path=/path/to/your/vectorstore/folder
collection_name=examples
pdf_uploadpath=/path/to/pdf/uploads # optional

Note: because of the way LangChain loads the SentenceTransformers embeddings, the first time you run the script it will need an internet connection to download the embeddings model itself.
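To show how a .env file like the one above might be read, here is a minimal stdlib-only loader. This is an illustrative sketch, not VectorVerse's actual code; Python projects typically use the python-dotenv package for this instead.

```python
import os

def load_env(path):
    # Minimal .env reader for illustration; real projects usually use python-dotenv.
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    # Make the variables visible to the rest of the process.
    os.environ.update(values)
    return values

# Demo: write a throwaway .env file, load it, clean up.
with open("demo.env", "w") as fh:
    fh.write("MODEL_TYPE=GPT4All\nMODEL_PATH=/path/to/ggml-gpt4all-j-v1.3-groovy.bin\n")
cfg = load_env("demo.env")
os.remove("demo.env")
print(cfg["MODEL_TYPE"])  # GPT4All
```

Once loaded into os.environ, libraries such as the OpenAI client can pick up OPENAI_API_KEY and friends automatically.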

💾 Installation

Docker

Run docker-compose up and browse to http://localhost:8501

From Project

  1. Git clone the project

  2. Navigate to the directory where the repository was downloaded

    cd vectorverse
  3. Install the required dependencies

    pip install -r requirements.txt
  4. Run run_es.sh, run_pg.sh & run_redis.sh, or set up your own services

  5. Run the project and access the URL http://localhost:8501

    python -m verse
    

⛄ Optional (If using OpenAI)

Configure the OpenAI key:

  • If using an OpenAI key, simply export OPENAI_API_KEY=*****
  • If you want to use a config file, rename example.env -> .env inside the vectorverse dir & update either the Azure or OpenAI config

By completing these steps, you have properly configured the API Keys for your project.

Prerequisites Installation

  • Redis Stack Server
  • ElasticSearch

Or use docker-compose.yml provided with the code

Run the Tool

  1. Check out the project and go to the project root dir VectorVerse

  2. If Redis/ES are not preinstalled

    docker compose up
  3. Launch the app

    python -m verse

References

  1. Powered by Langchain
  2. Uploader Inspired by Quivr