This is an experiment in building a large-language-model-backed chatbot. It can hold a conversation, remember previous comments/questions, and answer all types of queries (history, web search, movie data, weather, news, and more).
This app relies on the amazing LangChain Python library, which powers all the interesting AI stuff.
To use this to its fullest, you'll need to get API keys.
| Key Name | Where to Get It | Works? |
|---|---|---|
| OPENAI_API_KEY | https://openai.com | Yes |
| SERPAPI_API_KEY | https://serpapi.com/ | Yes |
| GIPHY_API_KEY | https://developers.giphy.com/ | Mostly |
| NEWS_API_KEY | https://newsapi.org | Yes |
| TMDB_API_KEY | https://www.themoviedb.org/settings/api | Yes |
| WOLFRAM_ALPHA_APPID | https://products.wolframalpha.com/api | Yes |
| GOOGLE_API_KEY | https://console.cloud.google.com/apis/credentials | Todo |
| GOOGLE_CSE_ID | ??? | Todo |
| FOURSQUARE_API_KEY | https://developer.foursquare.com | Yes |
| ASTRA_* | https://astra.datastax.com | Works? |
More on Google search APIs in general: https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
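Not every key is strictly required. Before launching, a small standalone helper (hypothetical, not part of the app) can tell you which of the keys above are still unset:

```python
import os

# The key list from the table above; trim it to the tools you actually enable.
EXPECTED_KEYS = [
    "OPENAI_API_KEY", "SERPAPI_API_KEY", "GIPHY_API_KEY", "NEWS_API_KEY",
    "TMDB_API_KEY", "WOLFRAM_ALPHA_APPID", "GOOGLE_API_KEY", "GOOGLE_CSE_ID",
    "FOURSQUARE_API_KEY",
]

def missing_keys(expected, env=None):
    """Return the expected key names that are absent or empty in `env`."""
    env = os.environ if env is None else env
    return [k for k in expected if not env.get(k)]

if __name__ == "__main__":
    for key in missing_keys(EXPECTED_KEYS):
        print(f"warning: {key} is not set")
```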
First, add your API keys to the .env file.

Then, install the Python requirements and start the app. You'll need a Procfile manager like Foreman or Hivemind installed. My preference was to keep with the Python theme, so I use Honcho, which does the same thing.
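A .env file is just KEY=value lines. If you're curious how those values end up in the process environment, here's a minimal hand-rolled parser (illustrative only; the app more likely relies on a library such as python-dotenv):

```python
import os

def load_env(text, env=None):
    """Parse KEY=value lines, skipping blanks and # comments,
    and write the results into `env` (os.environ by default)."""
    env = os.environ if env is None else env
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

# Usage: load_env(open(".env").read())
```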
```
pyenv install 3.9
pyenv local 3.9
pip install --upgrade pip
pip install honcho
```
To fix node/vite issues:

```
rm -rf node_modules
cd vite
rm -rf node_modules
rm -rf dist/assets/*
npm install -g vite
npm install
yarn global add vite
```
```
pip install -r requirements.txt
honcho start -f Procfile.local
```
Once it's running, open up http://127.0.0.1:9000/ and you'll be able to start interacting with the bot. There's also a writing assistant endpoint at http://127.0.0.1:9000/write.
I prepped the Gitpod environment with the same steps, and they are also in the .gitpod.yml, which means all of this should already be done for you.
Once it's running in Gitpod, you'll see a notification that a new port is available, which you can expose publicly. At the bottom of Gitpod.io's VS Code editor, click "Ports" to see which ports are open and click one to jump directly to the app; a lock icon shows whether each port is public or private.
First, add your API keys to the .env file (OPENAI_API_KEY and SERPAPI_API_KEY at the very least). You can comment out the other tools to get it working. Make sure you read and follow the instructions in Data/Documents/Readme.
chainlit.py uses LangChain:

```
chainlit run chainlit.py -w
```
This will open up another URL / Port which you can start using.
You can also try the LlamaIndex version, which loads PDFs. The sample includes the FLARE and ReAct papers, but you can upload any PDFs or other file types to data/papers.

```
chainlit run chainlit_papers.py -w
```
This will open up another URL / Port which you can start using.
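The papers version indexes whatever lives in data/papers. As a quick sanity check before launching, you can list what will be picked up (a hypothetical standalone snippet, not part of the app; the extension list is an assumption, since LlamaIndex's readers support many more formats):

```python
from pathlib import Path

def indexable_files(directory, extensions=(".pdf", ".txt", ".md")):
    """Return sorted filenames under `directory` matching the given extensions."""
    root = Path(directory)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir()
                  if p.suffix.lower() in extensions)

if __name__ == "__main__":
    for name in indexable_files("data/papers"):
        print(name)
```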
The API endpoint will be up at http://&lt;hostname&gt;/chat and you can send data to it like this. Replace &lt;hostname&gt; with the loopback address (127.0.0.1) if you are running locally or in Gitpod, or with the full Gitpod preview/API URL otherwise.

```
curl -X POST http://<hostname>/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt":"Show me a cat gif","model":"text-davinci-003","temperature":0.5, "max_tokens":512,"history":[]}'
```
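The same request from Python, using only the standard library. The payload fields mirror the curl example above; no other fields are assumed to exist.

```python
import json
from urllib import request

def chat_payload(prompt, model="text-davinci-003",
                 temperature=0.5, max_tokens=512, history=None):
    """Build the JSON body shown in the curl example."""
    return {"prompt": prompt, "model": model, "temperature": temperature,
            "max_tokens": max_tokens, "history": history or []}

def send_chat(url, prompt):
    """POST a prompt to the /chat endpoint and decode the JSON reply."""
    body = json.dumps(chat_payload(prompt)).encode()
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # network call; needs the app running
        return json.loads(resp.read())

# e.g. send_chat("http://127.0.0.1:9000/chat", "Show me a cat gif")
```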
If you want to store vast amounts of your own data and talk to it, you need a vector database set up. Read up on CassIO's Start Here page and make sure you can run the Colab notebook. If you are good to go there, follow the instructions in Data/Documents/Readme.
To connect to Astra (Cassandra as a Service) with CQLSH:

```
cd tools
wget https://downloads.datastax.com/enterprise/cqlsh-6.8.31-bin.tar.gz
tar xvfz cqlsh-6.8.31-bin.tar.gz
./cqlsh-6.8.31/bin/cqlsh <hostname> -u <username> -p <password>
```

Either copy and paste data/cassandra.cql, or do this:

```
./cqlsh-6.8.31/bin/cqlsh <hostname> -u <username> -p <password> -f ../data/cassandra.cql
```
Q: hi, my name is bob
A: Hi Bob! Nice to meet you! How can I help you today?
Q: what's the last letter of my name?
A: b
Q: show me a cat gif
A: Here is a cat gif: <iframe src="https://giphy.com/embed/ICOgUNjpvO0PC" width="480" height="480" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><br /><a href="https://giphy.com/embed/ICOgUNjpvO0PC">powered by GIPHY</a>
Q: actually, can you make it a dolphin?
A: Here is a dolphin gif: <iframe src="https://giphy.com/embed/11ctq1pDmD3cB2" width="480" height="480" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><br /><a href="https://giphy.com/embed/11ctq1pDmD3cB2">powered by GIPHY</a>
Q: what year was Dr. Strangelove released?
A: Dr. Strangelove was released in 1964.
Q: what's the sum of the first six prime numbers?
A: The sum of the first six prime numbers is 41.
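That last answer checks out, and the arithmetic is easy to verify yourself:

```python
def first_primes(n):
    """Return the first n prime numbers by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime iff no earlier prime divides it
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(6), sum(first_primes(6)))  # [2, 3, 5, 7, 11, 13] 41
```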
This repository is set up to deploy on Fly.io. You should be able to follow their docs and get it running there very quickly.
This folder has a few attempts at generating/testing LLM examples programmatically. You can probably ignore this unless you're just curious.