Demo: claude_chat.mp4
This is a chatbot project consisting of two separate servers: a chat server and a front server. The front server handles the user interface, while the chat server provides the chatbot functionality.
To get started with this project, follow these steps:
- Install dependencies

  Make sure you have the required dependencies installed. The chat server uses Ollama, which you can install with:

      curl -fsSL https://ollama.com/install.sh | sh

- Start the front server

  Navigate to the front server directory and run the main.py file:

      python main.py
-
- Start the chat server

  Open two more terminal sessions (in addition to the front server) and run the following:

  Terminal 1 (chat server), from the chat server directory:

      python main.py

  Terminal 2 (model backend): if the Ollama server is not already running, start it with ollama serve, then pull the model:

      ollama pull llama3:8b-instruct-q2_K

  OR use the Claude API instead of a local model.

  Note: The chatbot currently runs with the llama3:8b-instruct-q2_K model.
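For reference, here is a minimal sketch of how the chat server might call the local Ollama HTTP API. The endpoint (`http://localhost:11434/api/chat`) and payload shape follow Ollama's documented chat API; the helper names `build_payload` and `send_chat` are illustrative, not taken from this project's code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3:8b-instruct-q2_K"  # model used by the current chatbot

def build_payload(messages, model=MODEL):
    """Build a non-streaming chat request body for Ollama's /api/chat."""
    return {"model": model, "messages": messages, "stream": False}

def send_chat(messages, url=OLLAMA_URL):
    """POST the chat request to a locally running Ollama server."""
    data = json.dumps(build_payload(messages)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns the reply under "message" -> "content"
        return json.loads(resp.read())["message"]["content"]
```

`build_payload` can be exercised without a running server; `send_chat` assumes Ollama is listening on its default port 11434.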
Next steps:

- Use a database to store userId, sessionId, and chat logs.
- Test with larger models for a better chat experience.
- Deploy the application to a production server for external access.
- Integrate additional features and improvements, such as improved error handling, stronger security measures, and performance optimizations.
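For the chat-log storage item above, one possible starting point is a small SQLite store. This is a sketch under assumed names (the `chat_logs` table and its columns are illustrative, not an existing schema in this project):

```python
import sqlite3

def init_db(path=":memory:"):
    """Create the chat-log table and an index for per-session lookups."""
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS chat_logs (
            id         INTEGER PRIMARY KEY AUTOINCREMENT,
            user_id    TEXT NOT NULL,
            session_id TEXT NOT NULL,
            role       TEXT NOT NULL,      -- 'user' or 'assistant'
            message    TEXT NOT NULL,
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE INDEX IF NOT EXISTS idx_logs_session
            ON chat_logs (user_id, session_id);
    """)
    return conn

def log_message(conn, user_id, session_id, role, message):
    """Append one chat turn to the log."""
    conn.execute(
        "INSERT INTO chat_logs (user_id, session_id, role, message)"
        " VALUES (?, ?, ?, ?)",
        (user_id, session_id, role, message),
    )
    conn.commit()

def get_history(conn, user_id, session_id):
    """Return the session's messages in insertion order, chat-API style."""
    rows = conn.execute(
        "SELECT role, message FROM chat_logs"
        " WHERE user_id = ? AND session_id = ? ORDER BY id",
        (user_id, session_id),
    )
    return [{"role": r, "content": m} for r, m in rows]
```

`get_history` returns messages in the same role/content shape the chat API expects, so a stored session can be replayed directly as model context.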
Currently, the project has been tested in a local Docker environment (with multiple Docker containers); further testing will be conducted in other environments.
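As a starting point for the error-handling improvements mentioned above, calls to the model server could be wrapped in a retry with exponential backoff. This is a sketch; `with_retries` and its defaults are illustrative, not part of the project:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.1):
    """Call fn(), retrying on exception with exponential backoff.

    Re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            # 0.1s, 0.2s, 0.4s, ... between attempts
            time.sleep(base_delay * (2 ** (attempt - 1)))
```

Usage: wrap the model call, e.g. `with_retries(lambda: send_chat(messages))`, so transient connection failures to the Ollama server do not surface as user-visible errors.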