The primary objective of this project is to develop a chatbot that leverages Large Language Models (LLMs) to assist users in planning their travels. The chatbot aims to provide personalized recommendations, detailed itineraries, and useful travel tips, making the travel planning process easier and more efficient.
- Frontend: Streamlit for building the user interface.
- Backend: an LLM served locally via Ollama for natural language processing and query handling.
- src/: Contains Python Streamlit-based chatbot scripts.
- report/: Stores report files.
- video/: Contains video presentation. You can also watch the video on YouTube.
- Python 3.7+
- streamlit
- requests
- Clone the repository:
git clone https://github.com/Faridghr/Chatbot-LLM-Interaction.git
- Navigate to the project directory:
cd Chatbot-LLM-Interaction
- Install dependencies:
pip install -r requirements.txt
- Run the Streamlit application:
streamlit run travel-ai-assistant-chatbot.py
- Open your web browser and navigate to the URL provided by Streamlit (usually http://localhost:8501).
- Interact with the chatbot by typing messages and receiving responses from the local LLM service.
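As a rough sketch of how such a chatbot talks to the local LLM service (assuming Ollama's default endpoint at http://localhost:11434 and the `llama3` model — adapt the constants to your setup; the actual script may differ):

```python
import requests  # the "requests" dependency from requirements.txt

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint
MODEL = "llama3"  # assumed model name; use whichever model you pulled

def build_payload(history, user_message, model=MODEL):
    """Append the user's message to the chat history and build the request body."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages, "stream": False}

def ask_llm(history, user_message):
    """Send the conversation to the local Ollama service and return the reply text."""
    payload = build_payload(history, user_message)
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["message"]["content"]
```

In the Streamlit app, the conversation history would typically live in `st.session_state`, with each turn rendered via `st.chat_message` and new input read from `st.chat_input`.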
Ensure your local language model service is running:
- Download the Ollama installer for macOS.
- Extract the downloaded ZIP file.
- Move the extracted Ollama app to your Applications folder and open it to complete the installation.
- Download the Ollama for Windows setup executable.
- Run the downloaded executable file and follow the on-screen instructions to complete the installation.
- Open a terminal window.
- Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
For further instructions, visit the Ollama GitHub page.
Start your local instance of the LLM service (e.g., Llama 3). Example commands to pull the model and start the service:
ollama pull llama3
ollama serve
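Once the service is running, you can verify from Python that it is reachable and that the model has been pulled. This sketch queries Ollama's `/api/tags` endpoint (its default local model listing); the endpoint URL and `llama3` model name are the defaults assumed above:

```python
import requests

TAGS_URL = "http://localhost:11434/api/tags"  # Ollama's default local endpoint

def installed_models(tags_json):
    """Extract model names from the JSON returned by Ollama's /api/tags endpoint."""
    return [m["name"] for m in tags_json.get("models", [])]

def check_service(model="llama3"):
    """Return True if the local Ollama service is up and the model is pulled."""
    try:
        resp = requests.get(TAGS_URL, timeout=5)
        resp.raise_for_status()
    except requests.RequestException:
        return False
    return any(name.startswith(model) for name in installed_models(resp.json()))
```

Running `check_service()` before launching the Streamlit app gives a quick way to catch a missing model or a stopped server.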
The Llama 3 model was originally published by Meta and is made available for local use through Ollama.