thifreal/ChatbotAPI
Chatbot API

1. Overview

This project provides a RESTful API for a question-answering chatbot. The chatbot answers questions using a Retrieval-Augmented Generation (RAG) pipeline, which retrieves relevant passages from a source document and supplies them to the LLM as context.

The system is designed to be simple, efficient, and easy to set up, serving as a practical example of building an LLM-powered application.
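At a high level, the RAG flow is: embed the question, retrieve the most relevant document chunks, and generate an answer from them. The sketch below illustrates that flow with stand-in components; in this project the retriever is a FAISS index over OpenAI embeddings and the generator is gpt-3.5-turbo, so none of this is the actual code.

```python
import re

def words(text: str) -> set[str]:
    """Lowercased word set, used by the toy retriever below."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank chunks by word overlap with the query.
    The real project ranks by embedding similarity via FAISS."""
    qw = words(query)
    return sorted(chunks, key=lambda c: len(qw & words(c)), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for the LLM call: the real code builds a prompt from the
    retrieved context and sends it to the chat model."""
    return f"Answer to {query!r} based on {len(context)} retrieved chunk(s)."

chunks = [
    "Project-based learning is required in every module.",
    "The API is built with FastAPI.",
]
context = retrieve("What is the rule about project-based learning?", chunks)
print(generate("What is the rule about project-based learning?", context))
```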

2. Technology Stack

  • Backend Framework: Python with FastAPI
  • LLM Orchestration: LangChain
  • LLM & Embeddings: OpenAI API (gpt-3.5-turbo, text-embedding-3-small)
  • Vector Store: FAISS (for local, in-memory vector search)
  • Environment Management: pip and requirements.txt
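The dependency list itself is not reproduced here, but a requirements.txt for this stack would look roughly like the following (exact package names and the absence of version pins are assumptions):

```text
fastapi
uvicorn
langchain
langchain-openai
langchain-community
faiss-cpu
python-dotenv
```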

3. Project Structure

chatbotApi/
├── app/
│   ├── main.py           # FastAPI application entry point with the /ask endpoint.
│   ├── chatbot.py        # Contains the core RAG pipeline and chatbot logic.
│   └── ingest.py         # Script to process the source document and create the vector store.
├── .env                  # For storing environment variables like API keys (not version controlled).
├── requirements.txt      # Lists the Python dependencies for the project.
└── README.md             # This documentation file.

4. Setup and Installation

Follow these steps to get the project running on your local machine.

Step 1: Clone the Repository

git clone <your-repository-url>
cd chatbotApi

Step 2: Create a Virtual Environment and Install Dependencies

It's recommended to use a virtual environment to manage project dependencies.

# Create a virtual environment
python -m venv venv

# Activate the virtual environment
# On macOS/Linux:
source venv/bin/activate
# On Windows:
venv\Scripts\activate

# Install the required packages
pip install -r requirements.txt

Step 3: Set Up Environment Variables

Create a .env file in the root of the chatbotApi directory. This file will hold your OpenAI API key.

OPENAI_API_KEY="your_openai_api_key_here"

Step 4: Ingest the Data

Before running the API, you need to process the source document and create the vector store. Run the ingestion script from the root directory:

python app/ingest.py

This will read the source markdown file, split it into chunks, generate embeddings, and save the FAISS index to the faiss_index/ directory. This step only needs to be done once, unless the source document changes.
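The chunking step can be illustrated in plain Python. The actual script presumably uses one of LangChain's text splitters; the chunk size and overlap below are arbitrary example values.

```python
def split_into_chunks(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Naive fixed-size splitter with overlap, mimicking what a recursive
    character splitter does before embeddings are generated. Overlap keeps
    sentences that straddle a boundary visible in both neighboring chunks."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "".join(str(i % 10) for i in range(500))
chunks = split_into_chunks(doc)
print(len(chunks))  # chunks start at offsets 0, 150, 300, 450
```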

5. Usage

Step 1: Run the API Server

Use Uvicorn to start the FastAPI application:

uvicorn app.main:app --reload

The --reload flag will automatically restart the server when you make changes to the code. The API will be available at http://127.0.0.1:8000.

Step 2: Interact with the API

You can interact with the API using any HTTP client, such as curl, or by using the interactive Swagger UI documentation.

Using the Interactive Docs (Swagger UI)

Navigate to http://127.0.0.1:8000/docs in your web browser. You will see the API documentation where you can test the /ask endpoint directly.

Using curl

Open a new terminal and use the following curl command to ask a question:

curl -X 'POST' \
  'http://127.0.0.1:8000/ask' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "query": "What is the rule about project-based learning?"
}'

The API will return a JSON response containing the answer generated by the chatbot.
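The exact response schema depends on the response model defined in main.py; the field name below is an assumption, and the value is a placeholder:

```json
{
  "answer": "<generated answer text>"
}
```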
