This repository contains a full-stack AI-powered chat application built with the following components:
- Backend: Python FastAPI server that handles API requests and integrates with OpenAI's GPT-3.5 Turbo model.
- Frontend: A simple HTML and JavaScript-based user interface for interacting with the chat application.
- Utilities: Shell scripts and Python scripts for managing server processes and combining text data.
The project demonstrates seamless integration of AI capabilities into a modern web application.
- `server.py`: The FastAPI server implementation, providing a `/chat/` endpoint that processes user input and returns AI responses.
- `index.html`: The main frontend file, providing the user interface for the chat application.
- `script.js`: Handles user interactions on the frontend and communicates with the backend API.
- `start_server.sh`: A shell script that kills any process running on port 8000 and restarts the FastAPI server.
- `combiner.py`: Utility script that combines the content of all files in the current directory into a single text file.
- `test_transformers.py`: A standalone script showcasing text generation with the Hugging Face GPT-2 model.
- `.DS_Store`: Automatically generated macOS system file (safe to ignore).
- `README.md`: This documentation file.
- `combined_output.txt`: Output file generated by `combiner.py`, containing the combined content of all files.
- Python 3.8+
- Python packages: `fastapi`, `uvicorn`, `pydantic`, `openai`, `transformers`

Install dependencies using:

```bash
pip install -r requirements.txt
```
- A modern web browser.
- Set OpenAI API Key: Ensure the `OPENAI_API_KEY` environment variable is set in your shell. Example:

  ```bash
  export OPENAI_API_KEY="your_openai_api_key"
  ```
- Start the Server: Use the provided shell script to start the FastAPI server:

  ```bash
  ./start_server.sh
  ```

  The server will be accessible at http://127.0.0.1:8000.
- Open `index.html` in your browser.
- Interact with the application by typing messages and viewing the AI responses.
The `combiner.py` script reads all files in the current directory, extracts their content, and saves it to `combined_output.txt`. Run it using:

```bash
python combiner.py
```
The `test_transformers.py` script demonstrates text generation using the Hugging Face GPT-2 model. Execute it as:

```bash
python test_transformers.py
```
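A script like this would typically use the `transformers` text-generation pipeline; the function and parameter names below are illustrative, not necessarily those in `test_transformers.py`:

```python
from transformers import pipeline

def generate_text(prompt: str, max_new_tokens: int = 40) -> str:
    # Builds a GPT-2 text-generation pipeline (downloads weights on first use)
    # and returns one sampled continuation of the prompt.
    generator = pipeline("text-generation", model="gpt2")
    outputs = generator(prompt, max_new_tokens=max_new_tokens, num_return_sequences=1)
    return outputs[0]["generated_text"]
```

Note that the first run downloads the GPT-2 weights, so it needs network access and some disk space.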
Ensure the server is running on http://127.0.0.1:8000 for the frontend to function.
For production, consider deploying using services like AWS, GCP, or Azure and securing API keys and endpoints.