This project is a simple AI application that uses a locally hosted LLaMA model (via Ollama) to summarize text.
- FastAPI backend
- Streamlit frontend
- Local LLaMA inference using Ollama
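The backend's call into the local model can be sketched roughly as follows. This is a minimal stdlib-only sketch, not the project's actual code: the endpoint (`/api/generate`), default port, and response shape follow Ollama's HTTP API, but the helper names, the model tag `"llama3"`, and the prompt wording are illustrative assumptions.

```python
import json
import urllib.request

# Ollama listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_prompt(text: str) -> str:
    """Wrap the user's text in a summarization instruction (wording is illustrative)."""
    return f"Summarize the following text in a few sentences:\n\n{text}"


def summarize(text: str, model: str = "llama3") -> str:
    """Send one non-streaming generate request to the local Ollama server."""
    payload = json.dumps({
        "model": model,              # model tag is an assumption; use whatever you pulled
        "prompt": build_prompt(text),
        "stream": False,             # ask for a single JSON response instead of chunks
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # A non-streaming generate response carries the text under "response".
        return json.loads(resp.read())["response"]
```

In the actual backend, a function like this would sit behind a FastAPI route so the Streamlit frontend can POST text to it.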
- Clone the repository
- Install dependencies: `pip install -r requirements.txt`
- Start the backend: `uvicorn backend.main:app --reload`
- Start the frontend: `streamlit run frontend/app.py`
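The steps above, as a copy-pasteable sequence. The repository URL and directory name are placeholders; the backend and frontend commands come straight from the list above:

```shell
# Clone the repository (<repo-url> and <repo-dir> are placeholders)
git clone <repo-url>
cd <repo-dir>

# Install Python dependencies
pip install -r requirements.txt

# Start the FastAPI backend (auto-reloads on code changes)
uvicorn backend.main:app --reload

# In a second terminal, start the Streamlit frontend
streamlit run frontend/app.py
```

Note that Ollama itself must be installed and running separately, with a LLaMA model already pulled, for inference to work.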