darttechwala/ChatbotAPI

πŸ€– AI Chatbot Backend (Python + Ollama + TTS)

This project uses a fully open-source AI stack to provide:

πŸ’¬ Streaming AI chat replies

🎀 Voice output (Text-to-Speech)

🧠 Local LLM via Ollama

🌐 FastAPI backend

πŸ“¦ Requirements

Ollama (Local LLM Server)

Install Ollama:

  • Download and install Ollama from https://ollama.com

After installing, pull a model:

  • ollama pull llama3

Start the Ollama server:

  • ollama serve

Python Dependencies

Python 3.11.x is recommended (required for Coqui TTS compatibility).

Install required packages:

  • pip install fastapi uvicorn requests TTS

πŸš€ Running the Backend

Start FastAPI server:

  • python -m uvicorn main:app --reload

The backend runs on http://127.0.0.1:8000 by default (uvicorn's default host and port).

πŸ” Chat Streaming Endpoint

  • POST /chat-stream

  • Streams the AI response from Ollama in real time.

πŸ”Š Voice (Text-to-Speech)

  • This project uses Coqui TTS (open-source) for natural AI voice.

Example model:

  • tts_models/en/ljspeech/glow-tts

Voice is generated after the full AI message has been received.

🌍 Web Support (CORS Enabled)

  • FastAPI is configured with CORS so browser-based web clients can call the API.

🧠 Stack Overview

  • Backend: FastAPI (Python)

  • LLM: Ollama (local, e.g. llama3)

  • Voice: Coqui TTS

βœ… Features

  • Fully offline capable (local AI)

  • Streaming chat responses

  • Voice replies

  • Multi-platform (Android, iOS, Web, macOS, Windows)

  • Open-source stack

⚠ Notes

Ollama must be running before starting the backend.
