
TranslAI 🌐

Real-time streaming translator powered by a local LLM via Ollama.

Features

  • Streaming Translation — Get instant translation feedback as text is being translated
  • Delta Translation — Translate text incrementally with context awareness for natural flow
  • Compression & Summarization — Translate and compress text in a single pass
  • Web Interface — Simple, fast frontend for real-time translation
  • 11 Languages Supported — including English, Dutch, German, French, Spanish, Polish, Italian, Portuguese, Chinese, and Russian
  • Local & Private — Runs entirely on your machine, no external API calls

Quick Start

Prerequisites

  • Ollama installed and running
  • Python 3.10+
  • pip

Installation

# Clone the repo
git clone https://github.com/YOUR_USERNAME/translAI.git
cd translAI

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

Configuration

Copy .env.example to .env and adjust if needed:

cp .env.example .env

Default values:

  • OLLAMA_URL=http://localhost:11434
  • TRANSLAI_MODEL=qwen2.5:1.5b

Run

# Start Ollama (in another terminal)
ollama serve

# Pull the model (first time only)
ollama pull qwen2.5:1.5b

# Start the server
./start.sh
# or: uvicorn app:app --reload

Visit http://localhost:8000 in your browser.

API Endpoints

POST /translate

Translate full text with streaming.

{
  "text": "Привет, мир!",
  "target_lang": "en",
  "source_lang": "Russian"
}
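A minimal client sketch (httpx is used here purely for illustration), assuming the endpoint streams plain-text chunks back; check app.py for the exact response format:

# Illustrative client; assumes /translate streams plain-text chunks.
import httpx

payload = {
    "text": "Привет, мир!",
    "target_lang": "en",
    "source_lang": "Russian",
}

with httpx.Client(timeout=None) as client:
    with client.stream("POST", "http://localhost:8000/translate", json=payload) as resp:
        for chunk in resp.iter_text():
            print(chunk, end="", flush=True)  # partial translation as it arrives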

POST /translate-delta

Translate incremental text changes for real-time translation UX.

{
  "delta": "мир",
  "context_ru": "Привет,",
  "context_tr": "Hello,",
  "target_lang": "en"
}
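A hypothetical follow-up call, reusing the field names from the request body above; the response shape is an assumption:

# Illustrative client; field names match the request body above.
import httpx

payload = {
    "delta": "мир",            # new source text since the last call
    "context_ru": "Привет,",   # source text already translated
    "context_tr": "Hello,",    # translation produced so far
    "target_lang": "en",
}

resp = httpx.post("http://localhost:8000/translate-delta", json=payload, timeout=None)
print(resp.text)  # translation of just the delta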

POST /summarize

Translate and compress text simultaneously.

{
  "text": "Long text...",
  "target_lang": "en"
}
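Consuming the result works the same way as /translate; a sketch assuming the summary is streamed:

# Illustrative client; assumes /summarize streams like /translate.
import httpx

payload = {"text": "Long text...", "target_lang": "en"}

with httpx.Client(timeout=None) as client:
    with client.stream("POST", "http://localhost:8000/summarize", json=payload) as resp:
        summary = "".join(resp.iter_text())

print(summary)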

GET /status

Check Ollama connection and model availability.
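A quick check from Python (the exact payload shape depends on the implementation):

# Illustrative health check.
import httpx

resp = httpx.get("http://localhost:8000/status")
print(resp.status_code, resp.text)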

POST/GET/DELETE /log*

Performance logging endpoints for monitoring translation speeds.

Architecture

  • FastAPI — Modern, fast web framework with async streaming
  • Ollama — Local LLM inference engine
  • HTTPX — Async HTTP client for Ollama communication
  • Vanilla JS Frontend — Lightweight, no framework overhead
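To make the architecture concrete, here is a minimal sketch of how these pieces fit together: a FastAPI endpoint that forwards a prompt to Ollama over HTTPX and streams tokens back to the browser. It is illustrative only, not the project's actual app.py; the prompt construction and response shape are assumptions.

# sketch.py (illustrative only; the real app.py may differ)
import json
import os

import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")
MODEL = os.getenv("TRANSLAI_MODEL", "qwen2.5:1.5b")

app = FastAPI()

class TranslateRequest(BaseModel):
    text: str
    target_lang: str
    source_lang: str | None = None

@app.post("/translate")
async def translate(req: TranslateRequest):
    prompt = f"Translate the following text to {req.target_lang}:\n\n{req.text}"

    async def stream_tokens():
        # Ollama's /api/generate streams newline-delimited JSON objects;
        # each carries a partial completion in its "response" field.
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream(
                "POST",
                f"{OLLAMA_URL}/api/generate",
                json={"model": MODEL, "prompt": prompt, "stream": True},
            ) as resp:
                async for line in resp.aiter_lines():
                    if line:
                        chunk = json.loads(line)
                        yield chunk.get("response", "")

    return StreamingResponse(stream_tokens(), media_type="text/plain")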

License

MIT
