
FreeLingo


FreeLingo logo

Open source, self-hosted, Dockerized web application for learning languages with AI. A local language model (Ollama) evaluates your CEFR level, generates a personalized study plan, and guides you through grammar, vocabulary, reading comprehension, and writing lessons.

The study plan follows a CEFR-aligned curriculum (A1-C2) organized into units with clear competencies and prerequisites. After a deterministic placement assessment, FreeLingo creates a weekly roadmap based on your selected intensity (4, 8, 12, or 16 weeks), then unlocks lessons in sequence: grammar, vocabulary, reading, writing, and review.

The platform combines structure and adaptation: lessons are generated within curriculum boundaries, flashcards use SM-2 spaced repetition, and the AI tutor provides contextual streaming feedback in English (with optional brief support in the learner's native language). Progress tracking includes XP, streaks, skill scores, unit competencies, and an end-of-level completion test.
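
For reference, SM-2 computes the next review date from a 0-5 recall grade, an ease factor, and the previous interval. The sketch below is illustrative only; field names and defaults are assumptions, not FreeLingo's actual implementation:

from dataclasses import dataclass

@dataclass
class Card:
    interval: int = 0     # days until the next review
    repetitions: int = 0  # consecutive successful reviews
    ease: float = 2.5     # SM-2 ease factor, never allowed below 1.3

def sm2_review(card: Card, quality: int) -> Card:
    # quality: 0 (total blackout) .. 5 (perfect recall)
    if quality < 3:
        card.repetitions = 0
        card.interval = 1            # failed recall: see the card again tomorrow
    else:
        if card.repetitions == 0:
            card.interval = 1
        elif card.repetitions == 1:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.ease)
        card.repetitions += 1
    # Ease drops after hard recalls and rises slightly after easy ones.
    card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card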

Architecture

Monorepo: backend/ (Python FastAPI) + frontend/ (Next.js 16 App Router) deployed via Docker Compose with PostgreSQL 16 and Redis 7. The backend proxies all external services (Ollama, Kokoro, Whisper) — the frontend never calls them directly.
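
To illustrate the proxy pattern, a backend route can forward requests to Ollama so the browser never needs network access to the model server. This is a hedged sketch, not FreeLingo's actual route: the path, payload shape, and error handling are assumptions, and only OLLAMA_BASE_URL comes from the project's .env:

import os

import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI()
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://host.docker.internal:11434")

@app.post("/api/llm/generate")
async def generate(payload: dict) -> dict:
    # Assumes the payload sets "stream": false so Ollama returns a single JSON object.
    async with httpx.AsyncClient(timeout=120.0) as client:
        try:
            resp = await client.post(f"{OLLAMA_BASE_URL}/api/generate", json=payload)
            resp.raise_for_status()
        except httpx.HTTPError as exc:
            raise HTTPException(status_code=502, detail=f"Ollama request failed: {exc}")
    return resp.json()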

Repository

freelingo/
├── assets/             # Logos and static assets
├── backend/            # FastAPI (Python)
├── docker/             # Custom Dockerfiles (e.g. Kokoro with Blackwell GPU support)
├── docs/               # GitHub Pages landing site
├── frontend/           # Next.js (React)
├── messages/           # i18n translation files (en, es, fr, pt, de, it)
├── specs/              # Specification files
├── AGENTS.md
├── CHANGELOG.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
└── docker-compose.yml

Stack

Layer       Technology
Frontend    Next.js 16+, shadcn/ui, Tailwind CSS, Zustand, next-intl
Backend     FastAPI, SQLAlchemy async, Alembic, Pydantic v2
Database    PostgreSQL 16
Cache       Redis 7
LLM         Ollama (local) · OpenAI · Anthropic · DeepSeek
TTS (P2)    Kokoro-FastAPI
STT (P2)    faster-whisper
Auth        JWT (access + refresh), multi-user, roles (admin/user)
Deploy      Docker Compose

Phases

Phase   Name                      Status
1       Learning platform         ✅ Complete
1+      Learning Resources Hub    ✅ Complete
2       Local TTS + STT           ✅ Complete
3       Real-time conversation    🚧 In progress

Quick start

Option A — Git clone + Docker Compose

Requirements: Docker, Docker Compose, Git, and Ollama running on the host.

# 1. Clone the repository
git clone https://github.com/artcc/freelingo.git
cd freelingo

# 2. Configure environment
cp .env.example .env
# Edit .env: set OLLAMA_BASE_URL, choose your model, and review other settings

# 3. Pull the recommended model (run on the host, not inside Docker)
ollama pull gemma3:12b

# 4. Start all services (migrations run automatically on first start)
docker compose up -d

Access at http://localhost:3000 (or http://<server-ip>:3000).
The first registered user becomes admin automatically.


Option B — Portainer (Stack)

  1. Open Portainer → Stacks → Add stack.
  2. Choose Repository and enter the repo URL, or paste the contents of docker-compose.yml directly into the Web editor.
  3. Scroll down to Environment variables and add the variables from .env.example (at minimum: SECRET_KEY, OLLAMA_BASE_URL, POSTGRES_PASSWORD).
  4. Click Deploy the stack.
  5. Access the app at http://<server-ip>:3000. Database migrations run automatically when the backend starts.

Tip: If Ollama runs on the same host as Portainer, set OLLAMA_BASE_URL=http://host.docker.internal:11434. On Linux, host.docker.internal only resolves if the compose file maps it via extra_hosts (already included by default).

Internal documentation

Operational notes

  • The recommended model for Ollama is gemma3:12b. It can be changed in .env.
  • The backend acts as a proxy for Ollama/TTS/STT calls so the frontend never talks directly to those services.
  • The LLM_PROVIDER setting selects the LLM provider: ollama (local, recommended), openai, anthropic, or deepseek (see the sketch after these notes).
  • The target language is always English. During registration, users are asked for their native language, which is used for flashcard translations and tutor feedback.
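
The provider switch can be pictured as a small factory keyed on LLM_PROVIDER. A minimal sketch, assuming hosted providers are configured through API-key variables; the helper and field names here are hypothetical, not the project's actual code:

import os
from dataclasses import dataclass

@dataclass
class LLMConfig:
    provider: str
    base_url: str | None = None   # used by the local Ollama provider
    api_key: str | None = None    # used by hosted providers

def load_llm_config() -> LLMConfig:
    # Defaults to the local Ollama provider when LLM_PROVIDER is unset.
    provider = os.getenv("LLM_PROVIDER", "ollama").lower()
    if provider == "ollama":
        return LLMConfig(provider, base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"))
    if provider in {"openai", "anthropic", "deepseek"}:
        # Hypothetical key names (OPENAI_API_KEY, etc.); check .env.example for the real ones.
        return LLMConfig(provider, api_key=os.getenv(f"{provider.upper()}_API_KEY"))
    raise ValueError(f"Unsupported LLM_PROVIDER: {provider!r}")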

Enabling TTS & STT

TTS (Kokoro) and STT (faster-whisper) are disabled by default. Both services are already defined in docker-compose.yml (commented out) and can be activated on either a GPU or a CPU host.

GPU host (NVIDIA)

  1. Uncomment the kokoro and whisper services in docker-compose.yml; they are already configured with the NVIDIA deploy block.
  2. The Kokoro image (ghcr.io/artcc/kokoro-fastapi-gpu) ships with PyTorch 2.7+ (cu128), supporting all NVIDIA architectures including Blackwell (RTX 5000 series, sm_120).
  3. Add to .env:
    TTS_ENABLED=true
    STT_ENABLED=true
  4. Restart the stack: docker compose up -d.

CPU-only host

  1. Uncomment the kokoro and whisper services in docker-compose.yml.

  2. Replace the image tags and remove the deploy block for each service:

    Kokoro (TTS):

    kokoro:
      image: ghcr.io/remsky/kokoro-fastapi-cpu:latest
      restart: unless-stopped
      ports:
        - "8880:8880"

    Whisper (STT):

    whisper:
      image: onerahmet/openai-whisper-asr-webservice:latest
      restart: unless-stopped
      ports:
        - "9000:9000"
      environment:
        - ASR_MODEL=base
        - ASR_ENGINE=faster_whisper

    Use ASR_MODEL=base or small on CPU. Larger models (medium, large) are very slow without a GPU.

  3. Add to .env:

    TTS_ENABLED=true
    STT_ENABLED=true
  4. Restart the stack: docker compose up -d.

Notes

  • If TTS or STT is disabled, the backend returns 503 on those endpoints and the frontend degrades gracefully: the listen/record buttons are simply not rendered (see the sketch after these notes).
  • The TTS voice can be changed via TTS_VOICE in .env (default: af_heart). Available voices: af_heart, af_sky (American), bf_emma, bm_george (British).
  • The STT model is controlled by the ASR_MODEL environment variable on the whisper container, not from .env.
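
As a rough illustration of the 503 behavior described above; the route path and settings plumbing are assumptions, not FreeLingo's actual endpoints:

import os
from fastapi import APIRouter, HTTPException

router = APIRouter()

def tts_enabled() -> bool:
    # Mirrors the TTS_ENABLED flag from .env; the real settings object may differ.
    return os.getenv("TTS_ENABLED", "false").lower() == "true"

@router.post("/api/tts")
async def synthesize(text: str) -> dict:
    if not tts_enabled():
        # Disabled feature: respond 503; the frontend does not render the listen button.
        raise HTTPException(status_code=503, detail="TTS is disabled")
    # Enabled: the real backend would forward the text to the Kokoro service here.
    return {"detail": "forwarded to Kokoro (omitted in this sketch)"}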

Contributing

See CONTRIBUTING.md for guidelines on reporting bugs, suggesting features, and submitting pull requests.

License

Distributed under the Apache 2.0 License.

Author

Arturo Carretero Calvo (@artcc)
