
Lecture Tools

A comprehensive platform for lecture capture, transcription, and course management.

Built with Python, FastAPI, and Pydantic · MIT License


Table of Contents

  1. ✨ Key Features
  2. 🏁 Quick Start
  3. 🐳 Docker Deployment (Server)
  4. 🧭 Project Tour
  5. 🎛️ Interface Customization
  6. 🛠️ Core Workflows
  7. 🧪 Testing
  8. 🤝 Contributing

✨ Key Features

  • Seamless setup – No build steps or cross-platform workarounds required. Start the project with a single command on any major OS.
  • Dashboard navigation – Move from classes to modules to lectures with a unified interface that keeps relevant actions and statistics close at hand.
  • Managed media pipeline – Lecture audio, transcripts, and slides are automatically organized and maintained in a structured storage layout.
  • Flexible transcription – Run CPU-optimized faster-whisper locally or enable GPU acceleration when available.
  • Multi-language support – Switch between English, 中文, Español, and Français directly from the settings menu.
  • Cross-platform CLI – A Typer-powered assistant for ingestion, metadata review, and automation workflows.

🏁 Quick Start

💡 Prerequisites: none. Run the launcher script and it will set everything up for you.

  1. Clone & (optionally) isolate dependencies

    git clone https://github.com/NIAENGD/Lecture-Tools.git
    cd Lecture-Tools
    python -m venv .venv && source .venv/bin/activate  # PowerShell: .\.venv\Scripts\Activate.ps1
  2. Bootstrap with the launcher scripts

    • Windows: start.bat
    • macOS/Linux: ./start.sh

    Both scripts prepare your environment by creating a virtual environment (if needed), installing dependencies from requirements-dev.txt, and launching the CLI so you can immediately explore commands such as overview or ingest.

  3. Prefer a bespoke setup? Install dependencies manually:

    pip install -r requirements-dev.txt
  4. Launch the web interface

    python run.py  # or customise: python run.py serve --host 0.0.0.0 --port 9000

    Visit http://127.0.0.1:8000/ to open the web dashboard.

    • Deploying behind a reverse proxy? Provide the mount prefix so the API, UI, and static assets resolve correctly (see the sketch just after this list):
      python run.py serve --root-path /lecture
      or set LECTURE_TOOLS_ROOT_PATH=/lecture in the environment.
  5. Prefer the terminal? The console overview is still available:

    python run.py overview --style modern
    python run.py overview --style console
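
If you deploy behind a reverse proxy (the note in step 4), the --root-path flag and the LECTURE_TOOLS_ROOT_PATH variable correspond to FastAPI's standard root_path setting. The sketch below is a minimal, hypothetical illustration of that wiring rather than the project's actual entry point; it only shows how a prefix such as /lecture is typically passed through to the ASGI server:

    import os

    import uvicorn
    from fastapi import FastAPI

    # Hypothetical illustration only; the real application lives in app/ and may differ.
    app = FastAPI(root_path=os.environ.get("LECTURE_TOOLS_ROOT_PATH", ""))

    @app.get("/health")
    def health() -> dict:
        # With root_path set, generated docs and URLs resolve under the proxy prefix.
        return {"status": "ok"}

    if __name__ == "__main__":
        uvicorn.run(app, host="0.0.0.0", port=9000)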

🐳 Docker Deployment (Server)

Deploy the Lecture Tools server in a reproducible Docker container. Personal computers can still use the direct setup described above; Docker is now the recommended path for remote servers.

📦 Requirements

  • Docker Engine 24 or newer
  • The Docker Compose plugin (bundled with modern Docker releases)

🚀 One-click install

curl -fsSL https://raw.githubusercontent.com/NIAENGD/Lecture-Tools/main/scripts/docker-install.sh | bash

The installer now guides you through a full production-ready setup:

  • Verifies you are on a supported Debian-based distribution and installs every missing dependency (Docker Engine, Compose plugin, git, curl, etc.).
  • Prompts for the Git repository/branch, installation directory (default /opt/lecture-tools), persistent data directory, HTTP port, application root path (for reverse proxies), and the system user that will own the deployment.
  • Creates a dedicated systemd unit so the stack can automatically start on boot and be managed like a native service.
  • Generates a management CLI named lecturetool under /usr/local/bin with the following sub-commands:
    • lecturetool -enable / lecturetool -disable – toggle auto-start at boot.
    • lecturetool -start / lecturetool -stop – control the running containers.
    • lecturetool -status – view systemd status plus container health.
    • lecturetool -update – pull the latest code and container images, then restart the stack.
    • lecturetool -remove – stop everything, delete persisted data, and uninstall Docker + Compose if the installer added them.

Open http://SERVER_IP:PORT/ once the installer finishes. If you configure a reverse proxy, provide the desired path prefix when prompted so LECTURE_TOOLS_ROOT_PATH is populated automatically.

🔧 Configuration

  • Port mapping – Adjust the 8000:8000 mapping in docker-compose.yml when exposing a different port.
  • Reverse proxies – When terminating TLS with Nginx/Traefik, forward to the container and ensure the LECTURE_TOOLS_ROOT_PATH environment variable matches any prefix you inject (e.g. /lecture).
  • Custom images – Build and push your own tag with docker build -t registry.example.com/lecture-tools:latest . and update the compose file to reference it.
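
Once a proxy or custom port mapping is in place, a quick request against the mounted prefix confirms the wiring end to end. The snippet below is a hypothetical smoke check; the host, port, and /lecture prefix are placeholders for your own values:

    import urllib.request

    # Placeholder values: replace with your server address, published port, and prefix.
    BASE = "http://127.0.0.1:8000/lecture"

    with urllib.request.urlopen(BASE + "/", timeout=5) as response:
        print(response.status, response.headers.get("Content-Type"))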

♻️ Updating

sudo lecturetool -update

The update routine will stop the service, pull the latest git commit for the branch you selected during installation, rebuild/pull container images, and restart the stack.

🧹 Removing the stack

sudo lecturetool -remove

This command stops the containers, disables and deletes the systemd service, removes the persisted data directories, and purges Docker/Compose if they were originally installed by the helper.

🧭 Project Tour

Area       Description
app/       Application core – services, UI layers, background workers, and the FastAPI server.
assets/    Whisper models and supporting binaries.
storage/   Your lecture library – raw uploads and processed exports.
cli/       Cross-platform helpers, including the optional GPU-enabled Windows binary.
tests/     Pytest suite with lightweight doubles for rapid validation.

🎛️ Interface Customization

Visit Settings → Appearance in the web UI to tailor the ambience:

  • Theme: Follow your system palette or opt for Light/Dark.
  • Language: Choose English (en), 中文 (zh), Español (es), or Français (fr). Preferences persist in storage/settings.json and sync automatically across sessions.
  • Whisper defaults: Pre-select your transcription model, compute type, and beam size; GPU options unlock once verified.
  • Slide rendering: Choose a DPI from 150 (fastest) to 600 (highest quality).
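
Because preferences persist as plain JSON, they can also be inspected or scripted outside the UI. A minimal sketch, assuming storage/settings.json already exists; the key name below is illustrative, so check the file for the actual schema:

    import json
    from pathlib import Path

    settings_path = Path("storage/settings.json")
    settings = json.loads(settings_path.read_text(encoding="utf-8"))

    # "language" is a hypothetical key used purely for illustration.
    settings["language"] = "zh"
    settings_path.write_text(json.dumps(settings, indent=2, ensure_ascii=False), encoding="utf-8")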

🛠️ Core Workflows

Ingest a lecture

python run.py ingest \
  --class-name "Computer Science" \
  --module-name "Algorithms" \
  --lecture-name "Sorting" \
  --audio path/to/lecture.wav \
  --slides path/to/slides.pdf

The CLI stores the original files under storage/<class>/<module>/<lecture>/raw, while transcripts and rendered slides are written to the processed/ directory. Metadata is tracked in SQLite for instant retrieval by the UI.
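
For example, the ingest above would produce a layout roughly like the following (the entries under processed/ are assumptions; the exact output depends on your transcription and slide settings):

    storage/
      Computer Science/
        Algorithms/
          Sorting/
            raw/          original uploads (lecture.wav, slides.pdf)
            processed/    transcript and rendered slide exports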

GPU-accelerated Whisper (optional)

  1. Create assets/models/ if absent.
  2. Download ggml-medium.en.bin from ggerganov/whisper.cpp.
  3. Set --whisper-model GPU during ingestion or switch to GPU in the web Settings once the support probe passes.
  4. Trigger Test support from the settings pane to unlock the GPU option globally.

During transcription, Lecture Tools automatically benchmarks GPU availability and gracefully falls back to CPU pipelines with live progress indicators when acceleration is unavailable.
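
One common way to implement this kind of probe-and-fallback with the faster-whisper library mentioned in the feature list is sketched below. It is illustrative only and not the project's actual code (which can also drive a whisper.cpp GPU binary); it simply shows the shape of trying CUDA first and dropping back to CPU:

    from faster_whisper import WhisperModel

    def load_model(model_size: str = "medium.en") -> WhisperModel:
        try:
            # Prefer the GPU; this raises if CUDA or the required runtime is unavailable.
            return WhisperModel(model_size, device="cuda", compute_type="float16")
        except Exception:
            # Fall back to the CPU-optimised pipeline.
            return WhisperModel(model_size, device="cpu", compute_type="int8")

    def transcribe(audio_path: str) -> str:
        model = load_model()
        segments, _info = model.transcribe(audio_path, beam_size=5)
        return " ".join(segment.text.strip() for segment in segments)

    if __name__ == "__main__":
        print(transcribe("path/to/lecture.wav"))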


🧪 Testing

pytest

The test harness relies on lightweight doubles, so it runs swiftly without needing to download ML models.
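
As an illustration of that style (a hypothetical test, not one taken from the suite), a fake transcriber can stand in for the real Whisper backend so nothing needs to be downloaded:

    # test_fake_transcriber.py – hypothetical example of a lightweight test double.
    class FakeTranscriber:
        """Stands in for the real Whisper-backed transcriber."""

        def transcribe(self, audio_path: str) -> str:
            return f"transcript for {audio_path}"

    def summarise(transcriber, audio_path: str) -> dict:
        text = transcriber.transcribe(audio_path)
        return {"audio": audio_path, "words": len(text.split())}

    def test_summarise_uses_the_double() -> None:
        result = summarise(FakeTranscriber(), "lecture.wav")
        assert result == {"audio": "lecture.wav", "words": 3}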


🤝 Contributing

Pull requests are welcome! Please ensure code is formatted, tests are green, and any UI additions match the project's existing look and feel. For feature proposals or feedback, open an issue and start a discussion.


Crafted with care for educators, researchers, and knowledge artisans everywhere.
