shauray8/continuity
Continuity

Continuity is a no-nonsense, blazingly fast framework for running and training diffusion models like LTX-Video and WAN2.1. We’ve crammed it with custom CUTLASS kernels that make your GPU sing, tuned specifically for whatever silicon you’re packing. Sick of sluggish inference or training? This is your ticket out.

Features

  • Inference: Hands down one of the fastest open-source inference engines for diffusion models. Slow code can shove it.

  • Custom Kernels: CUTLASS kernels built from scratch to max out your GPU. No lazy, generic slop here.

  • Training: Full fine-tuning and LoRA scripts included. Fast inference is pointless if training’s a slog.

  • GPU-Specific Optimizations: We sniff out your GPU and tune everything to its architecture. Your A100 or 3090 isn’t some random toaster—treat it right.
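To make the GPU-specific dispatch concrete, here is a minimal, hypothetical sketch (not Continuity’s actual API) of the idea: map a CUDA compute capability to an architecture-tuned kernel variant, the way a library would pick an sm80 path for an A100 versus an sm86 path for a 3090. All names below are illustrative assumptions.

```python
# Hypothetical sketch of architecture-based kernel dispatch.
# Kernel names are made up for illustration; real code would read the
# compute capability from the driver (e.g. torch.cuda.get_device_capability()).

def select_kernel(compute_capability: tuple) -> str:
    """Map a (major, minor) CUDA compute capability to a kernel variant."""
    major, minor = compute_capability
    if major >= 9:                     # Hopper-class (e.g. H100)
        return "sm90_cutlass_attention"
    if major == 8 and minor >= 6:      # Ampere consumer (e.g. RTX 3090)
        return "sm86_cutlass_attention"
    if major == 8:                     # Ampere datacenter (e.g. A100)
        return "sm80_cutlass_attention"
    return "generic_attention"         # fallback for older architectures

# An A100 reports compute capability (8, 0); an RTX 3090 reports (8, 6).
print(select_kernel((8, 0)))  # -> sm80_cutlass_attention
print(select_kernel((8, 6)))  # -> sm86_cutlass_attention
```

Dispatching once at load time like this keeps the hot path branch-free: each architecture gets a kernel compiled for its tensor-core shapes and shared-memory budget instead of one generic binary.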

Contributing

Got a bug? A brilliant idea? Send a pull request. Keep your code tight and your commits sane—no one’s got time for a mess.

About

A no-nonsense library for fast inference and training of diffusion models.
