Test implementation of "Boosting Latent Diffusion with Flow Matching". Please raise issues if you have ideas on how to improve the code. I tried to follow the paper as closely as I could, but the noise-concatenation step still confuses me: concatenating noise to the source changes the tensor shapes, so the tensors mismatch and flow matching can no longer be performed. I worked around this by adding the noise to the source tensor instead of concatenating it, but I suspect this may perform somewhat worse than mixing in noise directly with a specialized UNet.
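A minimal sketch of the workaround described above, using NumPy for illustration (the function name, `sigma`, and shapes are my own, not the paper's API): the noise is added to the source latent so it keeps the same shape as the target, and the usual linear flow-matching path and velocity target can then be formed.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_pair_add_noise(z_src, z_tgt, t, sigma=0.1):
    """Build one flow-matching training pair (illustrative sketch).

    Instead of channel-concatenating noise with the source latent
    (which would change the channel count and break the interpolation),
    the noise is *added* to the source, keeping shapes aligned with the
    target latent.
    """
    noise = rng.standard_normal(z_src.shape)
    x0 = z_src + sigma * noise        # noised source, same shape as target
    xt = (1.0 - t) * x0 + t * z_tgt   # linear interpolation path at time t
    v_target = z_tgt - x0             # constant-velocity regression target
    return xt, v_target
```

A network would then be trained to regress `v_target` from `(xt, t)`; at `t = 1` the path reaches the target latent exactly.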
Reference implementation by Bene Arnthof. Please raise issues and open pull requests. I will wrap this code in a nice package in the coming days; for now, enjoy this minimal example that trains on CelebA. Tested on a 40GB A100; training is very slow on a single card with a batch size of 8.
Johannes S. Fischer* · Ming Gui* · Pingchuan Ma* · Nick Stracke · Stefan A. Baumann · Björn Ommer
CompVis Group, LMU Munich
* denotes equal contribution
Recently, there has been tremendous progress in visual synthesis and the underlying generative models.
Here, diffusion models (DMs) stand out particularly, but lately, flow matching (FM) has also garnered
considerable interest. While DMs excel in providing diverse images, they suffer from long training and
slow generation. With latent diffusion, these issues are only partially alleviated. Conversely, FM offers
faster training and inference but exhibits less diversity in synthesis. We demonstrate that introducing FM between the Diffusion model and the convolutional decoder in Latent Diffusion models offers high-resolution image synthesis with reduced computational cost and model size. Diffusion can then efficiently provide the necessary generation diversity. FM compensates for the lower resolution, mapping the small latent space to a high-dimensional one.
Subsequently, the convolutional decoder of the LDM maps these latents to high-resolution images. By
combining the diversity of DMs, the efficiency of FMs, and the effectiveness of convolutional decoders, we
achieve state-of-the-art high-resolution image synthesis.
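The three-stage pipeline the abstract describes can be sketched end to end. Everything below is a stand-in: the function names, the nearest-neighbor latent up-sampling via `np.kron`, and the stubbed velocity field and decoder are my own illustrative choices, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_sample(shape):
    """Stage 1: the DM provides a diverse low-resolution latent (stubbed as noise)."""
    return rng.standard_normal(shape)

def fm_velocity(x, t):
    """Stage 2: the learned FM velocity field (stubbed as simple contracting dynamics)."""
    return -0.1 * x

def decode(z):
    """Stage 3: the LDM's convolutional decoder (stubbed)."""
    return np.tanh(z)

def euler_integrate(x, steps=10):
    """Integrate the FM ODE dx/dt = v(x, t) from t=0 to t=1 with Euler steps."""
    dt = 1.0 / steps
    t = 0.0
    for _ in range(steps):
        x = x + dt * fm_velocity(x, t)
        t += dt
    return x

z_lo = diffusion_sample((1, 4, 8, 8))                      # small latent from the DM
z_up = np.kron(z_lo, np.ones((1, 1, 2, 2)))                # up-sample to the larger latent grid
z_hi = euler_integrate(z_up)                               # FM refines it in the high-res latent space
img = decode(z_hi)                                         # decoder maps latents to the final image
```

The point of the split is that the expensive, diversity-providing diffusion runs only at the small resolution, while the cheaper FM ODE and the convolutional decoder carry the result up to high resolution.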
Figure: synthesized samples.
Super-resolution samples from the LHQ dataset. Left: low-resolution ground-truth image, bilinearly up-sampled. Right: high-resolution image, up-sampled in latent space with our CFM model.
Figure: up-sampling results.