
MMagic v1.0.0 Release

@Z-Fran released this 25 Apr 09:46 · 151 commits to main since this release · 4094cb3

We are excited to announce the release of MMagic v1.0.0 that inherits from MMEditing and MMGeneration.


Since its inception, MMEditing has been the preferred algorithm library for many super-resolution, editing, and generation tasks, helping research teams win more than 10 top international competitions and supporting over 100 GitHub ecosystem projects. After iterative updates based on the OpenMMLab 2.0 framework and a merger with MMGeneration, MMEditing has become a powerful tool that supports low-level algorithms based on both GANs and CNNs.

Today, MMEditing embraces Generative AI and transforms into a more advanced and comprehensive AIGC toolkit: MMagic (Multimodal Advanced, Generative, and Intelligent Creation).

In MMagic, we support 53+ models across multiple tasks such as Stable Diffusion fine-tuning, text-to-image generation, image and video restoration, super-resolution, editing, and generation. With excellent training and experiment management support from MMEngine, MMagic provides agile and flexible experimental support for researchers and AIGC enthusiasts, and will help you on your AIGC exploration journey. With MMagic, experience more magic in generation! Let's open a new era beyond editing together. More than Editing, Unlock the Magic!

Highlights

1. New Models

We support 11 new models in 4 new tasks.

  • Text2Image / Diffusion
    • ControlNet
    • DreamBooth
    • Stable Diffusion
    • Disco Diffusion
    • GLIDE
    • Guided Diffusion
  • 3D-aware Generation
    • EG3D
  • Image Restoration
    • NAFNet
    • Restormer
    • SwinIR
  • Image Colorization
    • InstColorization
mmagic_introduction.mp4

2. Magic Diffusion Model

For diffusion models, we provide the following "magic":

  • Support image generation based on Stable Diffusion and Disco Diffusion (see the inference sketch after this list).

  • Support fine-tuning methods such as DreamBooth and DreamBooth LoRA.

  • Support controllability in text-to-image generation using ControlNet.

  • Support acceleration and optimization strategies based on xFormers to improve training and inference efficiency.

  • Support video generation based on MultiFrame Render.
    MMagic supports the generation of long videos in various styles through ControlNet and MultiFrame Render.
    prompt keywords: a handsome man, silver hair, smiling, play basketball

    caixukun_dancing_begin_fps10_frames_cat.mp4

    prompt keywords: a girl, black hair, white pants, smiling, play basketball

    caixukun_dancing_begin_fps10_frames_girl_boycat.mp4

    prompt keywords: a handsome man

    zhou_woyangni_fps10_frames_resized_cat.mp4
  • Support calling base models and sampling strategies through DiffuserWrapper (a config sketch follows this list).

  • SAM + MMagic = Generate Anything!
    SAM (Segment Anything Model) has drawn wide attention recently, and it can also lend its power to MMagic! If you want to create your own animation, head to OpenMMLab PlayGround.

    huangbo_fps10_playground_party_fixloc_cat.mp4
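
To make the first item above concrete, here is a minimal text-to-image sketch using the high-level MMagicInferencer API, following the usage shown in MMagic's README; the model alias and argument names should still be verified against the version you install.

```python
from mmagic.apis import MMagicInferencer

# Build an inferencer for Stable Diffusion; the alias 'stable_diffusion'
# follows the README's documented naming, but verify it for your version.
sd_inferencer = MMagicInferencer(model_name='stable_diffusion')

text_prompts = 'A panda is having dinner at KFC'
result_out_dir = 'output/sd_res.png'

# Run text-to-image inference and save the result to disk.
sd_inferencer.infer(text_prompts=text_prompts, result_out_dir=result_out_dir)
```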
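
For the DiffuserWrapper item, the sketch below illustrates how a config might wrap HuggingFace diffusers components so that base models and samplers become swappable by configuration. The type names and keys are assumptions modeled on MMagic's Stable Diffusion configs; consult configs/stable_diffusion in the repo for the authoritative version.

```python
# A hedged sketch of a config wrapping HuggingFace diffusers components.
# Type names and keys are assumptions; check the real configs in the repo.
pretrained = 'runwayml/stable-diffusion-v1-5'

model = dict(
    type='StableDiffusion',
    # UNet and VAE are loaded from diffusers checkpoints via the wrapper.
    unet=dict(type='UNet2DConditionModel',
              from_pretrained=pretrained, subfolder='unet'),
    vae=dict(type='AutoencoderKL',
             from_pretrained=pretrained, subfolder='vae'),
    # The sampling strategy is also pluggable: swap schedulers by config.
    scheduler=dict(type='DDPMScheduler',
                   from_pretrained=pretrained, subfolder='scheduler'),
)
```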

3. Upgraded Framework

To improve your "spellcasting" efficiency, we have made the following adjustments to the "magic circuit":

  • By using MMEngine and MMCV from the OpenMMLab 2.0 framework, we decompose the editing framework into different modules, so one can easily construct a customized editor framework by combining them. Training processes can be defined just like playing with Legos, with rich components and strategies provided. In MMagic, you can control the training process with different levels of APIs.
  • Support for 33+ algorithms accelerated by PyTorch 2.0.
  • Refactor DataSample to support the combination and splitting of batch dimensions.
  • Refactor DataPreprocessor and unify the data format for various tasks during training and inference.
  • Refactor MultiValLoop and MultiTestLoop, supporting the evaluation of both generation-type metrics (e.g. FID) and reconstruction-type metrics (e.g. SSIM), and supporting the evaluation of multiple datasets at once.
  • Support visualization to local files, TensorBoard, or wandb (a sample config follows this list).
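
For the visualization item above, a config can fan logs and images out to several backends at once. The sketch below assumes MMEngine's standard backend classes; the visualizer type and any extra task-specific keys should be checked against MMagic's own configs.

```python
# A minimal multi-backend visualization sketch using MMEngine's standard
# backends; the visualizer type is an assumption, and real configs may
# need extra task-specific keys (e.g. which images to concatenate).
vis_backends = [
    dict(type='LocalVisBackend'),        # write images/scalars to disk
    dict(type='TensorboardVisBackend'),  # log to TensorBoard
    dict(type='WandbVisBackend'),        # log to Weights & Biases
]
visualizer = dict(type='ConcatImageVisualizer', vis_backends=vis_backends)
```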

New Features & Improvements

  • Support 53+ algorithms, 232+ configs, 213+ checkpoints, 26+ loss functions, and 20+ metrics.
  • Support ControlNet animation and a Gradio GUI. Click to view.
  • Support Inferencer and Demo using high-level inference APIs (see the sketch after this list). Click to view.
  • Support a Gradio GUI for inpainting inference. Click to view.
  • Support qualitative comparison tools. Click to view.
  • Enable projects. Click to view.
  • Improve converter scripts and documentation for datasets. Click to view.
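
As a companion to the text-to-image sketch in the diffusion section, the same high-level inference API can also drive the new restoration models. The 'nafnet' alias and argument names below are assumptions based on MMagic's documented inferencer usage; check the model zoo for the exact supported names.

```python
from mmagic.apis import MMagicInferencer

# Hypothetical alias for the new NAFNet restoration model; confirm the
# supported model names in MMagic's model zoo before running.
inferencer = MMagicInferencer(model_name='nafnet')

# Restore a degraded image and write the result to disk.
inferencer.infer(img='demo/noisy_input.png',
                 result_out_dir='output/restored.png')
```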