[CodeCamp2023-240] Adding support for Consistency Models #2045

Status: Closed (96 commits)
feac9ba
1st
xiaomile Jun 2, 2023
b100239
debug
xiaomile Jun 21, 2023
b080671
20230710 adjustments
xiaomile Jul 10, 2023
bda8007
Refactor code and consolidate models so that editors avoid importing too many classes
xiaomile Jul 17, 2023
e990639
Refactor code and consolidate models so that editors avoid importing too many classes
xiaomile Jul 17, 2023
5f055e9
Merge branch 'main' of https://github.com/xiaomile/mmagic
xiaomile Jul 24, 2023
02a0619
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
cbbea41
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
e836c69
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
0be4fde
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
cafc451
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
0a9e274
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
61aadee
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
80bf698
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
33c7751
Support DeblurGANv2 inference
xiaomile Jul 25, 2023
2b0bbf2
Support DeblurGANv2 inference
xiaomile Jul 25, 2023
ee28acf
Support DeblurGANv2 inference
xiaomile Jul 25, 2023
eab4f43
Support DeblurGANv2 inference
xiaomile Jul 25, 2023
6623669
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
1e91d3f
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
25d133d
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
b4ae7b8
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
f828049
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
320ca05
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
b933ba9
Support DeblurGANv2 inference
xiaomile Jul 27, 2023
b1ac1df
Support DeblurGANv2 inference
xiaomile Jul 27, 2023
d333d56
Support DeblurGANv2 inference
xiaomile Jul 27, 2023
dcb5987
Merge branch 'main' into main
xiaomile Jul 27, 2023
76187c3
Support DeblurGANv2 inference
xiaomile Jul 27, 2023
1d0ced9
Merge branch 'main' into main
zengyh1900 Jul 28, 2023
8a1dadc
Update .gitignore
xiaomile Jul 30, 2023
49a45cc
Update .gitignore
xiaomile Jul 30, 2023
2bdfedf
Update .gitignore
xiaomile Jul 30, 2023
8dbf240
Update .gitignore
xiaomile Jul 30, 2023
420de7e
Update .gitignore
xiaomile Jul 30, 2023
8809347
Update .gitignore
xiaomile Jul 30, 2023
b9fe117
Update .gitignore
xiaomile Jul 30, 2023
3bd19e5
Update configs/deblurganv2/README.md
xiaomile Jul 30, 2023
84f9592
Support DeblurGANv2 inference
xiaomile Aug 2, 2023
64800e5
Merge branch 'main' into main
xiaomile Aug 2, 2023
b262ca6
Support DeblurGANv2 inference
xiaomile Aug 2, 2023
d4fb484
Merge branch 'main' of https://github.com/xiaomile/mmagic
xiaomile Aug 2, 2023
52fdc15
Support DeblurGANv2 inference
xiaomile Aug 2, 2023
6856137
Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py
xiaomile Aug 7, 2023
cb281d6
Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py
xiaomile Aug 7, 2023
1211530
Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py
xiaomile Aug 7, 2023
5de913e
Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py
xiaomile Aug 7, 2023
8bd0803
Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py
xiaomile Aug 7, 2023
f18bb29
Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py
xiaomile Aug 7, 2023
8c97de1
Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py
xiaomile Aug 7, 2023
f21e10f
Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py
xiaomile Aug 7, 2023
98741ee
Support DeblurGANv2 inference
xiaomile Aug 8, 2023
ce8a9b2
Merge branch 'open-mmlab:main' into main
xiaomile Aug 9, 2023
c2b4666
Merge branch 'open-mmlab:main' into main
xiaomile Aug 30, 2023
0fb88f2
Adding support for FastComposer
xiaomile Aug 30, 2023
4014991
Adding support for FastComposer
xiaomile Aug 31, 2023
21c0ce3
Adding support for FastComposer
xiaomile Aug 31, 2023
67d3bf3
Adding support for FastComposer
xiaomile Aug 31, 2023
7f51930
Adding support for FastComposer
xiaomile Aug 31, 2023
306cc83
Adding support for FastComposer
xiaomile Aug 31, 2023
1b47eae
Adding support for FastComposer
xiaomile Aug 31, 2023
b74e551
Adding support for FastComposer
xiaomile Aug 31, 2023
1ed33d2
Adding support for FastComposer
xiaomile Aug 31, 2023
6d0b8f9
Merge branch 'main' into main
xiaomile Sep 1, 2023
a987beb
Adding support for FastComposer
xiaomile Sep 1, 2023
69499a0
Merge branch 'main' into main
xiaomile Sep 1, 2023
254a71f
Merge branch 'main' of https://github.com/xiaomile/mmagic
xiaomile Sep 1, 2023
12f16e0
Adding support for FastComposer
xiaomile Sep 1, 2023
38b7efa
Merge branch 'main' into main
xiaomile Sep 4, 2023
7c5b905
Merge branch 'main' into main
xiaomile Sep 5, 2023
8901eb1
Adding support for FastComposer
xiaomile Sep 5, 2023
8f9647e
Adding support for FastComposer
xiaomile Sep 5, 2023
f74ef3d
Adding support for FastComposer
xiaomile Sep 6, 2023
c2111da
Adding support for FastComposer
xiaomile Sep 6, 2023
9e3781e
Adding support for FastComposer
xiaomile Sep 6, 2023
a4b3491
Merge branch 'main' into main
xiaomile Sep 7, 2023
f845fdf
Adding support for FastComposer
xiaomile Sep 7, 2023
312c5f0
Adding support for FastComposer
xiaomile Sep 7, 2023
4bbacbb
Adding support for FastComposer
xiaomile Sep 8, 2023
672ad69
Merge branch 'main' of https://github.com/xiaomile/mmagic
xiaomile Sep 25, 2023
b820313
Adding support for Consistency Models
xiaomile Oct 3, 2023
31b3298
Adding support for Consistency Models
xiaomile Oct 3, 2023
f34020e
Update README.md
xiaomile Oct 3, 2023
df005b4
Adding support for Consistency Models
xiaomile Oct 3, 2023
ce9a74d
Adding support for Consistency Models
xiaomile Oct 3, 2023
b372918
Adding support for Consistency Models
xiaomile Oct 3, 2023
2c4c010
Adding support for Consistency Models
xiaomile Oct 3, 2023
864b3aa
Adding support for Consistency Models
xiaomile Oct 7, 2023
a49cb59
Adding support for Consistency Models
xiaomile Oct 7, 2023
d6bfcfc
Merge branch 'main' into main
xiaomile Oct 11, 2023
3260c13
[FIX] Check circle ci memory
xiaomile Oct 13, 2023
55d5003
Merge branch 'main' into main
liuwenran Oct 18, 2023
83b6dfb
Merge branch 'main' into main
liuwenran Oct 19, 2023
2ddab89
Merge branch 'main' into main
xiaomile Oct 20, 2023
588a979
Adding support for Consistency Models
xiaomile Oct 20, 2023
b11376c
Merge branch 'main' into main
xiaomile Dec 12, 2023
87 changes: 87 additions & 0 deletions configs/consistency_models/README.md
@@ -0,0 +1,87 @@
# Consistency Models (ICML'2023)

> [Consistency Models](https://arxiv.org/abs/2303.01469)

> **Task**: conditional

<!-- [ALGORITHM] -->

## Abstract

<!-- [ABSTRACT] -->

Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256.

<div align="center">
<img src="https://github.com/xiaomile/mmagic/assets/14927720/1586f0c0-8def-4339-b898-470333a26125" width=800>
</div>
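The one-step versus multistep trade-off described in the abstract can be made concrete with a minimal NumPy sketch of multistep consistency sampling (Algorithm 1 in the paper). Note that `consistency_fn` below is only a placeholder for the trained network, and the noise levels in `ts` here are illustrative sigma values, not the integer indices used in the configs later in this PR:

```python
import numpy as np


def consistency_fn(x, t):
    # Placeholder for the trained consistency model f(x, t); the real
    # network maps a noisy sample at noise level t directly to data.
    return x / (1.0 + t)


def multistep_sample(shape, ts=(80.0, 22.0, 2.0), sigma_min=0.002, seed=42):
    """Multistep consistency sampling (Algorithm 1, Song et al. 2023).

    One-step generation is the special case where `ts` holds a single
    noise level.
    """
    rng = np.random.default_rng(seed)
    # Start from pure noise at the maximal noise level and map to data.
    x = rng.standard_normal(shape) * ts[0]
    x = consistency_fn(x, ts[0])
    for t in ts[1:]:
        # Re-noise the current sample to level t, then denoise in one step.
        z = rng.standard_normal(shape)
        x = x + np.sqrt(t**2 - sigma_min**2) * z
        x = consistency_fn(x, t)
    return x
```

Adding more entries to `ts` spends extra network evaluations to improve sample quality, which is exactly the compute-for-quality trade the paper describes.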

## Pre-trained models

| Model | Dataset | Conditional | Download |
| :-------------------------------------------------------------------------------------------: | :--------: | :---------: | :------: |
| [onestep on ImageNet-64](./consistency_models_8xb256-imagenet1k-onestep-64x64.py) | imagenet1k | yes | - |
| [multistep on ImageNet-64](./consistency_models_8xb256-imagenet1k-multistep-64x64.py) | imagenet1k | yes | - |
| [onestep on LSUN Bedroom-256](./consistency_models_8xb32-LSUN-bedroom-onestep-256x256.py) | LSUN | no | - |
| [multistep on LSUN Bedroom-256](./consistency_models_8xb32-LSUN-bedroom-multistep-256x256.py) | LSUN | no | - |
| [onestep on LSUN Cat-256](./consistency_models_8xb32-LSUN-cat-onestep-256x256.py) | LSUN | no | - |
| [multistep on LSUN Cat-256](./consistency_models_8xb32-LSUN-cat-multistep-256x256.py) | LSUN | no | - |

You can also download the main model checkpoints from the paper to your local machine and pass the local path to `model_path` before running inference.
Here are the download links for each model checkpoint:

- EDM on ImageNet-64: [edm_imagenet64_ema.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/edm_imagenet64_ema.pt)
- CD on ImageNet-64 with l2 metric: [cd_imagenet64_l2.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_imagenet64_l2.pt)
- CD on ImageNet-64 with LPIPS metric: [cd_imagenet64_lpips.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_imagenet64_lpips.pt)
- CT on ImageNet-64: [ct_imagenet64.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/ct_imagenet64.pt)
- EDM on LSUN Bedroom-256: [edm_bedroom256_ema.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/edm_bedroom256_ema.pt)
- CD on LSUN Bedroom-256 with l2 metric: [cd_bedroom256_l2.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_bedroom256_l2.pt)
- CD on LSUN Bedroom-256 with LPIPS metric: [cd_bedroom256_lpips.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_bedroom256_lpips.pt)
- CT on LSUN Bedroom-256: [ct_bedroom256.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/ct_bedroom256.pt)
- EDM on LSUN Cat-256: [edm_cat256_ema.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/edm_cat256_ema.pt)
- CD on LSUN Cat-256 with l2 metric: [cd_cat256_l2.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_cat256_l2.pt)
- CD on LSUN Cat-256 with LPIPS metric: [cd_cat256_lpips.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_cat256_lpips.pt)
- CT on LSUN Cat-256: [ct_cat256.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/ct_cat256.pt)

## Quick start

**Infer**

<details>
<summary>Infer Instructions</summary>

You can use the following commands to infer with the model.

```shell
# onestep
python demo/mmagic_inference_demo.py \
--model-name consistency_models \
--model-config configs/consistency_models/consistency_models_8xb256-imagenet1k-onestep-64x64.py \
--result-out-dir demo_consistency_model.jpg

# multistep
python demo/mmagic_inference_demo.py \
--model-name consistency_models \
--model-config configs/consistency_models/consistency_models_8xb256-imagenet1k-multistep-64x64.py \
--result-out-dir demo_consistency_model.jpg

# conditional
python demo/mmagic_inference_demo.py \
--model-name consistency_models \
--model-config configs/consistency_models/consistency_models_8xb256-imagenet1k-onestep-64x64.py \
--label 145 \
--result-out-dir demo_consistency_model.jpg
```

</details>

## Citation

```bibtex
@article{song2023consistency,
title={Consistency Models},
author={Song, Yang and Dhariwal, Prafulla and Chen, Mark and Sutskever, Ilya},
journal={arXiv preprint arXiv:2303.01469},
year={2023},
}
```
88 changes: 88 additions & 0 deletions configs/consistency_models/README_zh-CN.md
@@ -0,0 +1,88 @@
# Consistency Models (ICML'2023)

> [Consistency Models](https://arxiv.org/abs/2303.01469)

> **任务**: 条件生成

<!-- [ALGORITHM] -->

## 摘要

<!-- [ABSTRACT] -->

扩散模型在图像、音频和视频生成领域取得了显著的进展,但它们依赖于迭代采样过程,导致生成速度较慢。为了克服这个限制,我们提出了一种新的模型家族——一致性模型,通过直接将噪声映射到数据来生成高质量的样本。它们通过设计支持快速的单步生成,同时仍然允许多步采样以在计算和样本质量之间进行权衡。它们还支持零样本数据编辑,如图像修补、上色和超分辨率,而不需要在这些任务上进行显式训练。一致性模型可以通过蒸馏预训练的扩散模型或作为独立的生成模型进行训练。通过大量实验证明,它们在单步和少步采样方面优于现有的扩散模型蒸馏技术,实现了 CIFAR-10 上的新的最先进 FID(Fréchet Inception Distance)为 3.55,ImageNet 64x64 上为 6.20 的结果。当独立训练时,一致性模型成为一种新的生成模型家族,在 CIFAR-10、ImageNet 64x64 和 LSUN 256x256 等标准基准测试上可以优于现有的单步非对抗性生成模型。

<div align="center">
<img src="https://github.com/xiaomile/mmagic/assets/14927720/1586f0c0-8def-4339-b898-470333a26125" width=800>
</div>

## 预训练模型

| Model | Dataset | Conditional | Download |
| :-------------------------------------------------------------------------------------------: | :--------: | :---------: | :------: |
| [onestep on ImageNet-64](./consistency_models_8xb256-imagenet1k-onestep-64x64.py) | imagenet1k | yes | - |
| [multistep on ImageNet-64](./consistency_models_8xb256-imagenet1k-multistep-64x64.py) | imagenet1k | yes | - |
| [onestep on LSUN Bedroom-256](./consistency_models_8xb32-LSUN-bedroom-onestep-256x256.py) | LSUN | no | - |
| [multistep on LSUN Bedroom-256](./consistency_models_8xb32-LSUN-bedroom-multistep-256x256.py) | LSUN | no | - |
| [onestep on LSUN Cat-256](./consistency_models_8xb32-LSUN-cat-onestep-256x256.py) | LSUN | no | - |
| [multistep on LSUN Cat-256](./consistency_models_8xb32-LSUN-cat-multistep-256x256.py) | LSUN | no | - |

你也可以在进行推理前先把论文中主要模型的权重下载到本地机器上,并将权重路径传给 `model_path`。
以下是每个模型权重的下载链接:

- EDM on ImageNet-64: [edm_imagenet64_ema.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/edm_imagenet64_ema.pt)
- CD on ImageNet-64 with l2 metric: [cd_imagenet64_l2.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_imagenet64_l2.pt)
- CD on ImageNet-64 with LPIPS metric: [cd_imagenet64_lpips.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_imagenet64_lpips.pt)
- CT on ImageNet-64: [ct_imagenet64.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/ct_imagenet64.pt)
- EDM on LSUN Bedroom-256: [edm_bedroom256_ema.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/edm_bedroom256_ema.pt)
- CD on LSUN Bedroom-256 with l2 metric: [cd_bedroom256_l2.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_bedroom256_l2.pt)
- CD on LSUN Bedroom-256 with LPIPS metric: [cd_bedroom256_lpips.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_bedroom256_lpips.pt)
- CT on LSUN Bedroom-256: [ct_bedroom256.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/ct_bedroom256.pt)
- EDM on LSUN Cat-256: [edm_cat256_ema.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/edm_cat256_ema.pt)
- CD on LSUN Cat-256 with l2 metric: [cd_cat256_l2.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_cat256_l2.pt)
- CD on LSUN Cat-256 with LPIPS metric: [cd_cat256_lpips.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/cd_cat256_lpips.pt)
- CT on LSUN Cat-256: [ct_cat256.pt](https://download.openxlab.org.cn/models/xiaomile/consistency_models/weight/ct_cat256.pt)

## 快速开始

**推理**

<details>
<summary>推理说明</summary>

您可以使用以下命令来使用该模型进行推理:

```shell
# 一步生成
python demo/mmagic_inference_demo.py \
--model-name consistency_models \
--model-config configs/consistency_models/consistency_models_8xb256-imagenet1k-onestep-64x64.py \
--result-out-dir demo_consistency_model.jpg

# 多步生成
python demo/mmagic_inference_demo.py \
--model-name consistency_models \
--model-config configs/consistency_models/consistency_models_8xb256-imagenet1k-multistep-64x64.py \
--result-out-dir demo_consistency_model.jpg

# 条件控制生成
python demo/mmagic_inference_demo.py \
--model-name consistency_models \
--model-config configs/consistency_models/consistency_models_8xb256-imagenet1k-onestep-64x64.py \
--label 145 \
--result-out-dir demo_consistency_model.jpg
```

</details>

## Citation

```bibtex
@article{song2023consistency,
title={Consistency Models},
author={Song, Yang and Dhariwal, Prafulla and Chen, Mark and Sutskever, Ilya},
journal={arXiv preprint arXiv:2303.01469},
year={2023},
}
```
46 changes: 46 additions & 0 deletions configs/consistency_models/consistency_models_8xb256-imagenet1k-multistep-64x64.py
@@ -0,0 +1,46 @@
# Copyright (c) OpenMMLab. All rights reserved.
_base_ = ['../_base_/default_runtime.py']

denoiser_config = dict(
type='KarrasDenoiser',
sigma_data=0.5,
sigma_max=80.0,
sigma_min=0.002,
weight_schedule='uniform',
)

unet_config = dict(
type='ConsistencyUNetModel',
in_channels=3,
model_channels=192,
num_res_blocks=3,
dropout=0.0,
channel_mult='',
use_checkpoint=False,
use_fp16=False,
num_head_channels=64,
num_heads=4,
num_heads_upsample=-1,
resblock_updown=True,
use_new_attention_order=False,
use_scale_shift_norm=True)

model = dict(
type='ConsistencyModel',
unet=unet_config,
denoiser=denoiser_config,
attention_resolutions='32,16,8',
batch_size=4,
class_cond=True,
generator='determ',
image_size=64,
learn_sigma=False,
model_path='https://download.openxlab.org.cn/models/xiaomile/'
'consistency_models/weight/cd_imagenet64_l2.pt',
num_classes=1000,
sampler='multistep',
seed=42,
training_mode='consistency_distillation',
ts='0,22,39',
data_preprocessor=dict(
type='DataPreprocessor', mean=[127.5] * 3, std=[127.5] * 3))
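The `data_preprocessor` above normalizes with per-channel mean and std of 127.5, i.e. it maps uint8 pixels in [0, 255] to roughly [-1, 1], the input range diffusion-style models typically expect. A quick NumPy check of that mapping:

```python
import numpy as np

# mean/std from the config's DataPreprocessor: [127.5] * 3 for both.
mean = np.full((3, 1, 1), 127.5)
std = np.full((3, 1, 1), 127.5)

# Extreme and mid uint8 pixel values, one per channel.
img = np.array([0, 127, 255], dtype=np.uint8).reshape(3, 1, 1)

normed = (img.astype(np.float64) - mean) / std  # 0 -> -1.0, 255 -> 1.0
```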
46 changes: 46 additions & 0 deletions configs/consistency_models/consistency_models_8xb256-imagenet1k-onestep-64x64.py
@@ -0,0 +1,46 @@
# Copyright (c) OpenMMLab. All rights reserved.
_base_ = ['../_base_/default_runtime.py']

denoiser_config = dict(
type='KarrasDenoiser',
sigma_data=0.5,
sigma_max=80.0,
sigma_min=0.002,
weight_schedule='uniform',
)

unet_config = dict(
type='ConsistencyUNetModel',
in_channels=3,
model_channels=192,
num_res_blocks=3,
dropout=0.0,
channel_mult='',
use_checkpoint=False,
use_fp16=False,
num_head_channels=64,
num_heads=4,
num_heads_upsample=-1,
resblock_updown=True,
use_new_attention_order=False,
use_scale_shift_norm=True)

model = dict(
type='ConsistencyModel',
unet=unet_config,
denoiser=denoiser_config,
attention_resolutions='32,16,8',
batch_size=4,
class_cond=True,
generator='determ',
image_size=64,
learn_sigma=False,
model_path='https://download.openxlab.org.cn/models/xiaomile/'
'consistency_models/weight/cd_imagenet64_l2.pt',
num_classes=1000,
sampler='onestep',
seed=42,
training_mode='consistency_distillation',
ts='',
data_preprocessor=dict(
type='DataPreprocessor', mean=[127.5] * 3, std=[127.5] * 3))
47 changes: 47 additions & 0 deletions configs/consistency_models/consistency_models_8xb32-LSUN-bedroom-multistep-256x256.py
@@ -0,0 +1,47 @@
# Copyright (c) OpenMMLab. All rights reserved.
_base_ = ['../_base_/default_runtime.py']

denoiser_config = dict(
type='KarrasDenoiser',
sigma_data=0.5,
sigma_max=80.0,
sigma_min=0.002,
weight_schedule='uniform',
)

unet_config = dict(
type='ConsistencyUNetModel',
in_channels=3,
model_channels=256,
num_res_blocks=2,
dropout=0.0,
channel_mult='',
use_checkpoint=False,
use_fp16=False,
num_head_channels=64,
num_heads=4,
num_heads_upsample=-1,
resblock_updown=True,
use_new_attention_order=False,
use_scale_shift_norm=False)

model = dict(
type='ConsistencyModel',
unet=unet_config,
denoiser=denoiser_config,
attention_resolutions='32,16,8',
batch_size=4,
class_cond=False,
generator='determ-indiv',
image_size=256,
learn_sigma=False,
model_path='https://download.openxlab.org.cn/models/xiaomile/'
'consistency_models/weight/ct_bedroom256.pt',
num_classes=1000,
sampler='multistep',
seed=42,
training_mode='consistency_distillation',
ts='0,67,150',
steps=151,
data_preprocessor=dict(
type='DataPreprocessor', mean=[127.5] * 3, std=[127.5] * 3))
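In this config, `ts='0,67,150'` gives indices into a noise schedule discretized to `steps=151` points, running from `sigma_max=80.0` down to `sigma_min=0.002`. The sketch below shows how such a schedule could be built and indexed, assuming the Karras (EDM) convention with `rho=7` used by the upstream consistency-models code (an assumption on my part, not something stated in this diff):

```python
import numpy as np


def karras_schedule(n, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    # Karras et al. (2022) noise schedule: n levels interpolated in
    # sigma^(1/rho) space, from sigma_max down to sigma_min.
    ramp = np.linspace(0.0, 1.0, n)
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    return (max_inv + ramp * (min_inv - max_inv)) ** rho


sigmas = karras_schedule(151)          # steps=151 as in the config
ts = [0, 67, 150]                      # ts='0,67,150' parsed to indices
selected = sigmas[ts]                  # noise levels the sampler visits
```

Index 0 picks the maximal noise level and index 150 the minimal one, so the multistep sampler walks from pure noise toward clean data through the chosen intermediate level.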
46 changes: 46 additions & 0 deletions configs/consistency_models/consistency_models_8xb32-LSUN-bedroom-onestep-256x256.py
@@ -0,0 +1,46 @@
# Copyright (c) OpenMMLab. All rights reserved.
_base_ = ['../_base_/default_runtime.py']

denoiser_config = dict(
type='KarrasDenoiser',
sigma_data=0.5,
sigma_max=80.0,
sigma_min=0.002,
weight_schedule='uniform',
)

unet_config = dict(
type='ConsistencyUNetModel',
in_channels=3,
model_channels=256,
num_res_blocks=2,
dropout=0.0,
channel_mult='',
use_checkpoint=False,
use_fp16=False,
num_head_channels=64,
num_heads=4,
num_heads_upsample=-1,
resblock_updown=True,
use_new_attention_order=False,
use_scale_shift_norm=False)

model = dict(
type='ConsistencyModel',
unet=unet_config,
denoiser=denoiser_config,
attention_resolutions='32,16,8',
batch_size=4,
class_cond=False,
generator='determ-indiv',
image_size=256,
learn_sigma=False,
model_path='https://download.openxlab.org.cn/models/xiaomile/'
'consistency_models/weight/ct_bedroom256.pt',
num_classes=1000,
sampler='onestep',
seed=42,
training_mode='consistency_distillation',
ts='',
data_preprocessor=dict(
type='DataPreprocessor', mean=[127.5] * 3, std=[127.5] * 3))