How are the models in data.zip made? #7

Open
treeform opened this issue Jan 8, 2023 · 8 comments

Comments

@treeform

treeform commented Jan 8, 2023

Thank you for making this repo, it's very educational. This minimal implementation is brilliant; the bigger SD repos are very hard to understand.

Do you have a script to convert the official models, like this one: https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt, to the format you use in this repo?

Or are you using a model from some other source?

Are you using the SD 1.5 model?

How hard would it be to make this repo use models trained by others, like Inkpunk for example? https://huggingface.co/Envvi/Inkpunk-Diffusion/blob/main/Inkpunk-Diffusion-v2.ckpt

@kjsman
Owner

kjsman commented Jan 12, 2023

Hello!

In short: the conversion scripts exist, but they are pure spaghetti and unpolished, so I don't want to publish them. (If you need them anyway, I can email them to you.) I'm currently working on another project, so I can't guarantee when I'll polish and publish them...

data.zip is converted from the official SDv1.4 model.

The conversion scripts are incompatible with the SDv2 model and its variants. In fact, this repository itself is incompatible with SDv2 and its variants.

I believe this repository can be easily edited for compatibility with SDv2.0; v1.4 and v2.0 differ only in hyperparameters (which are hardcoded and can be easily changed) and CLIP last-layer-skipping behavior (which can be easily implemented).

As long as you're familiar with PyTorch and willing to tackle these problems, I think it is fairly easy to use the v2.0 model with this codebase.
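
(For readers wondering what the last-layer-skipping change involves, here is a minimal, hypothetical sketch, not code from this repo: the idea is that SDv2 conditions the UNet on the hidden states of the penultimate CLIP text-encoder layer instead of the final one. The layer types, counts, and sizes below are placeholders.)

import torch
import torch.nn as nn

class TinyTextEncoder(nn.Module):
    # Illustrative stand-in only, not the real CLIP architecture.
    def __init__(self, n_layers=12, dim=64, layers_to_skip=1):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.final_norm = nn.LayerNorm(dim)
        # layers_to_skip=0 mimics SDv1 (use the last layer's output);
        # layers_to_skip=1 mimics SDv2 (use the penultimate layer's output).
        self.layers_to_skip = layers_to_skip

    def forward(self, x):
        hidden = []
        for layer in self.layers:
            x = torch.relu(layer(x))
            hidden.append(x)
        # Pick the hidden state `layers_to_skip` layers before the end, then normalize.
        out = hidden[-(1 + self.layers_to_skip)]
        return self.final_norm(out)

tokens = torch.randn(1, 77, 64)  # (batch, sequence, dim)
print(TinyTextEncoder(layers_to_skip=1)(tokens).shape)  # torch.Size([1, 77, 64])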

@treeform
Author

treeform commented Jan 12, 2023

I would love to have them even if they are unpolished. You can email them to me at treeform a-t istrolid.com.
I find your repo to be the easiest to understand and cleanest implementation of all the ones I have looked at.
Thanks!

@treeform
Author

Oops, I think I typed my email wrong; treeform a-t istrolid.com is the correct one.

@vgoklani

I'd like to see the conversion scripts too, and I'm offering to help clean them up! Could you please share the link? Thanks!

@treeform
Author

I got it working by looking at your weights and the SD 1.4 weights and matching the data up by name. Now any 1.4 or 1.5 model works (including custom models). My script does something a little different now (I have switched to just loading safetensors without conversion), but the main part looks like this; someone could clean it up:

import torch

# inputFile is the path to an SD 1.4/1.5 .ckpt checkpoint.
s = torch.load(inputFile, weights_only=False)["state_dict"]
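# Hypothetical aside (not part of the original snippet): for a .safetensors
# checkpoint, the safetensors package can load the tensors directly instead, e.g.:
#   from safetensors.torch import load_file
#   s = load_file(inputFile)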

# Remap the original Stable Diffusion state_dict keys to this repo's module names.
new = {}
new['diffusion'] = {}
new['encoder'] = {}
new['decoder'] = {}
new['clip'] = {}

new['diffusion']['time_embedding.linear_1.weight'] = s['model.diffusion_model.time_embed.0.weight']
new['diffusion']['time_embedding.linear_1.bias'] = s['model.diffusion_model.time_embed.0.bias']
new['diffusion']['time_embedding.linear_2.weight'] = s['model.diffusion_model.time_embed.2.weight']
new['diffusion']['time_embedding.linear_2.bias'] = s['model.diffusion_model.time_embed.2.bias']
new['diffusion']['unet.encoders.0.0.weight'] = s['model.diffusion_model.input_blocks.0.0.weight']
new['diffusion']['unet.encoders.0.0.bias'] = s['model.diffusion_model.input_blocks.0.0.bias']
new['diffusion']['unet.encoders.1.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.1.0.in_layers.0.weight']
new['diffusion']['unet.encoders.1.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.1.0.in_layers.0.bias']
new['diffusion']['unet.encoders.1.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.1.0.in_layers.2.weight']
new['diffusion']['unet.encoders.1.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.1.0.in_layers.2.bias']
new['diffusion']['unet.encoders.1.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.1.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.1.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.1.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.1.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.1.0.out_layers.0.weight']
new['diffusion']['unet.encoders.1.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.1.0.out_layers.0.bias']
new['diffusion']['unet.encoders.1.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.1.0.out_layers.3.weight']
new['diffusion']['unet.encoders.1.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.1.0.out_layers.3.bias']
new['diffusion']['unet.encoders.1.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.1.1.norm.weight']
new['diffusion']['unet.encoders.1.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.1.1.norm.bias']
new['diffusion']['unet.encoders.1.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.1.1.proj_in.weight']
new['diffusion']['unet.encoders.1.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.1.1.proj_in.bias']
new['diffusion']['unet.encoders.1.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.1.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.1.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.1.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.1.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.1.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.1.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.1.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.1.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.1.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.1.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.1.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.1.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.1.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.1.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.1.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.1.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.1.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.1.1.proj_out.weight']
new['diffusion']['unet.encoders.1.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.1.1.proj_out.bias']
new['diffusion']['unet.encoders.2.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.2.0.in_layers.0.weight']
new['diffusion']['unet.encoders.2.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.2.0.in_layers.0.bias']
new['diffusion']['unet.encoders.2.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.2.0.in_layers.2.weight']
new['diffusion']['unet.encoders.2.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.2.0.in_layers.2.bias']
new['diffusion']['unet.encoders.2.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.2.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.2.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.2.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.2.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.2.0.out_layers.0.weight']
new['diffusion']['unet.encoders.2.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.2.0.out_layers.0.bias']
new['diffusion']['unet.encoders.2.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.2.0.out_layers.3.weight']
new['diffusion']['unet.encoders.2.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.2.0.out_layers.3.bias']
new['diffusion']['unet.encoders.2.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.2.1.norm.weight']
new['diffusion']['unet.encoders.2.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.2.1.norm.bias']
new['diffusion']['unet.encoders.2.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.2.1.proj_in.weight']
new['diffusion']['unet.encoders.2.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.2.1.proj_in.bias']
new['diffusion']['unet.encoders.2.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.2.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.2.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.2.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.2.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.2.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.2.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.2.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.2.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.2.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.2.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.2.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.2.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.2.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.2.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.2.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.2.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.2.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.2.1.proj_out.weight']
new['diffusion']['unet.encoders.2.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.2.1.proj_out.bias']
new['diffusion']['unet.encoders.3.0.weight'] = s['model.diffusion_model.input_blocks.3.0.op.weight']
new['diffusion']['unet.encoders.3.0.bias'] = s['model.diffusion_model.input_blocks.3.0.op.bias']
new['diffusion']['unet.encoders.4.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.4.0.in_layers.0.weight']
new['diffusion']['unet.encoders.4.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.4.0.in_layers.0.bias']
new['diffusion']['unet.encoders.4.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.4.0.in_layers.2.weight']
new['diffusion']['unet.encoders.4.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.4.0.in_layers.2.bias']
new['diffusion']['unet.encoders.4.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.4.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.4.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.4.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.4.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.4.0.out_layers.0.weight']
new['diffusion']['unet.encoders.4.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.4.0.out_layers.0.bias']
new['diffusion']['unet.encoders.4.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.4.0.out_layers.3.weight']
new['diffusion']['unet.encoders.4.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.4.0.out_layers.3.bias']
new['diffusion']['unet.encoders.4.0.residual_layer.weight'] = s['model.diffusion_model.input_blocks.4.0.skip_connection.weight']
new['diffusion']['unet.encoders.4.0.residual_layer.bias'] = s['model.diffusion_model.input_blocks.4.0.skip_connection.bias']
new['diffusion']['unet.encoders.4.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.4.1.norm.weight']
new['diffusion']['unet.encoders.4.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.4.1.norm.bias']
new['diffusion']['unet.encoders.4.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.4.1.proj_in.weight']
new['diffusion']['unet.encoders.4.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.4.1.proj_in.bias']
new['diffusion']['unet.encoders.4.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.4.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.4.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.4.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.4.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.4.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.4.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.4.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.4.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.4.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.4.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.4.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.4.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.4.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.4.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.4.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.4.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.4.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.4.1.proj_out.weight']
new['diffusion']['unet.encoders.4.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.4.1.proj_out.bias']
new['diffusion']['unet.encoders.5.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.5.0.in_layers.0.weight']
new['diffusion']['unet.encoders.5.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.5.0.in_layers.0.bias']
new['diffusion']['unet.encoders.5.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.5.0.in_layers.2.weight']
new['diffusion']['unet.encoders.5.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.5.0.in_layers.2.bias']
new['diffusion']['unet.encoders.5.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.5.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.5.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.5.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.5.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.5.0.out_layers.0.weight']
new['diffusion']['unet.encoders.5.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.5.0.out_layers.0.bias']
new['diffusion']['unet.encoders.5.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.5.0.out_layers.3.weight']
new['diffusion']['unet.encoders.5.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.5.0.out_layers.3.bias']
new['diffusion']['unet.encoders.5.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.5.1.norm.weight']
new['diffusion']['unet.encoders.5.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.5.1.norm.bias']
new['diffusion']['unet.encoders.5.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.5.1.proj_in.weight']
new['diffusion']['unet.encoders.5.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.5.1.proj_in.bias']
new['diffusion']['unet.encoders.5.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.5.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.5.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.5.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.5.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.5.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.5.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.5.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.5.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.5.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.5.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.5.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.5.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.5.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.5.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.5.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.5.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.5.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.5.1.proj_out.weight']
new['diffusion']['unet.encoders.5.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.5.1.proj_out.bias']
new['diffusion']['unet.encoders.6.0.weight'] = s['model.diffusion_model.input_blocks.6.0.op.weight']
new['diffusion']['unet.encoders.6.0.bias'] = s['model.diffusion_model.input_blocks.6.0.op.bias']
new['diffusion']['unet.encoders.7.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.7.0.in_layers.0.weight']
new['diffusion']['unet.encoders.7.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.7.0.in_layers.0.bias']
new['diffusion']['unet.encoders.7.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.7.0.in_layers.2.weight']
new['diffusion']['unet.encoders.7.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.7.0.in_layers.2.bias']
new['diffusion']['unet.encoders.7.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.7.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.7.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.7.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.7.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.7.0.out_layers.0.weight']
new['diffusion']['unet.encoders.7.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.7.0.out_layers.0.bias']
new['diffusion']['unet.encoders.7.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.7.0.out_layers.3.weight']
new['diffusion']['unet.encoders.7.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.7.0.out_layers.3.bias']
new['diffusion']['unet.encoders.7.0.residual_layer.weight'] = s['model.diffusion_model.input_blocks.7.0.skip_connection.weight']
new['diffusion']['unet.encoders.7.0.residual_layer.bias'] = s['model.diffusion_model.input_blocks.7.0.skip_connection.bias']
new['diffusion']['unet.encoders.7.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.7.1.norm.weight']
new['diffusion']['unet.encoders.7.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.7.1.norm.bias']
new['diffusion']['unet.encoders.7.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.7.1.proj_in.weight']
new['diffusion']['unet.encoders.7.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.7.1.proj_in.bias']
new['diffusion']['unet.encoders.7.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.7.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.7.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.7.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.7.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.7.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.7.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.7.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.7.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.7.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.7.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.7.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.7.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.7.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.7.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.7.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.7.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.7.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.7.1.proj_out.weight']
new['diffusion']['unet.encoders.7.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.7.1.proj_out.bias']
new['diffusion']['unet.encoders.8.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.8.0.in_layers.0.weight']
new['diffusion']['unet.encoders.8.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.8.0.in_layers.0.bias']
new['diffusion']['unet.encoders.8.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.8.0.in_layers.2.weight']
new['diffusion']['unet.encoders.8.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.8.0.in_layers.2.bias']
new['diffusion']['unet.encoders.8.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.8.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.8.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.8.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.8.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.8.0.out_layers.0.weight']
new['diffusion']['unet.encoders.8.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.8.0.out_layers.0.bias']
new['diffusion']['unet.encoders.8.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.8.0.out_layers.3.weight']
new['diffusion']['unet.encoders.8.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.8.0.out_layers.3.bias']
new['diffusion']['unet.encoders.8.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.8.1.norm.weight']
new['diffusion']['unet.encoders.8.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.8.1.norm.bias']
new['diffusion']['unet.encoders.8.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.8.1.proj_in.weight']
new['diffusion']['unet.encoders.8.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.8.1.proj_in.bias']
new['diffusion']['unet.encoders.8.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.8.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.8.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.8.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.8.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.8.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.8.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.8.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.8.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.8.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.8.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.8.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.8.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.8.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.8.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.8.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.8.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.8.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.8.1.proj_out.weight']
new['diffusion']['unet.encoders.8.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.8.1.proj_out.bias']
new['diffusion']['unet.encoders.9.0.weight'] = s['model.diffusion_model.input_blocks.9.0.op.weight']
new['diffusion']['unet.encoders.9.0.bias'] = s['model.diffusion_model.input_blocks.9.0.op.bias']
new['diffusion']['unet.encoders.10.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.10.0.in_layers.0.weight']
new['diffusion']['unet.encoders.10.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.10.0.in_layers.0.bias']
new['diffusion']['unet.encoders.10.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.10.0.in_layers.2.weight']
new['diffusion']['unet.encoders.10.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.10.0.in_layers.2.bias']
new['diffusion']['unet.encoders.10.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.10.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.10.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.10.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.10.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.10.0.out_layers.0.weight']
new['diffusion']['unet.encoders.10.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.10.0.out_layers.0.bias']
new['diffusion']['unet.encoders.10.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.10.0.out_layers.3.weight']
new['diffusion']['unet.encoders.10.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.10.0.out_layers.3.bias']
new['diffusion']['unet.encoders.11.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.11.0.in_layers.0.weight']
new['diffusion']['unet.encoders.11.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.11.0.in_layers.0.bias']
new['diffusion']['unet.encoders.11.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.11.0.in_layers.2.weight']
new['diffusion']['unet.encoders.11.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.11.0.in_layers.2.bias']
new['diffusion']['unet.encoders.11.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.11.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.11.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.11.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.11.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.11.0.out_layers.0.weight']
new['diffusion']['unet.encoders.11.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.11.0.out_layers.0.bias']
new['diffusion']['unet.encoders.11.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.11.0.out_layers.3.weight']
new['diffusion']['unet.encoders.11.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.11.0.out_layers.3.bias']
new['diffusion']['unet.bottleneck.0.groupnorm_feature.weight'] = s['model.diffusion_model.middle_block.0.in_layers.0.weight']
new['diffusion']['unet.bottleneck.0.groupnorm_feature.bias'] = s['model.diffusion_model.middle_block.0.in_layers.0.bias']
new['diffusion']['unet.bottleneck.0.conv_feature.weight'] = s['model.diffusion_model.middle_block.0.in_layers.2.weight']
new['diffusion']['unet.bottleneck.0.conv_feature.bias'] = s['model.diffusion_model.middle_block.0.in_layers.2.bias']
new['diffusion']['unet.bottleneck.0.linear_time.weight'] = s['model.diffusion_model.middle_block.0.emb_layers.1.weight']
new['diffusion']['unet.bottleneck.0.linear_time.bias'] = s['model.diffusion_model.middle_block.0.emb_layers.1.bias']
new['diffusion']['unet.bottleneck.0.groupnorm_merged.weight'] = s['model.diffusion_model.middle_block.0.out_layers.0.weight']
new['diffusion']['unet.bottleneck.0.groupnorm_merged.bias'] = s['model.diffusion_model.middle_block.0.out_layers.0.bias']
new['diffusion']['unet.bottleneck.0.conv_merged.weight'] = s['model.diffusion_model.middle_block.0.out_layers.3.weight']
new['diffusion']['unet.bottleneck.0.conv_merged.bias'] = s['model.diffusion_model.middle_block.0.out_layers.3.bias']
new['diffusion']['unet.bottleneck.1.groupnorm.weight'] = s['model.diffusion_model.middle_block.1.norm.weight']
new['diffusion']['unet.bottleneck.1.groupnorm.bias'] = s['model.diffusion_model.middle_block.1.norm.bias']
new['diffusion']['unet.bottleneck.1.conv_input.weight'] = s['model.diffusion_model.middle_block.1.proj_in.weight']
new['diffusion']['unet.bottleneck.1.conv_input.bias'] = s['model.diffusion_model.middle_block.1.proj_in.bias']
new['diffusion']['unet.bottleneck.1.attention_1.out_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.bottleneck.1.attention_1.out_proj.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.bottleneck.1.linear_geglu_1.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.bottleneck.1.linear_geglu_1.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.bottleneck.1.linear_geglu_2.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.bottleneck.1.linear_geglu_2.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.bottleneck.1.attention_2.q_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.bottleneck.1.attention_2.k_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.bottleneck.1.attention_2.v_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.bottleneck.1.attention_2.out_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.bottleneck.1.attention_2.out_proj.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.bottleneck.1.layernorm_1.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.bottleneck.1.layernorm_1.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.bottleneck.1.layernorm_2.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.bottleneck.1.layernorm_2.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.bottleneck.1.layernorm_3.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.bottleneck.1.layernorm_3.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.bottleneck.1.conv_output.weight'] = s['model.diffusion_model.middle_block.1.proj_out.weight']
new['diffusion']['unet.bottleneck.1.conv_output.bias'] = s['model.diffusion_model.middle_block.1.proj_out.bias']
new['diffusion']['unet.bottleneck.2.groupnorm_feature.weight'] = s['model.diffusion_model.middle_block.2.in_layers.0.weight']
new['diffusion']['unet.bottleneck.2.groupnorm_feature.bias'] = s['model.diffusion_model.middle_block.2.in_layers.0.bias']
new['diffusion']['unet.bottleneck.2.conv_feature.weight'] = s['model.diffusion_model.middle_block.2.in_layers.2.weight']
new['diffusion']['unet.bottleneck.2.conv_feature.bias'] = s['model.diffusion_model.middle_block.2.in_layers.2.bias']
new['diffusion']['unet.bottleneck.2.linear_time.weight'] = s['model.diffusion_model.middle_block.2.emb_layers.1.weight']
new['diffusion']['unet.bottleneck.2.linear_time.bias'] = s['model.diffusion_model.middle_block.2.emb_layers.1.bias']
new['diffusion']['unet.bottleneck.2.groupnorm_merged.weight'] = s['model.diffusion_model.middle_block.2.out_layers.0.weight']
new['diffusion']['unet.bottleneck.2.groupnorm_merged.bias'] = s['model.diffusion_model.middle_block.2.out_layers.0.bias']
new['diffusion']['unet.bottleneck.2.conv_merged.weight'] = s['model.diffusion_model.middle_block.2.out_layers.3.weight']
new['diffusion']['unet.bottleneck.2.conv_merged.bias'] = s['model.diffusion_model.middle_block.2.out_layers.3.bias']
new['diffusion']['unet.decoders.0.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.0.0.in_layers.0.weight']
new['diffusion']['unet.decoders.0.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.0.0.in_layers.0.bias']
new['diffusion']['unet.decoders.0.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.0.0.in_layers.2.weight']
new['diffusion']['unet.decoders.0.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.0.0.in_layers.2.bias']
new['diffusion']['unet.decoders.0.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.0.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.0.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.0.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.0.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.0.0.out_layers.0.weight']
new['diffusion']['unet.decoders.0.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.0.0.out_layers.0.bias']
new['diffusion']['unet.decoders.0.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.0.0.out_layers.3.weight']
new['diffusion']['unet.decoders.0.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.0.0.out_layers.3.bias']
new['diffusion']['unet.decoders.0.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.0.0.skip_connection.weight']
new['diffusion']['unet.decoders.0.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.0.0.skip_connection.bias']
new['diffusion']['unet.decoders.1.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.1.0.in_layers.0.weight']
new['diffusion']['unet.decoders.1.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.1.0.in_layers.0.bias']
new['diffusion']['unet.decoders.1.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.1.0.in_layers.2.weight']
new['diffusion']['unet.decoders.1.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.1.0.in_layers.2.bias']
new['diffusion']['unet.decoders.1.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.1.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.1.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.1.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.1.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.1.0.out_layers.0.weight']
new['diffusion']['unet.decoders.1.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.1.0.out_layers.0.bias']
new['diffusion']['unet.decoders.1.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.1.0.out_layers.3.weight']
new['diffusion']['unet.decoders.1.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.1.0.out_layers.3.bias']
new['diffusion']['unet.decoders.1.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.1.0.skip_connection.weight']
new['diffusion']['unet.decoders.1.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.1.0.skip_connection.bias']
new['diffusion']['unet.decoders.2.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.2.0.in_layers.0.weight']
new['diffusion']['unet.decoders.2.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.2.0.in_layers.0.bias']
new['diffusion']['unet.decoders.2.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.2.0.in_layers.2.weight']
new['diffusion']['unet.decoders.2.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.2.0.in_layers.2.bias']
new['diffusion']['unet.decoders.2.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.2.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.2.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.2.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.2.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.2.0.out_layers.0.weight']
new['diffusion']['unet.decoders.2.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.2.0.out_layers.0.bias']
new['diffusion']['unet.decoders.2.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.2.0.out_layers.3.weight']
new['diffusion']['unet.decoders.2.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.2.0.out_layers.3.bias']
new['diffusion']['unet.decoders.2.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.2.0.skip_connection.weight']
new['diffusion']['unet.decoders.2.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.2.0.skip_connection.bias']
new['diffusion']['unet.decoders.2.1.conv.weight'] = s['model.diffusion_model.output_blocks.2.1.conv.weight']
new['diffusion']['unet.decoders.2.1.conv.bias'] = s['model.diffusion_model.output_blocks.2.1.conv.bias']
new['diffusion']['unet.decoders.3.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.3.0.in_layers.0.weight']
new['diffusion']['unet.decoders.3.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.3.0.in_layers.0.bias']
new['diffusion']['unet.decoders.3.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.3.0.in_layers.2.weight']
new['diffusion']['unet.decoders.3.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.3.0.in_layers.2.bias']
new['diffusion']['unet.decoders.3.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.3.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.3.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.3.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.3.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.3.0.out_layers.0.weight']
new['diffusion']['unet.decoders.3.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.3.0.out_layers.0.bias']
new['diffusion']['unet.decoders.3.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.3.0.out_layers.3.weight']
new['diffusion']['unet.decoders.3.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.3.0.out_layers.3.bias']
new['diffusion']['unet.decoders.3.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.3.0.skip_connection.weight']
new['diffusion']['unet.decoders.3.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.3.0.skip_connection.bias']
new['diffusion']['unet.decoders.3.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.3.1.norm.weight']
new['diffusion']['unet.decoders.3.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.3.1.norm.bias']
new['diffusion']['unet.decoders.3.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.3.1.proj_in.weight']
new['diffusion']['unet.decoders.3.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.3.1.proj_in.bias']
new['diffusion']['unet.decoders.3.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.3.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.3.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.3.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.3.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.3.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.3.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.3.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.3.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.3.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.3.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.3.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.3.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.3.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.3.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.3.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.3.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.3.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.3.1.proj_out.weight']
new['diffusion']['unet.decoders.3.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.3.1.proj_out.bias']
new['diffusion']['unet.decoders.4.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.4.0.in_layers.0.weight']
new['diffusion']['unet.decoders.4.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.4.0.in_layers.0.bias']
new['diffusion']['unet.decoders.4.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.4.0.in_layers.2.weight']
new['diffusion']['unet.decoders.4.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.4.0.in_layers.2.bias']
new['diffusion']['unet.decoders.4.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.4.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.4.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.4.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.4.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.4.0.out_layers.0.weight']
new['diffusion']['unet.decoders.4.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.4.0.out_layers.0.bias']
new['diffusion']['unet.decoders.4.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.4.0.out_layers.3.weight']
new['diffusion']['unet.decoders.4.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.4.0.out_layers.3.bias']
new['diffusion']['unet.decoders.4.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.4.0.skip_connection.weight']
new['diffusion']['unet.decoders.4.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.4.0.skip_connection.bias']
new['diffusion']['unet.decoders.4.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.4.1.norm.weight']
new['diffusion']['unet.decoders.4.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.4.1.norm.bias']
new['diffusion']['unet.decoders.4.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.4.1.proj_in.weight']
new['diffusion']['unet.decoders.4.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.4.1.proj_in.bias']
new['diffusion']['unet.decoders.4.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.4.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.4.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.4.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.4.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.4.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.4.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.4.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.4.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.4.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.4.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.4.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.4.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.4.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.4.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.4.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.4.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.4.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.4.1.proj_out.weight']
new['diffusion']['unet.decoders.4.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.4.1.proj_out.bias']
new['diffusion']['unet.decoders.5.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.5.0.in_layers.0.weight']
new['diffusion']['unet.decoders.5.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.5.0.in_layers.0.bias']
new['diffusion']['unet.decoders.5.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.5.0.in_layers.2.weight']
new['diffusion']['unet.decoders.5.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.5.0.in_layers.2.bias']
new['diffusion']['unet.decoders.5.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.5.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.5.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.5.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.5.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.5.0.out_layers.0.weight']
new['diffusion']['unet.decoders.5.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.5.0.out_layers.0.bias']
new['diffusion']['unet.decoders.5.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.5.0.out_layers.3.weight']
new['diffusion']['unet.decoders.5.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.5.0.out_layers.3.bias']
new['diffusion']['unet.decoders.5.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.5.0.skip_connection.weight']
new['diffusion']['unet.decoders.5.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.5.0.skip_connection.bias']
new['diffusion']['unet.decoders.5.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.5.1.norm.weight']
new['diffusion']['unet.decoders.5.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.5.1.norm.bias']
new['diffusion']['unet.decoders.5.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.5.1.proj_in.weight']
new['diffusion']['unet.decoders.5.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.5.1.proj_in.bias']
new['diffusion']['unet.decoders.5.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.5.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.5.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.5.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.5.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.5.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.5.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.5.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.5.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.5.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.5.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.5.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.5.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.5.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.5.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.5.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.5.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.5.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.5.1.proj_out.weight']
new['diffusion']['unet.decoders.5.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.5.1.proj_out.bias']
new['diffusion']['unet.decoders.5.2.conv.weight'] = s['model.diffusion_model.output_blocks.5.2.conv.weight']
new['diffusion']['unet.decoders.5.2.conv.bias'] = s['model.diffusion_model.output_blocks.5.2.conv.bias']
new['diffusion']['unet.decoders.6.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.6.0.in_layers.0.weight']
new['diffusion']['unet.decoders.6.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.6.0.in_layers.0.bias']
new['diffusion']['unet.decoders.6.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.6.0.in_layers.2.weight']
new['diffusion']['unet.decoders.6.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.6.0.in_layers.2.bias']
new['diffusion']['unet.decoders.6.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.6.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.6.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.6.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.6.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.6.0.out_layers.0.weight']
new['diffusion']['unet.decoders.6.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.6.0.out_layers.0.bias']
new['diffusion']['unet.decoders.6.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.6.0.out_layers.3.weight']
new['diffusion']['unet.decoders.6.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.6.0.out_layers.3.bias']
new['diffusion']['unet.decoders.6.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.6.0.skip_connection.weight']
new['diffusion']['unet.decoders.6.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.6.0.skip_connection.bias']
new['diffusion']['unet.decoders.6.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.6.1.norm.weight']
new['diffusion']['unet.decoders.6.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.6.1.norm.bias']
new['diffusion']['unet.decoders.6.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.6.1.proj_in.weight']
new['diffusion']['unet.decoders.6.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.6.1.proj_in.bias']
new['diffusion']['unet.decoders.6.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.6.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.6.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.6.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.6.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.6.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.6.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.6.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.6.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.6.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.6.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.6.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.6.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.6.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.6.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.6.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.6.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.6.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.6.1.proj_out.weight']
new['diffusion']['unet.decoders.6.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.6.1.proj_out.bias']
new['diffusion']['unet.decoders.7.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.7.0.in_layers.0.weight']
new['diffusion']['unet.decoders.7.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.7.0.in_layers.0.bias']
new['diffusion']['unet.decoders.7.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.7.0.in_layers.2.weight']
new['diffusion']['unet.decoders.7.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.7.0.in_layers.2.bias']
new['diffusion']['unet.decoders.7.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.7.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.7.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.7.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.7.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.7.0.out_layers.0.weight']
new['diffusion']['unet.decoders.7.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.7.0.out_layers.0.bias']
new['diffusion']['unet.decoders.7.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.7.0.out_layers.3.weight']
new['diffusion']['unet.decoders.7.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.7.0.out_layers.3.bias']
new['diffusion']['unet.decoders.7.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.7.0.skip_connection.weight']
new['diffusion']['unet.decoders.7.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.7.0.skip_connection.bias']
new['diffusion']['unet.decoders.7.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.7.1.norm.weight']
new['diffusion']['unet.decoders.7.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.7.1.norm.bias']
new['diffusion']['unet.decoders.7.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.7.1.proj_in.weight']
new['diffusion']['unet.decoders.7.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.7.1.proj_in.bias']
new['diffusion']['unet.decoders.7.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.7.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.7.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.7.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.7.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.7.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.7.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.7.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.7.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.7.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.7.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.7.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.7.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.7.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.7.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.7.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.7.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.7.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.7.1.proj_out.weight']
new['diffusion']['unet.decoders.7.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.7.1.proj_out.bias']
new['diffusion']['unet.decoders.8.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.8.0.in_layers.0.weight']
new['diffusion']['unet.decoders.8.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.8.0.in_layers.0.bias']
new['diffusion']['unet.decoders.8.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.8.0.in_layers.2.weight']
new['diffusion']['unet.decoders.8.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.8.0.in_layers.2.bias']
new['diffusion']['unet.decoders.8.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.8.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.8.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.8.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.8.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.8.0.out_layers.0.weight']
new['diffusion']['unet.decoders.8.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.8.0.out_layers.0.bias']
new['diffusion']['unet.decoders.8.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.8.0.out_layers.3.weight']
new['diffusion']['unet.decoders.8.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.8.0.out_layers.3.bias']
new['diffusion']['unet.decoders.8.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.8.0.skip_connection.weight']
new['diffusion']['unet.decoders.8.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.8.0.skip_connection.bias']
new['diffusion']['unet.decoders.8.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.8.1.norm.weight']
new['diffusion']['unet.decoders.8.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.8.1.norm.bias']
new['diffusion']['unet.decoders.8.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.8.1.proj_in.weight']
new['diffusion']['unet.decoders.8.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.8.1.proj_in.bias']
new['diffusion']['unet.decoders.8.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.8.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.8.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.8.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.8.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.8.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.8.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.8.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.8.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.8.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.8.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.8.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.8.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.8.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.8.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.8.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.8.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.8.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.8.1.proj_out.weight']
new['diffusion']['unet.decoders.8.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.8.1.proj_out.bias']
new['diffusion']['unet.decoders.8.2.conv.weight'] = s['model.diffusion_model.output_blocks.8.2.conv.weight']
new['diffusion']['unet.decoders.8.2.conv.bias'] = s['model.diffusion_model.output_blocks.8.2.conv.bias']
new['diffusion']['unet.decoders.9.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.9.0.in_layers.0.weight']
new['diffusion']['unet.decoders.9.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.9.0.in_layers.0.bias']
new['diffusion']['unet.decoders.9.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.9.0.in_layers.2.weight']
new['diffusion']['unet.decoders.9.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.9.0.in_layers.2.bias']
new['diffusion']['unet.decoders.9.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.9.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.9.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.9.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.9.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.9.0.out_layers.0.weight']
new['diffusion']['unet.decoders.9.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.9.0.out_layers.0.bias']
new['diffusion']['unet.decoders.9.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.9.0.out_layers.3.weight']
new['diffusion']['unet.decoders.9.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.9.0.out_layers.3.bias']
new['diffusion']['unet.decoders.9.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.9.0.skip_connection.weight']
new['diffusion']['unet.decoders.9.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.9.0.skip_connection.bias']
new['diffusion']['unet.decoders.9.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.9.1.norm.weight']
new['diffusion']['unet.decoders.9.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.9.1.norm.bias']
new['diffusion']['unet.decoders.9.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.9.1.proj_in.weight']
new['diffusion']['unet.decoders.9.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.9.1.proj_in.bias']
new['diffusion']['unet.decoders.9.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.9.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.9.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.9.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.9.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.9.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.9.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.9.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.9.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.9.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.9.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.9.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.9.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.9.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.9.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.9.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.9.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.9.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.9.1.proj_out.weight']
new['diffusion']['unet.decoders.9.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.9.1.proj_out.bias']
new['diffusion']['unet.decoders.10.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.10.0.in_layers.0.weight']
new['diffusion']['unet.decoders.10.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.10.0.in_layers.0.bias']
new['diffusion']['unet.decoders.10.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.10.0.in_layers.2.weight']
new['diffusion']['unet.decoders.10.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.10.0.in_layers.2.bias']
new['diffusion']['unet.decoders.10.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.10.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.10.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.10.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.10.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.10.0.out_layers.0.weight']
new['diffusion']['unet.decoders.10.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.10.0.out_layers.0.bias']
new['diffusion']['unet.decoders.10.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.10.0.out_layers.3.weight']
new['diffusion']['unet.decoders.10.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.10.0.out_layers.3.bias']
new['diffusion']['unet.decoders.10.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.10.0.skip_connection.weight']
new['diffusion']['unet.decoders.10.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.10.0.skip_connection.bias']
new['diffusion']['unet.decoders.10.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.10.1.norm.weight']
new['diffusion']['unet.decoders.10.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.10.1.norm.bias']
new['diffusion']['unet.decoders.10.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.10.1.proj_in.weight']
new['diffusion']['unet.decoders.10.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.10.1.proj_in.bias']
new['diffusion']['unet.decoders.10.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.10.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.10.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.10.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.10.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.10.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.10.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.10.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.10.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.10.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.10.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.10.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.10.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.10.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.10.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.10.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.10.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.10.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.10.1.proj_out.weight']
new['diffusion']['unet.decoders.10.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.10.1.proj_out.bias']
new['diffusion']['unet.decoders.11.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.11.0.in_layers.0.weight']
new['diffusion']['unet.decoders.11.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.11.0.in_layers.0.bias']
new['diffusion']['unet.decoders.11.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.11.0.in_layers.2.weight']
new['diffusion']['unet.decoders.11.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.11.0.in_layers.2.bias']
new['diffusion']['unet.decoders.11.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.11.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.11.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.11.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.11.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.11.0.out_layers.0.weight']
new['diffusion']['unet.decoders.11.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.11.0.out_layers.0.bias']
new['diffusion']['unet.decoders.11.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.11.0.out_layers.3.weight']
new['diffusion']['unet.decoders.11.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.11.0.out_layers.3.bias']
new['diffusion']['unet.decoders.11.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.11.0.skip_connection.weight']
new['diffusion']['unet.decoders.11.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.11.0.skip_connection.bias']
new['diffusion']['unet.decoders.11.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.11.1.norm.weight']
new['diffusion']['unet.decoders.11.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.11.1.norm.bias']
new['diffusion']['unet.decoders.11.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.11.1.proj_in.weight']
new['diffusion']['unet.decoders.11.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.11.1.proj_in.bias']
new['diffusion']['unet.decoders.11.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.11.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.11.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.11.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.11.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.11.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.11.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.11.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.11.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.11.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.11.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.11.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.11.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.11.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.11.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.11.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.11.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.11.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.11.1.proj_out.weight']
new['diffusion']['unet.decoders.11.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.11.1.proj_out.bias']
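# Final UNet output head: GroupNorm + conv ("out.0" / "out.2" in the checkpoint).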
new['diffusion']['final.groupnorm.weight'] = s['model.diffusion_model.out.0.weight']
new['diffusion']['final.groupnorm.bias'] = s['model.diffusion_model.out.0.bias']
new['diffusion']['final.conv.weight'] = s['model.diffusion_model.out.2.weight']
new['diffusion']['final.conv.bias'] = s['model.diffusion_model.out.2.bias']
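# VAE encoder (first_stage_model.encoder). The target indices refer to positions in the
# repo's Sequential encoder; gaps in the numbering (e.g. 16) presumably correspond to
# parameter-free layers such as SiLU activations, which have nothing to copy.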
new['encoder']['0.weight'] = s['first_stage_model.encoder.conv_in.weight']
new['encoder']['0.bias'] = s['first_stage_model.encoder.conv_in.bias']
new['encoder']['1.groupnorm_1.weight'] = s['first_stage_model.encoder.down.0.block.0.norm1.weight']
new['encoder']['1.groupnorm_1.bias'] = s['first_stage_model.encoder.down.0.block.0.norm1.bias']
new['encoder']['1.conv_1.weight'] = s['first_stage_model.encoder.down.0.block.0.conv1.weight']
new['encoder']['1.conv_1.bias'] = s['first_stage_model.encoder.down.0.block.0.conv1.bias']
new['encoder']['1.groupnorm_2.weight'] = s['first_stage_model.encoder.down.0.block.0.norm2.weight']
new['encoder']['1.groupnorm_2.bias'] = s['first_stage_model.encoder.down.0.block.0.norm2.bias']
new['encoder']['1.conv_2.weight'] = s['first_stage_model.encoder.down.0.block.0.conv2.weight']
new['encoder']['1.conv_2.bias'] = s['first_stage_model.encoder.down.0.block.0.conv2.bias']
new['encoder']['2.groupnorm_1.weight'] = s['first_stage_model.encoder.down.0.block.1.norm1.weight']
new['encoder']['2.groupnorm_1.bias'] = s['first_stage_model.encoder.down.0.block.1.norm1.bias']
new['encoder']['2.conv_1.weight'] = s['first_stage_model.encoder.down.0.block.1.conv1.weight']
new['encoder']['2.conv_1.bias'] = s['first_stage_model.encoder.down.0.block.1.conv1.bias']
new['encoder']['2.groupnorm_2.weight'] = s['first_stage_model.encoder.down.0.block.1.norm2.weight']
new['encoder']['2.groupnorm_2.bias'] = s['first_stage_model.encoder.down.0.block.1.norm2.bias']
new['encoder']['2.conv_2.weight'] = s['first_stage_model.encoder.down.0.block.1.conv2.weight']
new['encoder']['2.conv_2.bias'] = s['first_stage_model.encoder.down.0.block.1.conv2.bias']
new['encoder']['3.weight'] = s['first_stage_model.encoder.down.0.downsample.conv.weight']
new['encoder']['3.bias'] = s['first_stage_model.encoder.down.0.downsample.conv.bias']
new['encoder']['4.groupnorm_1.weight'] = s['first_stage_model.encoder.down.1.block.0.norm1.weight']
new['encoder']['4.groupnorm_1.bias'] = s['first_stage_model.encoder.down.1.block.0.norm1.bias']
new['encoder']['4.conv_1.weight'] = s['first_stage_model.encoder.down.1.block.0.conv1.weight']
new['encoder']['4.conv_1.bias'] = s['first_stage_model.encoder.down.1.block.0.conv1.bias']
new['encoder']['4.groupnorm_2.weight'] = s['first_stage_model.encoder.down.1.block.0.norm2.weight']
new['encoder']['4.groupnorm_2.bias'] = s['first_stage_model.encoder.down.1.block.0.norm2.bias']
new['encoder']['4.conv_2.weight'] = s['first_stage_model.encoder.down.1.block.0.conv2.weight']
new['encoder']['4.conv_2.bias'] = s['first_stage_model.encoder.down.1.block.0.conv2.bias']
new['encoder']['4.residual_layer.weight'] = s['first_stage_model.encoder.down.1.block.0.nin_shortcut.weight']
new['encoder']['4.residual_layer.bias'] = s['first_stage_model.encoder.down.1.block.0.nin_shortcut.bias']
new['encoder']['5.groupnorm_1.weight'] = s['first_stage_model.encoder.down.1.block.1.norm1.weight']
new['encoder']['5.groupnorm_1.bias'] = s['first_stage_model.encoder.down.1.block.1.norm1.bias']
new['encoder']['5.conv_1.weight'] = s['first_stage_model.encoder.down.1.block.1.conv1.weight']
new['encoder']['5.conv_1.bias'] = s['first_stage_model.encoder.down.1.block.1.conv1.bias']
new['encoder']['5.groupnorm_2.weight'] = s['first_stage_model.encoder.down.1.block.1.norm2.weight']
new['encoder']['5.groupnorm_2.bias'] = s['first_stage_model.encoder.down.1.block.1.norm2.bias']
new['encoder']['5.conv_2.weight'] = s['first_stage_model.encoder.down.1.block.1.conv2.weight']
new['encoder']['5.conv_2.bias'] = s['first_stage_model.encoder.down.1.block.1.conv2.bias']
new['encoder']['6.weight'] = s['first_stage_model.encoder.down.1.downsample.conv.weight']
new['encoder']['6.bias'] = s['first_stage_model.encoder.down.1.downsample.conv.bias']
new['encoder']['7.groupnorm_1.weight'] = s['first_stage_model.encoder.down.2.block.0.norm1.weight']
new['encoder']['7.groupnorm_1.bias'] = s['first_stage_model.encoder.down.2.block.0.norm1.bias']
new['encoder']['7.conv_1.weight'] = s['first_stage_model.encoder.down.2.block.0.conv1.weight']
new['encoder']['7.conv_1.bias'] = s['first_stage_model.encoder.down.2.block.0.conv1.bias']
new['encoder']['7.groupnorm_2.weight'] = s['first_stage_model.encoder.down.2.block.0.norm2.weight']
new['encoder']['7.groupnorm_2.bias'] = s['first_stage_model.encoder.down.2.block.0.norm2.bias']
new['encoder']['7.conv_2.weight'] = s['first_stage_model.encoder.down.2.block.0.conv2.weight']
new['encoder']['7.conv_2.bias'] = s['first_stage_model.encoder.down.2.block.0.conv2.bias']
new['encoder']['7.residual_layer.weight'] = s['first_stage_model.encoder.down.2.block.0.nin_shortcut.weight']
new['encoder']['7.residual_layer.bias'] = s['first_stage_model.encoder.down.2.block.0.nin_shortcut.bias']
new['encoder']['8.groupnorm_1.weight'] = s['first_stage_model.encoder.down.2.block.1.norm1.weight']
new['encoder']['8.groupnorm_1.bias'] = s['first_stage_model.encoder.down.2.block.1.norm1.bias']
new['encoder']['8.conv_1.weight'] = s['first_stage_model.encoder.down.2.block.1.conv1.weight']
new['encoder']['8.conv_1.bias'] = s['first_stage_model.encoder.down.2.block.1.conv1.bias']
new['encoder']['8.groupnorm_2.weight'] = s['first_stage_model.encoder.down.2.block.1.norm2.weight']
new['encoder']['8.groupnorm_2.bias'] = s['first_stage_model.encoder.down.2.block.1.norm2.bias']
new['encoder']['8.conv_2.weight'] = s['first_stage_model.encoder.down.2.block.1.conv2.weight']
new['encoder']['8.conv_2.bias'] = s['first_stage_model.encoder.down.2.block.1.conv2.bias']
new['encoder']['9.weight'] = s['first_stage_model.encoder.down.2.downsample.conv.weight']
new['encoder']['9.bias'] = s['first_stage_model.encoder.down.2.downsample.conv.bias']
new['encoder']['10.groupnorm_1.weight'] = s['first_stage_model.encoder.down.3.block.0.norm1.weight']
new['encoder']['10.groupnorm_1.bias'] = s['first_stage_model.encoder.down.3.block.0.norm1.bias']
new['encoder']['10.conv_1.weight'] = s['first_stage_model.encoder.down.3.block.0.conv1.weight']
new['encoder']['10.conv_1.bias'] = s['first_stage_model.encoder.down.3.block.0.conv1.bias']
new['encoder']['10.groupnorm_2.weight'] = s['first_stage_model.encoder.down.3.block.0.norm2.weight']
new['encoder']['10.groupnorm_2.bias'] = s['first_stage_model.encoder.down.3.block.0.norm2.bias']
new['encoder']['10.conv_2.weight'] = s['first_stage_model.encoder.down.3.block.0.conv2.weight']
new['encoder']['10.conv_2.bias'] = s['first_stage_model.encoder.down.3.block.0.conv2.bias']
new['encoder']['11.groupnorm_1.weight'] = s['first_stage_model.encoder.down.3.block.1.norm1.weight']
new['encoder']['11.groupnorm_1.bias'] = s['first_stage_model.encoder.down.3.block.1.norm1.bias']
new['encoder']['11.conv_1.weight'] = s['first_stage_model.encoder.down.3.block.1.conv1.weight']
new['encoder']['11.conv_1.bias'] = s['first_stage_model.encoder.down.3.block.1.conv1.bias']
new['encoder']['11.groupnorm_2.weight'] = s['first_stage_model.encoder.down.3.block.1.norm2.weight']
new['encoder']['11.groupnorm_2.bias'] = s['first_stage_model.encoder.down.3.block.1.norm2.bias']
new['encoder']['11.conv_2.weight'] = s['first_stage_model.encoder.down.3.block.1.conv2.weight']
new['encoder']['11.conv_2.bias'] = s['first_stage_model.encoder.down.3.block.1.conv2.bias']
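# Encoder mid block: ResBlock (index 12), attention (index 13), ResBlock (index 14).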
new['encoder']['12.groupnorm_1.weight'] = s['first_stage_model.encoder.mid.block_1.norm1.weight']
new['encoder']['12.groupnorm_1.bias'] = s['first_stage_model.encoder.mid.block_1.norm1.bias']
new['encoder']['12.conv_1.weight'] = s['first_stage_model.encoder.mid.block_1.conv1.weight']
new['encoder']['12.conv_1.bias'] = s['first_stage_model.encoder.mid.block_1.conv1.bias']
new['encoder']['12.groupnorm_2.weight'] = s['first_stage_model.encoder.mid.block_1.norm2.weight']
new['encoder']['12.groupnorm_2.bias'] = s['first_stage_model.encoder.mid.block_1.norm2.bias']
new['encoder']['12.conv_2.weight'] = s['first_stage_model.encoder.mid.block_1.conv2.weight']
new['encoder']['12.conv_2.bias'] = s['first_stage_model.encoder.mid.block_1.conv2.bias']
new['encoder']['13.groupnorm.weight'] = s['first_stage_model.encoder.mid.attn_1.norm.weight']
new['encoder']['13.groupnorm.bias'] = s['first_stage_model.encoder.mid.attn_1.norm.bias']
new['encoder']['13.attention.out_proj.bias'] = s['first_stage_model.encoder.mid.attn_1.proj_out.bias']
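# Only the groupnorm and attention out_proj bias of encoder index 13 are mapped above;
# the q/k/v projections and out_proj weight of the VAE mid-block attention are stored
# as 1x1 convs in the checkpoint and are presumably reshaped/fused elsewhere in the
# full script.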
new['encoder']['14.groupnorm_1.weight'] = s['first_stage_model.encoder.mid.block_2.norm1.weight']
new['encoder']['14.groupnorm_1.bias'] = s['first_stage_model.encoder.mid.block_2.norm1.bias']
new['encoder']['14.conv_1.weight'] = s['first_stage_model.encoder.mid.block_2.conv1.weight']
new['encoder']['14.conv_1.bias'] = s['first_stage_model.encoder.mid.block_2.conv1.bias']
new['encoder']['14.groupnorm_2.weight'] = s['first_stage_model.encoder.mid.block_2.norm2.weight']
new['encoder']['14.groupnorm_2.bias'] = s['first_stage_model.encoder.mid.block_2.norm2.bias']
new['encoder']['14.conv_2.weight'] = s['first_stage_model.encoder.mid.block_2.conv2.weight']
new['encoder']['14.conv_2.bias'] = s['first_stage_model.encoder.mid.block_2.conv2.bias']
new['encoder']['15.weight'] = s['first_stage_model.encoder.norm_out.weight']
new['encoder']['15.bias'] = s['first_stage_model.encoder.norm_out.bias']
new['encoder']['17.weight'] = s['first_stage_model.encoder.conv_out.weight']
new['encoder']['17.bias'] = s['first_stage_model.encoder.conv_out.bias']
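# VAE decoder (first_stage_model.decoder). Encoder index 18 (quant_conv) and decoder
# index 0 (post_quant_conv) are filled in near the end of this mapping.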
new['decoder']['1.weight'] = s['first_stage_model.decoder.conv_in.weight']
new['decoder']['1.bias'] = s['first_stage_model.decoder.conv_in.bias']
new['decoder']['2.groupnorm_1.weight'] = s['first_stage_model.decoder.mid.block_1.norm1.weight']
new['decoder']['2.groupnorm_1.bias'] = s['first_stage_model.decoder.mid.block_1.norm1.bias']
new['decoder']['2.conv_1.weight'] = s['first_stage_model.decoder.mid.block_1.conv1.weight']
new['decoder']['2.conv_1.bias'] = s['first_stage_model.decoder.mid.block_1.conv1.bias']
new['decoder']['2.groupnorm_2.weight'] = s['first_stage_model.decoder.mid.block_1.norm2.weight']
new['decoder']['2.groupnorm_2.bias'] = s['first_stage_model.decoder.mid.block_1.norm2.bias']
new['decoder']['2.conv_2.weight'] = s['first_stage_model.decoder.mid.block_1.conv2.weight']
new['decoder']['2.conv_2.bias'] = s['first_stage_model.decoder.mid.block_1.conv2.bias']
new['decoder']['3.groupnorm.weight'] = s['first_stage_model.decoder.mid.attn_1.norm.weight']
new['decoder']['3.groupnorm.bias'] = s['first_stage_model.decoder.mid.attn_1.norm.bias']
new['decoder']['3.attention.out_proj.bias'] = s['first_stage_model.decoder.mid.attn_1.proj_out.bias']
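# As with the encoder mid block, the remaining attention weights for decoder index 3
# (q/k/v and out_proj weight) are presumably handled elsewhere in the full script.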
new['decoder']['4.groupnorm_1.weight'] = s['first_stage_model.decoder.mid.block_2.norm1.weight']
new['decoder']['4.groupnorm_1.bias'] = s['first_stage_model.decoder.mid.block_2.norm1.bias']
new['decoder']['4.conv_1.weight'] = s['first_stage_model.decoder.mid.block_2.conv1.weight']
new['decoder']['4.conv_1.bias'] = s['first_stage_model.decoder.mid.block_2.conv1.bias']
new['decoder']['4.groupnorm_2.weight'] = s['first_stage_model.decoder.mid.block_2.norm2.weight']
new['decoder']['4.groupnorm_2.bias'] = s['first_stage_model.decoder.mid.block_2.norm2.bias']
new['decoder']['4.conv_2.weight'] = s['first_stage_model.decoder.mid.block_2.conv2.weight']
new['decoder']['4.conv_2.bias'] = s['first_stage_model.decoder.mid.block_2.conv2.bias']
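# Decoder up blocks. The checkpoint stores them as up.0 ... up.3, but the decoder
# applies them from up.3 down to up.0, hence the non-monotonic target indices
# (up.0 -> 20-22, up.1 -> 15-17, up.2 -> 10-12, up.3 -> 5-7).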
new['decoder']['20.groupnorm_1.weight'] = s['first_stage_model.decoder.up.0.block.0.norm1.weight']
new['decoder']['20.groupnorm_1.bias'] = s['first_stage_model.decoder.up.0.block.0.norm1.bias']
new['decoder']['20.conv_1.weight'] = s['first_stage_model.decoder.up.0.block.0.conv1.weight']
new['decoder']['20.conv_1.bias'] = s['first_stage_model.decoder.up.0.block.0.conv1.bias']
new['decoder']['20.groupnorm_2.weight'] = s['first_stage_model.decoder.up.0.block.0.norm2.weight']
new['decoder']['20.groupnorm_2.bias'] = s['first_stage_model.decoder.up.0.block.0.norm2.bias']
new['decoder']['20.conv_2.weight'] = s['first_stage_model.decoder.up.0.block.0.conv2.weight']
new['decoder']['20.conv_2.bias'] = s['first_stage_model.decoder.up.0.block.0.conv2.bias']
new['decoder']['20.residual_layer.weight'] = s['first_stage_model.decoder.up.0.block.0.nin_shortcut.weight']
new['decoder']['20.residual_layer.bias'] = s['first_stage_model.decoder.up.0.block.0.nin_shortcut.bias']
new['decoder']['21.groupnorm_1.weight'] = s['first_stage_model.decoder.up.0.block.1.norm1.weight']
new['decoder']['21.groupnorm_1.bias'] = s['first_stage_model.decoder.up.0.block.1.norm1.bias']
new['decoder']['21.conv_1.weight'] = s['first_stage_model.decoder.up.0.block.1.conv1.weight']
new['decoder']['21.conv_1.bias'] = s['first_stage_model.decoder.up.0.block.1.conv1.bias']
new['decoder']['21.groupnorm_2.weight'] = s['first_stage_model.decoder.up.0.block.1.norm2.weight']
new['decoder']['21.groupnorm_2.bias'] = s['first_stage_model.decoder.up.0.block.1.norm2.bias']
new['decoder']['21.conv_2.weight'] = s['first_stage_model.decoder.up.0.block.1.conv2.weight']
new['decoder']['21.conv_2.bias'] = s['first_stage_model.decoder.up.0.block.1.conv2.bias']
new['decoder']['22.groupnorm_1.weight'] = s['first_stage_model.decoder.up.0.block.2.norm1.weight']
new['decoder']['22.groupnorm_1.bias'] = s['first_stage_model.decoder.up.0.block.2.norm1.bias']
new['decoder']['22.conv_1.weight'] = s['first_stage_model.decoder.up.0.block.2.conv1.weight']
new['decoder']['22.conv_1.bias'] = s['first_stage_model.decoder.up.0.block.2.conv1.bias']
new['decoder']['22.groupnorm_2.weight'] = s['first_stage_model.decoder.up.0.block.2.norm2.weight']
new['decoder']['22.groupnorm_2.bias'] = s['first_stage_model.decoder.up.0.block.2.norm2.bias']
new['decoder']['22.conv_2.weight'] = s['first_stage_model.decoder.up.0.block.2.conv2.weight']
new['decoder']['22.conv_2.bias'] = s['first_stage_model.decoder.up.0.block.2.conv2.bias']
new['decoder']['15.groupnorm_1.weight'] = s['first_stage_model.decoder.up.1.block.0.norm1.weight']
new['decoder']['15.groupnorm_1.bias'] = s['first_stage_model.decoder.up.1.block.0.norm1.bias']
new['decoder']['15.conv_1.weight'] = s['first_stage_model.decoder.up.1.block.0.conv1.weight']
new['decoder']['15.conv_1.bias'] = s['first_stage_model.decoder.up.1.block.0.conv1.bias']
new['decoder']['15.groupnorm_2.weight'] = s['first_stage_model.decoder.up.1.block.0.norm2.weight']
new['decoder']['15.groupnorm_2.bias'] = s['first_stage_model.decoder.up.1.block.0.norm2.bias']
new['decoder']['15.conv_2.weight'] = s['first_stage_model.decoder.up.1.block.0.conv2.weight']
new['decoder']['15.conv_2.bias'] = s['first_stage_model.decoder.up.1.block.0.conv2.bias']
new['decoder']['15.residual_layer.weight'] = s['first_stage_model.decoder.up.1.block.0.nin_shortcut.weight']
new['decoder']['15.residual_layer.bias'] = s['first_stage_model.decoder.up.1.block.0.nin_shortcut.bias']
new['decoder']['16.groupnorm_1.weight'] = s['first_stage_model.decoder.up.1.block.1.norm1.weight']
new['decoder']['16.groupnorm_1.bias'] = s['first_stage_model.decoder.up.1.block.1.norm1.bias']
new['decoder']['16.conv_1.weight'] = s['first_stage_model.decoder.up.1.block.1.conv1.weight']
new['decoder']['16.conv_1.bias'] = s['first_stage_model.decoder.up.1.block.1.conv1.bias']
new['decoder']['16.groupnorm_2.weight'] = s['first_stage_model.decoder.up.1.block.1.norm2.weight']
new['decoder']['16.groupnorm_2.bias'] = s['first_stage_model.decoder.up.1.block.1.norm2.bias']
new['decoder']['16.conv_2.weight'] = s['first_stage_model.decoder.up.1.block.1.conv2.weight']
new['decoder']['16.conv_2.bias'] = s['first_stage_model.decoder.up.1.block.1.conv2.bias']
new['decoder']['17.groupnorm_1.weight'] = s['first_stage_model.decoder.up.1.block.2.norm1.weight']
new['decoder']['17.groupnorm_1.bias'] = s['first_stage_model.decoder.up.1.block.2.norm1.bias']
new['decoder']['17.conv_1.weight'] = s['first_stage_model.decoder.up.1.block.2.conv1.weight']
new['decoder']['17.conv_1.bias'] = s['first_stage_model.decoder.up.1.block.2.conv1.bias']
new['decoder']['17.groupnorm_2.weight'] = s['first_stage_model.decoder.up.1.block.2.norm2.weight']
new['decoder']['17.groupnorm_2.bias'] = s['first_stage_model.decoder.up.1.block.2.norm2.bias']
new['decoder']['17.conv_2.weight'] = s['first_stage_model.decoder.up.1.block.2.conv2.weight']
new['decoder']['17.conv_2.bias'] = s['first_stage_model.decoder.up.1.block.2.conv2.bias']
new['decoder']['19.weight'] = s['first_stage_model.decoder.up.1.upsample.conv.weight']
new['decoder']['19.bias'] = s['first_stage_model.decoder.up.1.upsample.conv.bias']
new['decoder']['10.groupnorm_1.weight'] = s['first_stage_model.decoder.up.2.block.0.norm1.weight']
new['decoder']['10.groupnorm_1.bias'] = s['first_stage_model.decoder.up.2.block.0.norm1.bias']
new['decoder']['10.conv_1.weight'] = s['first_stage_model.decoder.up.2.block.0.conv1.weight']
new['decoder']['10.conv_1.bias'] = s['first_stage_model.decoder.up.2.block.0.conv1.bias']
new['decoder']['10.groupnorm_2.weight'] = s['first_stage_model.decoder.up.2.block.0.norm2.weight']
new['decoder']['10.groupnorm_2.bias'] = s['first_stage_model.decoder.up.2.block.0.norm2.bias']
new['decoder']['10.conv_2.weight'] = s['first_stage_model.decoder.up.2.block.0.conv2.weight']
new['decoder']['10.conv_2.bias'] = s['first_stage_model.decoder.up.2.block.0.conv2.bias']
new['decoder']['11.groupnorm_1.weight'] = s['first_stage_model.decoder.up.2.block.1.norm1.weight']
new['decoder']['11.groupnorm_1.bias'] = s['first_stage_model.decoder.up.2.block.1.norm1.bias']
new['decoder']['11.conv_1.weight'] = s['first_stage_model.decoder.up.2.block.1.conv1.weight']
new['decoder']['11.conv_1.bias'] = s['first_stage_model.decoder.up.2.block.1.conv1.bias']
new['decoder']['11.groupnorm_2.weight'] = s['first_stage_model.decoder.up.2.block.1.norm2.weight']
new['decoder']['11.groupnorm_2.bias'] = s['first_stage_model.decoder.up.2.block.1.norm2.bias']
new['decoder']['11.conv_2.weight'] = s['first_stage_model.decoder.up.2.block.1.conv2.weight']
new['decoder']['11.conv_2.bias'] = s['first_stage_model.decoder.up.2.block.1.conv2.bias']
new['decoder']['12.groupnorm_1.weight'] = s['first_stage_model.decoder.up.2.block.2.norm1.weight']
new['decoder']['12.groupnorm_1.bias'] = s['first_stage_model.decoder.up.2.block.2.norm1.bias']
new['decoder']['12.conv_1.weight'] = s['first_stage_model.decoder.up.2.block.2.conv1.weight']
new['decoder']['12.conv_1.bias'] = s['first_stage_model.decoder.up.2.block.2.conv1.bias']
new['decoder']['12.groupnorm_2.weight'] = s['first_stage_model.decoder.up.2.block.2.norm2.weight']
new['decoder']['12.groupnorm_2.bias'] = s['first_stage_model.decoder.up.2.block.2.norm2.bias']
new['decoder']['12.conv_2.weight'] = s['first_stage_model.decoder.up.2.block.2.conv2.weight']
new['decoder']['12.conv_2.bias'] = s['first_stage_model.decoder.up.2.block.2.conv2.bias']
new['decoder']['14.weight'] = s['first_stage_model.decoder.up.2.upsample.conv.weight']
new['decoder']['14.bias'] = s['first_stage_model.decoder.up.2.upsample.conv.bias']
new['decoder']['5.groupnorm_1.weight'] = s['first_stage_model.decoder.up.3.block.0.norm1.weight']
new['decoder']['5.groupnorm_1.bias'] = s['first_stage_model.decoder.up.3.block.0.norm1.bias']
new['decoder']['5.conv_1.weight'] = s['first_stage_model.decoder.up.3.block.0.conv1.weight']
new['decoder']['5.conv_1.bias'] = s['first_stage_model.decoder.up.3.block.0.conv1.bias']
new['decoder']['5.groupnorm_2.weight'] = s['first_stage_model.decoder.up.3.block.0.norm2.weight']
new['decoder']['5.groupnorm_2.bias'] = s['first_stage_model.decoder.up.3.block.0.norm2.bias']
new['decoder']['5.conv_2.weight'] = s['first_stage_model.decoder.up.3.block.0.conv2.weight']
new['decoder']['5.conv_2.bias'] = s['first_stage_model.decoder.up.3.block.0.conv2.bias']
new['decoder']['6.groupnorm_1.weight'] = s['first_stage_model.decoder.up.3.block.1.norm1.weight']
new['decoder']['6.groupnorm_1.bias'] = s['first_stage_model.decoder.up.3.block.1.norm1.bias']
new['decoder']['6.conv_1.weight'] = s['first_stage_model.decoder.up.3.block.1.conv1.weight']
new['decoder']['6.conv_1.bias'] = s['first_stage_model.decoder.up.3.block.1.conv1.bias']
new['decoder']['6.groupnorm_2.weight'] = s['first_stage_model.decoder.up.3.block.1.norm2.weight']
new['decoder']['6.groupnorm_2.bias'] = s['first_stage_model.decoder.up.3.block.1.norm2.bias']
new['decoder']['6.conv_2.weight'] = s['first_stage_model.decoder.up.3.block.1.conv2.weight']
new['decoder']['6.conv_2.bias'] = s['first_stage_model.decoder.up.3.block.1.conv2.bias']
new['decoder']['7.groupnorm_1.weight'] = s['first_stage_model.decoder.up.3.block.2.norm1.weight']
new['decoder']['7.groupnorm_1.bias'] = s['first_stage_model.decoder.up.3.block.2.norm1.bias']
new['decoder']['7.conv_1.weight'] = s['first_stage_model.decoder.up.3.block.2.conv1.weight']
new['decoder']['7.conv_1.bias'] = s['first_stage_model.decoder.up.3.block.2.conv1.bias']
new['decoder']['7.groupnorm_2.weight'] = s['first_stage_model.decoder.up.3.block.2.norm2.weight']
new['decoder']['7.groupnorm_2.bias'] = s['first_stage_model.decoder.up.3.block.2.norm2.bias']
new['decoder']['7.conv_2.weight'] = s['first_stage_model.decoder.up.3.block.2.conv2.weight']
new['decoder']['7.conv_2.bias'] = s['first_stage_model.decoder.up.3.block.2.conv2.bias']
new['decoder']['9.weight'] = s['first_stage_model.decoder.up.3.upsample.conv.weight']
new['decoder']['9.bias'] = s['first_stage_model.decoder.up.3.upsample.conv.bias']
new['decoder']['23.weight'] = s['first_stage_model.decoder.norm_out.weight']
new['decoder']['23.bias'] = s['first_stage_model.decoder.norm_out.bias']
new['decoder']['25.weight'] = s['first_stage_model.decoder.conv_out.weight']
new['decoder']['25.bias'] = s['first_stage_model.decoder.conv_out.bias']
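# Quantization convs: quant_conv sits at the end of the encoder Sequential (index 18)
# and post_quant_conv is the first layer of the decoder Sequential (index 0).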
new['encoder']['18.weight'] = s['first_stage_model.quant_conv.weight']
new['encoder']['18.bias'] = s['first_stage_model.quant_conv.bias']
new['decoder']['0.weight'] = s['first_stage_model.post_quant_conv.weight']
new['decoder']['0.bias'] = s['first_stage_model.post_quant_conv.bias']
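# CLIP text encoder (cond_stage_model.transformer.text_model). Layer norms, MLPs and
# attention out_proj are mapped one-to-one; the per-layer q/k/v projections are
# presumably concatenated into a fused in_proj elsewhere in the full script, which is
# why they do not appear below.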
new['clip']['embedding.token_embedding.weight'] = s['cond_stage_model.transformer.text_model.embeddings.token_embedding.weight']
new['clip']['embedding.position_value'] = s['cond_stage_model.transformer.text_model.embeddings.position_embedding.weight']
new['clip']['layers.0.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.out_proj.weight']
new['clip']['layers.0.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.out_proj.bias']
new['clip']['layers.0.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm1.weight']
new['clip']['layers.0.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm1.bias']
new['clip']['layers.0.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc1.weight']
new['clip']['layers.0.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc1.bias']
new['clip']['layers.0.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc2.weight']
new['clip']['layers.0.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc2.bias']
new['clip']['layers.0.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm2.weight']
new['clip']['layers.0.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm2.bias']
new['clip']['layers.1.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.out_proj.weight']
new['clip']['layers.1.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.out_proj.bias']
new['clip']['layers.1.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm1.weight']
new['clip']['layers.1.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm1.bias']
new['clip']['layers.1.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc1.weight']
new['clip']['layers.1.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc1.bias']
new['clip']['layers.1.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc2.weight']
new['clip']['layers.1.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc2.bias']
new['clip']['layers.1.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm2.weight']
new['clip']['layers.1.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm2.bias']
new['clip']['layers.2.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.out_proj.weight']
new['clip']['layers.2.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.out_proj.bias']
new['clip']['layers.2.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm1.weight']
new['clip']['layers.2.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm1.bias']
new['clip']['layers.2.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc1.weight']
new['clip']['layers.2.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc1.bias']
new['clip']['layers.2.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc2.weight']
new['clip']['layers.2.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc2.bias']
new['clip']['layers.2.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm2.weight']
new['clip']['layers.2.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm2.bias']
new['clip']['layers.3.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.out_proj.weight']
new['clip']['layers.3.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.out_proj.bias']
new['clip']['layers.3.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm1.weight']
new['clip']['layers.3.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm1.bias']
new['clip']['layers.3.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc1.weight']
new['clip']['layers.3.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc1.bias']
new['clip']['layers.3.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc2.weight']
new['clip']['layers.3.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc2.bias']
new['clip']['layers.3.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm2.weight']
new['clip']['layers.3.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm2.bias']
new['clip']['layers.4.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.out_proj.weight']
new['clip']['layers.4.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.out_proj.bias']
new['clip']['layers.4.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm1.weight']
new['clip']['layers.4.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm1.bias']
new['clip']['layers.4.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc1.weight']
new['clip']['layers.4.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc1.bias']
new['clip']['layers.4.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc2.weight']
new['clip']['layers.4.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc2.bias']
new['clip']['layers.4.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm2.weight']
new['clip']['layers.4.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm2.bias']
new['clip']['layers.5.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.out_proj.weight']
new['clip']['layers.5.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.out_proj.bias']
new['clip']['layers.5.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm1.weight']
new['clip']['layers.5.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm1.bias']
new['clip']['layers.5.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc1.weight']
new['clip']['layers.5.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc1.bias']
new['clip']['layers.5.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc2.weight']
new['clip']['layers.5.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc2.bias']
new['clip']['layers.5.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm2.weight']
new['clip']['layers.5.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm2.bias']
new['clip']['layers.6.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.out_proj.weight']
new['clip']['layers.6.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.out_proj.bias']
new['clip']['layers.6.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm1.weight']
new['clip']['layers.6.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm1.bias']
new['clip']['layers.6.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc1.weight']
new['clip']['layers.6.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc1.bias']
new['clip']['layers.6.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc2.weight']
new['clip']['layers.6.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc2.bias']
new['clip']['layers.6.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm2.weight']
new['clip']['layers.6.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm2.bias']
new['clip']['layers.7.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.out_proj.weight']
new['clip']['layers.7.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.out_proj.bias']
new['clip']['layers.7.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm1.weight']
new['clip']['layers.7.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm1.bias']
new['clip']['layers.7.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc1.weight']
new['clip']['layers.7.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc1.bias']
new['clip']['layers.7.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc2.weight']
new['clip']['layers.7.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc2.bias']
new['clip']['layers.7.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm2.weight']
new['clip']['layers.7.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm2.bias']
new['clip']['layers.8.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.out_proj.weight']
new['clip']['layers.8.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.out_proj.bias']
new['clip']['layers.8.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm1.weight']
new['clip']['layers.8.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm1.bias']
new['clip']['layers.8.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc1.weight']
new['clip']['layers.8.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc1.bias']
new['clip']['layers.8.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc2.weight']
new['clip']['layers.8.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc2.bias']
new['clip']['layers.8.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm2.weight']
new['clip']['layers.8.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm2.bias']
new['clip']['layers.9.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.out_proj.weight']
new['clip']['layers.9.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.out_proj.bias']
new['clip']['layers.9.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm1.weight']
new['clip']['layers.9.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm1.bias']
new['clip']['layers.9.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc1.weight']
new['clip']['layers.9.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc1.bias']
new['clip']['layers.9.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc2.weight']
new['clip']['layers.9.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc2.bias']
new['clip']['layers.9.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm2.weight']
new['clip']['layers.9.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm2.bias']
new['clip']['layers.10.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.out_proj.weight']
new['clip']['layers.10.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.out_proj.bias']
new['clip']['layers.10.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm1.weight']
new['clip']['layers.10.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm1.bias']
new['clip']['layers.10.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc1.weight']
new['clip']['layers.10.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc1.bias']
new['clip']['layers.10.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc2.weight']
new['clip']['layers.10.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc2.bias']
new['clip']['layers.10.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm2.weight']
new['clip']['layers.10.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm2.bias']
new['clip']['layers.11.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.out_proj.weight']
new['clip']['layers.11.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.out_proj.bias']
new['clip']['layers.11.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm1.weight']
new['clip']['layers.11.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm1.bias']
new['clip']['layers.11.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc1.weight']
new['clip']['layers.11.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc1.bias']
new['clip']['layers.11.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc2.weight']
new['clip']['layers.11.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc2.bias']
new['clip']['layers.11.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm2.weight']
new['clip']['layers.11.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm2.bias']
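# Final layer norm of the CLIP text encoder.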
new['clip']['layernorm.weight'] = s['cond_stage_model.transformer.text_model.final_layer_norm.weight']
new['clip']['layernorm.bias'] = s['cond_stage_model.transformer.text_model.final_layer_norm.bias']
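# UNet self-attention (attn1): the checkpoint stores separate to_q/to_k/to_v weights,
# which are concatenated along dim 0 into the single in_proj matrix this repo expects.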
new['diffusion']['unet.encoders.1.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.encoders.2.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.encoders.4.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.encoders.5.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.encoders.7.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.encoders.8.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.bottleneck.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.decoders.3.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.decoders.4.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.decoders.5.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.decoders.6.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.decoders.7.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.decoders.8.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.decoders.9.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.decoders.10.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_v.weight']), 0)
new['diffusion']['unet.decoders.11.1.attention_1.in_proj.weight'] = torch.cat((s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_q.weight'], s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_k.weight'], s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_v.weight']), 0)
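# VAE mid-block attention: q/k/v are 1x1 conv weights in the checkpoint, so they are
# concatenated and reshaped from (3*512, 512, 1, 1) to the linear (1536, 512) in_proj shape;
# proj_out is likewise reshaped to (512, 512). Biases need no reshape.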
new['encoder']['13.attention.in_proj.weight'] = torch.cat((s['first_stage_model.encoder.mid.attn_1.q.weight'], s['first_stage_model.encoder.mid.attn_1.k.weight'], s['first_stage_model.encoder.mid.attn_1.v.weight']), 0).reshape((1536, 512))
new['encoder']['13.attention.in_proj.bias'] = torch.cat((s['first_stage_model.encoder.mid.attn_1.q.bias'], s['first_stage_model.encoder.mid.attn_1.k.bias'], s['first_stage_model.encoder.mid.attn_1.v.bias']), 0)
new['encoder']['13.attention.out_proj.weight'] = s['first_stage_model.encoder.mid.attn_1.proj_out.weight'].reshape((512, 512))
new['decoder']['3.attention.in_proj.weight'] = torch.cat((s['first_stage_model.decoder.mid.attn_1.q.weight'], s['first_stage_model.decoder.mid.attn_1.k.weight'], s['first_stage_model.decoder.mid.attn_1.v.weight']), 0).reshape((1536, 512))
new['decoder']['3.attention.in_proj.bias'] = torch.cat((s['first_stage_model.decoder.mid.attn_1.q.bias'], s['first_stage_model.decoder.mid.attn_1.k.bias'], s['first_stage_model.decoder.mid.attn_1.v.bias']), 0)
new['decoder']['3.attention.out_proj.weight'] = s['first_stage_model.decoder.mid.attn_1.proj_out.weight'].reshape((512, 512))
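# CLIP text encoder: fuse each layer's separate q_proj/k_proj/v_proj into one in_proj weight and bias.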
new['clip']['layers.0.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.v_proj.weight']), 0)
new['clip']['layers.0.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.v_proj.bias']), 0)
new['clip']['layers.1.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.v_proj.weight']), 0)
new['clip']['layers.1.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.v_proj.bias']), 0)
new['clip']['layers.2.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.v_proj.weight']), 0)
new['clip']['layers.2.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.v_proj.bias']), 0)
new['clip']['layers.3.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.v_proj.weight']), 0)
new['clip']['layers.3.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.v_proj.bias']), 0)
new['clip']['layers.4.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.v_proj.weight']), 0)
new['clip']['layers.4.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.v_proj.bias']), 0)
new['clip']['layers.5.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.v_proj.weight']), 0)
new['clip']['layers.5.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.v_proj.bias']), 0)
new['clip']['layers.6.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.v_proj.weight']), 0)
new['clip']['layers.6.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.v_proj.bias']), 0)
new['clip']['layers.7.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.v_proj.weight']), 0)
new['clip']['layers.7.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.v_proj.bias']), 0)
new['clip']['layers.8.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.v_proj.weight']), 0)
new['clip']['layers.8.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.v_proj.bias']), 0)
new['clip']['layers.9.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight']), 0)
new['clip']['layers.9.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias']), 0)
new['clip']['layers.10.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.v_proj.weight']), 0)
new['clip']['layers.10.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.v_proj.bias']), 0)
new['clip']['layers.11.attention.in_proj.weight'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.q_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.k_proj.weight'], s['cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.v_proj.weight']), 0)
new['clip']['layers.11.attention.in_proj.bias'] = torch.cat((s['cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.q_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.k_proj.bias'], s['cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.v_proj.bias']), 0)
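
Since the per-layer CLIP mappings above all follow the same pattern, the repetitive assignments could be generated in a loop. This is only a sketch of the same key renaming already shown above (it assumes nothing beyond the key names used there), not a tested replacement:

# Sketch: produce the same new['clip'] per-layer entries as the hand-written mappings above.
clip_src = 'cond_stage_model.transformer.text_model.encoder.layers'
for i in range(12):
    src = f'{clip_src}.{i}'
    dst = f'layers.{i}'
    # Fuse the separate q/k/v projections into one in_proj tensor.
    new['clip'][f'{dst}.attention.in_proj.weight'] = torch.cat(
        (s[f'{src}.self_attn.q_proj.weight'],
         s[f'{src}.self_attn.k_proj.weight'],
         s[f'{src}.self_attn.v_proj.weight']), 0)
    new['clip'][f'{dst}.attention.in_proj.bias'] = torch.cat(
        (s[f'{src}.self_attn.q_proj.bias'],
         s[f'{src}.self_attn.k_proj.bias'],
         s[f'{src}.self_attn.v_proj.bias']), 0)
    new['clip'][f'{dst}.attention.out_proj.weight'] = s[f'{src}.self_attn.out_proj.weight']
    new['clip'][f'{dst}.attention.out_proj.bias'] = s[f'{src}.self_attn.out_proj.bias']
    new['clip'][f'{dst}.layernorm_1.weight'] = s[f'{src}.layer_norm1.weight']
    new['clip'][f'{dst}.layernorm_1.bias'] = s[f'{src}.layer_norm1.bias']
    new['clip'][f'{dst}.linear_1.weight'] = s[f'{src}.mlp.fc1.weight']
    new['clip'][f'{dst}.linear_1.bias'] = s[f'{src}.mlp.fc1.bias']
    new['clip'][f'{dst}.linear_2.weight'] = s[f'{src}.mlp.fc2.weight']
    new['clip'][f'{dst}.linear_2.bias'] = s[f'{src}.mlp.fc2.bias']
    new['clip'][f'{dst}.layernorm_2.weight'] = s[f'{src}.layer_norm2.weight']
    new['clip'][f'{dst}.layernorm_2.bias'] = s[f'{src}.layer_norm2.bias']

Either form produces the same new['clip'] entries; the explicit version above was kept for clarity.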

@rnxyfvls

rnxyfvls commented Mar 4, 2024

@treeform Would you mind giving permission to merge your code into this repository, under the MIT license, as part of pull request #16?

@treeform

treeform commented Mar 5, 2024

Sure, I don't mind.

@Amna-pro

Amna-pro commented May 2, 2024

Can anyone guide me on training the model on my own dataset and then using this code for testing?
