
Generation with trained unet #13

Closed
bokyeong1015 opened this issue Aug 16, 2023 · 7 comments
@bokyeong1015
Member

response to #10 (comment)

I want to conduct zero-shot MS-COCO evaluation for my intermediate checkpoint trained with a multi-GPU setting, but I'm not sure how to specify my checkpoint.

Could you give me some hints for this?

In your instruction (2), you pass model_id.

Could I change model_id to my checkpoint path?

However, I don't know which file should be specified.

I guess it is unet_ema/diffusion_pytorch_model.bin. Am I right?

Thanks in advance.

@bokyeong1015 bokyeong1015 added the enhancement New feature or request label Aug 16, 2023
@bokyeong1015 bokyeong1015 self-assigned this Aug 16, 2023
@bokyeong1015
Member Author

To generate with the trained U-Net, we used the unet folder (and NOT the unet_ema folder).

Assuming a result directory structured as below:
results/kd_bk_small
|-- checkpoint-40000
| |-- unet
| |-- unet_ema
|-- checkpoint-45000
| |-- unet
| |-- unet_ema
|-- ...
|-- text_encoder
|-- unet
|-- vae

(1) To test with the last checkpoint (results/kd_bk_small/unet), use:
python3 src/generate.py --model_id results/kd_bk_small --save_dir $SAVE_DIR

(2) To test with a specific checkpoint (results/kd_bk_small/checkpoint-45000/unet), use:
python3 src/generate_with_trained_unet.py --unet_path results/kd_bk_small/checkpoint-45000 --save_dir $SAVE_DIR
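In short, the two commands differ only in which unet folder ends up being loaded; the path rule can be sketched as a small helper (resolve_unet_dir is a hypothetical illustration, not part of the repo):

```python
def resolve_unet_dir(result_dir, step=None):
    """Return the U-Net folder to generate with.

    step=None  -> the final U-Net at <result_dir>/unet
    step=45000 -> the intermediate one at <result_dir>/checkpoint-45000/unet
    (the unet_ema sibling is not used for generation)
    """
    if step is None:
        return f"{result_dir}/unet"
    return f"{result_dir}/checkpoint-{step}/unet"

print(resolve_unet_dir("results/kd_bk_small"))         # results/kd_bk_small/unet
print(resolve_unet_dir("results/kd_bk_small", 45000))  # results/kd_bk_small/checkpoint-45000/unet
```

Note the asymmetry in the actual scripts: generate.py takes the parent directory via --model_id, while generate_with_trained_unet.py takes the checkpoint directory via --unet_path.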

@youngwanLEE

@bokyeong1015 thanks for your quick solution.

When I tried to follow your instruction above, I encountered this error.

It might be a Hugging Face token issue.

Traceback (most recent call last):
  File "/home/user01/bk-sdm-private/src/generate_with_trained_unet.py", line 34, in <module>
    pipeline.set_pipe_and_generator()
  File "/home/user01/bk-sdm-private/src/utils/inference_pipeline.py", line 26, in set_pipe_and_generator
    self.pipe = StableDiffusionPipeline.from_pretrained(self.weight_folder,
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py", line 902, in from_pretrained
    cached_folder = cls.download(
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py", line 1314, in download
    info = model_info(
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 1666, in model_info
    headers = self._build_hf_headers(token=token)
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 5008, in _build_hf_headers
    return build_hf_headers(
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/huggingface_hub/utils/_headers.py", line 121, in build_hf_headers
    token_to_send = get_token_to_send(token)
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/huggingface_hub/utils/_headers.py", line 153, in get_token_to_send
    raise LocalTokenNotFoundError(
huggingface_hub.utils._headers.LocalTokenNotFoundError: Token is required (token=True), but no token found. You need to provide a token or be logged in to Hugging Face with huggingface-cli login or huggingface_hub.login. See https://huggingface.co/settings/tokens.

@bokyeong1015
Member Author

bokyeong1015 commented Aug 16, 2023

@youngwanLEE Would you please remove use_auth_token=True at this line?

  • We’ve added that line to test our private models. Sorry for the inconvenience.

Or, if the above does not work, could you try the commands below, referring to this page?

huggingface-cli login
huggingface-cli login --token $HUGGINGFACE_TOKEN

@youngwanLEE

@bokyeong1015

Both solutions showed this error:

Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 7.39it/s]
**load unet from ./results/kd_bk_small_8x4x8/checkpoint-20000/

0/30000 | COCO_val2014_000000000042.jpg A small dog is curled up on top of the shoes | 25 steps
Total 654.9M (U-Net 482.3M; TextEnc 123.1M; ImageDec 49.5M)
100%|██████████| 25/25 [00:01<00:00, 12.52it/s]
Traceback (most recent call last):
  File "/home/user01/bk-sdm-private/src/generate_with_trained_unet.py", line 58, in <module>
    img = pipeline.generate(prompt = val_prompt,
  File "/home/user01/bk-sdm-private/src/utils/inference_pipeline.py", line 35, in generate
    out = self.pipe(
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/user01/anaconda3/envs/kd-sdm/lib/python3.9/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 706, in __call__
    do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
TypeError: 'bool' object is not iterable

@bokyeong1015
Member Author

Ahh @youngwanLEE thanks for the quick response.

It may be due to a different diffusers version [ref], and we would recommend you to

  • downgrade diffusers to our specified version, diffusers==0.15.0
  • (if you wish to keep your current version) refer to here
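For background, the TypeError arises because the pipeline iterates over has_nsfw_concept, while in this setup it comes back as a bare bool. A minimal compatibility sketch (normalize_nsfw_flags is a hypothetical helper, not repo code) that broadcasts the bool to the list shape the pipeline expects:

```python
def normalize_nsfw_flags(has_nsfw_concept, batch_size):
    # The failing line, do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept],
    # needs an iterable; a bare bool (e.g. from a disabled safety checker)
    # raises "TypeError: 'bool' object is not iterable". Broadcast it.
    if isinstance(has_nsfw_concept, bool):
        return [has_nsfw_concept] * batch_size
    return has_nsfw_concept

print(normalize_nsfw_flags(False, 2))         # [False, False]
print(normalize_nsfw_flags([True, False], 2)) # [True, False]
```

Pinning diffusers==0.15.0, as recommended above, avoids the mismatch without any such shim.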

@youngwanLEE

@bokyeong1015 Oops, I finally resolved this issue.

Many thanks !!

@bokyeong1015
Member Author

bokyeong1015 commented Aug 16, 2023

Thanks for sharing, and sorry for the inconvenience you have experienced.
We will take steps to adjust our inference procedure to work with the latest versions of diffusers.

(+) to be resolved in #16
