
Dev #34

Merged
merged 23 commits into main on Mar 20, 2024

Conversation

EtienneDosSantos
Owner

No description provided.

Increased the decoder guidance scale from 0.0 to 1.1 to enable guidance scaling; values of 1.0 and below effectively disable this feature. The change steers the model's output to align more closely with the prompt.
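A minimal sketch of why values of 1.0 and below effectively disable guidance, assuming the standard classifier-free guidance formula used by diffusers-style pipelines (`cfg_combine` is a hypothetical helper for illustration, not code from this repository):

```python
def cfg_combine(uncond_pred, cond_pred, guidance_scale):
    # Standard classifier-free guidance: start from the unconditional
    # prediction and extrapolate toward the conditional one.
    return [u + guidance_scale * (c - u)
            for u, c in zip(uncond_pred, cond_pred)]

uncond = [0.0, 0.5]
cond = [1.0, 1.5]

# guidance_scale = 1.0 reduces exactly to the conditional prediction,
# so guidance adds nothing:
assert cfg_combine(uncond, cond, 1.0) == cond

# guidance_scale = 1.1 pushes the output past the conditional
# prediction, more strongly toward the prompt:
print(cfg_combine(uncond, cond, 1.1))
```

This also matches why pipelines typically only apply guidance when the scale is strictly greater than 1.0.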
Found a few lines in the loading process that are unnecessary and even slow model loading down further. See [issue #32](#32) for reference.
Added scheduler metadata
Added `run_bf16.py`, a modified version of `run.py` for testing the effects of `torch.float16` vs. `torch.bfloat16` in the decoder.
Batch size issue fixed!
Deleted the `num_images_per_prompt` bug notice.
Changed `torch.float16` to `torch.bfloat16` for the decoder, because loading is faster with `torch.bfloat16`.
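One plausible reason the `torch.bfloat16` cast is cheap: bfloat16 shares float32's 8-bit exponent, so converting float32 weights is essentially just dropping the low 16 mantissa bits, with no range rescaling as in float16. A pure-Python illustration of that truncation (the helper names are hypothetical, not PyTorch APIs):

```python
import struct

def float32_to_bfloat16_bits(x):
    # Pack as little-endian float32, keep only the top 16 bits.
    # bfloat16 has the same 8-bit exponent as float32, so truncation
    # (ignoring rounding) is the entire conversion.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(bits):
    # Widen back by padding the dropped mantissa bits with zeros.
    (x,) = struct.unpack("<f", struct.pack("<I", bits << 16))
    return x

approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
print(approx)  # → 3.140625: same range as float32, ~3 decimal digits of precision
```

The trade-off is mantissa precision (7 bits vs. float16's 10), which is what the dtype comparison charts in this PR measure in practice.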
Added charts from the `torch.float16` vs. `torch.bfloat16` tests.
…s/charts/chart_dtype_inference_and_loading_speeds_compared.png
20.03.2024 updates
Finished "3. Test Decoder Dtype Influence"
Updated the diffusers requirement, since fixing the batch sizes necessitated it.
Added `use_safetensors=True` to ensure the safetensors format is used instead of `.bin` weight files.
Set the decoder `guidance_scale` to 1.9 and `num_inference_steps` to 54 for the best image quality.
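Taken together, the loading and inference settings above could look roughly like this in a diffusers-style pipeline. This is a hedged configuration sketch only: the model id, prompt, and pipeline class are placeholders, not taken from this repository's `run.py`.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "some/decoder-model",        # placeholder model id
    torch_dtype=torch.bfloat16,  # faster loading than float16 per this PR's tests
    use_safetensors=True,        # prefer .safetensors over .bin weights
)

images = pipe(
    prompt="a photo of an astronaut",  # placeholder prompt
    guidance_scale=1.9,                # > 1.0, so guidance is active
    num_inference_steps=54,
).images
```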
@EtienneDosSantos EtienneDosSantos merged commit cae9478 into main Mar 20, 2024