Conversation

Contributor

@kashif kashif commented Sep 8, 2022

for issue #293

@HuggingFaceDocBuilderDev commented Sep 8, 2022

The documentation is not available anymore as the PR was closed or merged.

@patrickvonplaten patrickvonplaten changed the title docs for models [Docs] Models Sep 8, 2022
Contributor

@patrickvonplaten patrickvonplaten left a comment

Very cool! Could we also add one more sentence to all public model classes as stated here:
https://github.com/huggingface/diffusers/pull/416/files#r965807798

Could you also add a docstring to the forward method? :-)
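For reference, a forward docstring in the reST style used elsewhere in this PR might look like the following minimal sketch; the class, argument names, and shapes are illustrative placeholders, not the actual diffusers signatures:

```python
# Minimal sketch of a forward docstring in the repo's reST style.
# The class and arguments here are illustrative, not the real UNet2DModel API.
class ToyModel:
    def forward(self, sample, timestep):
        r"""
        Args:
            sample (`torch.FloatTensor`): Noisy input of shape `(batch, channel, height, width)`.
            timestep (`int`): The current diffusion timestep.

        Returns:
            The processed sample (identity here, for illustration).
        """
        return sample
```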

kashif and others added 3 commits September 8, 2022 12:54
Member

@pcuenca pcuenca left a comment

Looking good! I just pointed out some mismatches between the docstrings and the formal argument lists.

Comment on lines +211 to +212
heads (:obj:`int`, *optional*, defaults to 8): The number of heads to use for multi-head attention.
dim_head (:obj:`int`, *optional*, defaults to 64): The number of channels in each head.
Member

We should consolidate these names across modules in the future. They are called `n_heads` and `d_head` in `BasicTransformerBlock`.
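As a hedged sketch of what that consolidation could look like (class bodies and defaults are illustrative, not the actual diffusers implementation):

```python
# Illustrative sketch only: both blocks accept the same `n_heads` / `d_head`
# names, instead of `heads` / `dim_head` in one and `n_heads` / `d_head` in
# the other. Defaults mirror the docstring above (8 heads, 64 channels each).
class BasicTransformerBlock:
    def __init__(self, n_heads: int = 8, d_head: int = 64):
        self.n_heads = n_heads
        self.d_head = d_head
        self.inner_dim = n_heads * d_head  # total width of the attention projection


class CrossAttention:
    def __init__(self, n_heads: int = 8, d_head: int = 64):  # same names as above
        self.n_heads = n_heads
        self.d_head = d_head
        self.inner_dim = n_heads * d_head
```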

Contributor

Leaving for a future PR for now due to time constraints.

up_block_types (`Tuple[str]`, *optional*, defaults to :obj:`("UpDecoderBlock2D",)`): Tuple of upsample block types.
block_out_channels (`Tuple[int]`, *optional*, defaults to :obj:`(64,)`): Tuple of block output channels.
Member

Missing: layers_per_block
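A possible entry, following the surrounding docstring style (the class is a placeholder and the default shown is illustrative):

```python
# Sketch of the missing docstring entry, matching the style of the
# neighboring arguments; the default value here is illustrative.
class Decoder:
    r"""
    Args:
        block_out_channels (`Tuple[int]`, *optional*, defaults to :obj:`(64,)`): Tuple of block output channels.
        layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block.
    """
```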

Contributor

Leaving for a future PR for now due to time constraints.

up_block_types (`Tuple[str]`, *optional*, defaults to :obj:`("UpDecoderBlock2D",)`): Tuple of upsample block types.
block_out_channels (`Tuple[int]`, *optional*, defaults to :obj:`(64,)`): Tuple of block output channels.
Member

Missing: layers_per_block

Contributor

Leaving for a future PR for now due to time constraints.

patrickvonplaten and others added 3 commits September 8, 2022 15:08
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
@patrickvonplaten patrickvonplaten merged commit 5e6417e into huggingface:main Sep 8, 2022
@kashif kashif deleted the model-docs branch September 8, 2022 16:09
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
* docs for attention

* types for embeddings

* unet2d docstrings

* UNet2DConditionModel docstrings

* fix typos

* style and vq-vae docstrings

* docstrings for VAE

* Update src/diffusers/models/unet_2d.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

* added inherits from sentence

* docstring to forward

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finish model docs

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>