
AttributeError: 'ToMeBlock' object has no attribute 'drop_path' #5

Closed
kos94ok opened this issue Oct 22, 2022 · 7 comments


kos94ok commented Oct 22, 2022

File "/tome/tome/patch/timm.py", line 35, in forward x = x + self.drop_path_rate(x_attn) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1208, in __getattr__ type(self).__name__, name)) AttributeError: 'ToMeBlock' object has no attribute 'drop_path_rate'


dbolya commented Oct 22, 2022

/tome/patch/timm.py contains no mention of "drop_path_rate". Did you edit the code?


kos94ok commented Oct 22, 2022

> /tome/patch/timm.py contains no mention of "drop_path_rate". Did you edit the code?

File "/content/tome/tome/patch/timm.py", line 33, in forward x = x + self.drop_path(x_attn) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1208, in __getattr__ type(self).__name__, name)) AttributeError: 'ToMeBlock' object has no attribute 'drop_path'


dbolya commented Oct 22, 2022

drop_path should exist in timm 0.4.12: https://github.com/rwightman/pytorch-image-models/blob/7096b52a613eefb4f6d8107366611c8983478b19/timm/models/vision_transformer.py#L207

Are you using the right version of timm (0.4.12) and passing in a timm model?
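For reference, one quick way to check which version is installed (a minimal sketch; it assumes timm is importable in the same environment):

import timm
print(timm.__version__)  # ToMe's timm patch expects 0.4.12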


dbolya commented Oct 22, 2022

Alternatively, you can replace drop_path with an identity since that's only necessary during training.
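For example, a minimal sketch of that workaround (illustrative only; it assumes the patched blocks are reachable as model.blocks, as in timm's VisionTransformer, and that the model is used for inference only):

import torch.nn as nn

def stub_drop_path(model: nn.Module) -> None:
    # DropPath (stochastic depth) only does anything during training, so an
    # Identity module gives identical results at inference time.
    for block in model.blocks:
        block.drop_path = nn.Identity()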


kos94ok commented Oct 22, 2022

> drop_path should exist in timm 0.4.12: https://github.com/rwightman/pytorch-image-models/blob/7096b52a613eefb4f6d8107366611c8983478b19/timm/models/vision_transformer.py#L207
>
> Are you using the right version of timm (0.4.12) and passing in a timm model?

I'm using timm==0.6.11.

My code:

...

class Encoder(VisionTransformer):

    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768, depth=12, num_heads=12, mlp_ratio=4.,
                 qkv_bias=True, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed):
        super().__init__(img_size, patch_size, in_chans, embed_dim=embed_dim, depth=depth, num_heads=num_heads,
                         mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, drop_rate=drop_rate, attn_drop_rate=attn_drop_rate,
                         drop_path_rate=drop_path_rate, embed_layer=embed_layer,
                         num_classes=0, global_pool='', class_token=False)  # these disable the classifier head

    def forward(self, x):
        # Return all tokens
        return self.forward_features(x)
...

self.encoder = Encoder(img_size, patch_size, embed_dim=embed_dim, depth=enc_depth, num_heads=enc_num_heads,
                           mlp_ratio=enc_mlp_ratio)
tome.patch.timm(self.encoder, prop_attn=False)
self.encoder.r = 16

...

kos94ok changed the title from "AttributeError: 'ToMeBlock' object has no attribute 'drop_path_rate'" to "AttributeError: 'ToMeBlock' object has no attribute 'drop_rate'" on Oct 22, 2022
kos94ok changed the title from "AttributeError: 'ToMeBlock' object has no attribute 'drop_rate'" to "AttributeError: 'ToMeBlock' object has no attribute 'drop_path'" on Oct 22, 2022

dbolya commented Oct 22, 2022

Ah, we don't yet support higher versions of timm so you'll have to install 0.4.12.


kos94ok commented Oct 22, 2022

> Ah, we don't yet support higher versions of timm so you'll have to install 0.4.12.

I think I solved it.

With timm==0.6.11, I replaced in tome/tome/patch/timm.py:

x = x + self.drop_path(x_attn)
x = x + self.drop_path(self.mlp(self.norm2(x)))

with

x = x + self.drop_path1(x_attn)
x = x + self.drop_path1(self.mlp(self.norm2(x)))
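A hedged sketch of a more version-tolerant variant of those two lines (not the upstream fix in 638fc08, just an illustration; it assumes the block exposes either a single drop_path, as in timm 0.4.x, or drop_path1/drop_path2, as in newer releases):

import torch.nn as nn

def _resolve_drop_path(block: nn.Module, name: str) -> nn.Module:
    # Prefer the split attribute (drop_path1/drop_path2) if present, fall back
    # to the old single drop_path, and otherwise use a no-op.
    return getattr(block, name, getattr(block, "drop_path", nn.Identity()))

# Inside ToMeBlock.forward, the two residual additions would then read:
#     x = x + _resolve_drop_path(self, "drop_path1")(x_attn)
#     x = x + _resolve_drop_path(self, "drop_path2")(self.mlp(self.norm2(x)))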

dbolya closed this as completed in 638fc08 on Nov 12, 2022