
Faster initialization of Inception family #2170

Merged 1 commit into pytorch:master on May 5, 2020

Conversation

bisakhmondal (Contributor):
This PR tries to fix #2166 by addressing the slow object construction of GoogleNet & Inception_v3.
Now the default argument for init_weights is set to None; users have to explicitly set init_weights = True or False when constructing a new object, otherwise an exception is raised.
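The approach described above can be sketched as follows; `Inception3Sketch` and its reduced signature are hypothetical stand-ins for the real torchvision constructor:

```python
from typing import Optional

class Inception3Sketch:
    # Hypothetical, reduced sketch of the PR's original approach:
    # init_weights defaults to None, and constructing the model without
    # an explicit choice raises a ValueError.
    def __init__(self, num_classes: int = 1000,
                 init_weights: Optional[bool] = None) -> None:
        if init_weights is None:
            raise ValueError(
                'For fast initialization, init_weights (weights '
                'initialization) is set to None. '
                'Pass init_weights = True or False.')
        self.num_classes = num_classes
        self.init_weights = init_weights
```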

fmassa (Member) left a comment:

Thanks for the PR!

I have a few comments that I think could lead to a better user experience; could you have a look?

Comment on lines 71 to 72
raise ValueError('For fast initialization, init_weights (weights initialization) is set '
'to None. Pass init_weights = True or False.')
fmassa (Member):

Instead of raising a ValueError (which would be a BC-breaking change), can you instead use a FutureWarning, as in warnings.warn(msg, FutureWarning), and then set init_weights=True? This way users won't see their code failing right away.

I would also change the message to something a bit more clear for the end-users. Maybe something like

The default weight initialization of GoogleNet will change in future releases of torchvision. If you wish to keep the old behavior (which leads to long initialization times due to scipy/scipy#11299), please set init_weight=True.
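fmassa's suggestion can be sketched as a small helper; `resolve_init_weights` is a hypothetical name for illustration, not part of the torchvision API:

```python
import warnings

def resolve_init_weights(init_weights=None):
    # Hypothetical sketch of the suggested approach: emit a FutureWarning
    # instead of raising, then fall back to the old default (True) so
    # existing callers see no change in behavior.
    if init_weights is None:
        warnings.warn(
            'The default weight initialization of GoogleNet will change in '
            'future releases of torchvision. If you wish to keep the old '
            'behavior (which leads to long initialization times due to '
            'scipy/scipy#11299), please set init_weights=True.',
            FutureWarning)
        init_weights = True
    return init_weights
```

Passing an explicit True or False bypasses the warning entirely, which is the migration path the message asks users to take.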

bisakhmondal (Contributor, Author):

Thanks for the review @fmassa. Actually, I thought you wanted some kind of exception:

forcing the users to be aware of it until they explicitly set the value to either True or False

We can definitely switch to using a warning; in the current context that seems the better choice.

bisakhmondal (Contributor, Author):

@fmassa, one question: if we set init_weights=True after giving the warning, will it make any difference? Object construction will still take a lot of time even where the default weight initialization is not required.
As you said here, maybe we can set init_weights=False instead and ask users who need the weights to opt in:

please set init_weight=True

What do you suggest? Thanks.

fmassa (Member):

I think we should keep the init_weights default to True for now, because we need to give users some time to update their code. If the behavior suddenly changed without at least one release in between, it would not be a great experience.

bisakhmondal (Contributor, Author):

Yeah, sure! I understand.

    super(Inception3, self).__init__()
    if inception_blocks is None:
        inception_blocks = [
            BasicConv2d, InceptionA, InceptionB, InceptionC,
            InceptionD, InceptionE, InceptionAux
        ]
    if init_weights is None:
        raise ValueError('For fast initialization, init_weights (weights initialization) is set '
                         'to None. Pass init_weights = True or False.')
fmassa (Member):

Same comment here

Comment on lines 53 to 56
if name in ['inception_v3', 'googlenet']:
    model = torchvision.models.quantization.__dict__[name](pretrained=False, quantize=True, init_weights=True)
else:
    model = torchvision.models.quantization.__dict__[name](pretrained=False, quantize=True)
fmassa (Member):

I believe those changes in the test are only needed because we currently raise an error; if we instead emit a warning, this wouldn't be necessary, right?

bisakhmondal (Contributor, Author):

Yeah. With a warning it's not required.

fmassa (Member):

If it's not required, it might be worth removing this, unless the tests are now much faster to run?

bisakhmondal (Contributor, Author):

That's not required. Removed.

Comment on lines 87 to 90
if name in ['inception_v3', 'googlenet']:
    model = models.__dict__[name](num_classes=50, init_weights=True)
else:
    model = models.__dict__[name](num_classes=50)
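The conditional pattern in the quoted test can be sketched generically; `build_model` and the toy registry below are hypothetical stand-ins for `models.__dict__`:

```python
def build_model(name, registry, num_classes=50):
    # Only the Inception-family constructors accept init_weights, so the
    # extra keyword is added per-model instead of duplicating the call.
    kwargs = {'num_classes': num_classes}
    if name in ('inception_v3', 'googlenet'):
        kwargs['init_weights'] = True
    return registry[name](**kwargs)
```

As the review notes, this branching only exists while the constructors raise on init_weights=None; with a warning instead, the plain call suffices for every model.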
fmassa (Member):

Same comment as before.

    warnings.warn('The default weight initialization of GoogleNet will be changed in future releases of '
                  'torchvision. If you wish to keep the old behavior (which leads to long initialization times'
                  ' due to scipy/scipy#11299), please set init_weights=True.', FutureWarning)
    init_weights = False
fmassa (Member):

As I mentioned before, can you set it to True instead so that users don't have a change in behavior?

@codecov-io commented May 5, 2020

Codecov Report

Merging #2170 into master will decrease coverage by 0.00%.
The diff coverage is 0.00%.

@@            Coverage Diff            @@
##           master   #2170      +/-   ##
=========================================
- Coverage    0.48%   0.48%   -0.01%     
=========================================
  Files          92      92              
  Lines        7442    7448       +6     
  Branches     1135    1138       +3     
=========================================
  Hits           36      36              
- Misses       7393    7399       +6     
  Partials       13      13              
Impacted Files Coverage Δ
torchvision/models/googlenet.py 0.00% <0.00%> (ø)
torchvision/models/inception.py 0.00% <0.00%> (ø)
torchvision/ops/misc.py 0.00% <0.00%> (ø)
torchvision/ops/boxes.py 0.00% <0.00%> (ø)
torchvision/transforms/transforms.py 0.00% <0.00%> (ø)
torchvision/models/segmentation/deeplabv3.py 0.00% <0.00%> (ø)

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@bisakhmondal bisakhmondal requested a review from fmassa May 5, 2020 16:57
fmassa (Member) left a comment:

Thanks a lot!

@fmassa fmassa merged commit e1a3042 into pytorch:master May 5, 2020
Successfully merging this pull request may close these issues.

Make initialization of GoogleNet / Inception faster
3 participants