
Conversation

@asi1024 (Contributor) commented Feb 2, 2021

This PR implements `UninitializedBuffer` and `LazyBatchNormXd` based on #44538. (cc @emcastillo and @albanD)
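By way of illustration, a minimal usage sketch of what the lazy module enables, assuming the class is exposed as `torch.nn.LazyBatchNorm2d`; `num_features` and the running statistics are inferred from the first forward pass rather than specified up front:

```python
import torch
import torch.nn as nn

# No num_features argument: parameters and running statistics start
# out uninitialized (UninitializedParameter / UninitializedBuffer).
bn = nn.LazyBatchNorm2d()

x = torch.randn(4, 16, 8, 8)  # 4 samples, 16 channels
out = bn(x)                   # first forward materializes everything

print(bn.num_features)        # 16, inferred from the input
print(bn.running_mean.shape)  # torch.Size([16])
```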

@facebook-github-bot (Contributor) commented Feb 2, 2021

💊 CI failures summary and remediations

As of commit 623ec45 (more details on the Dr. CI page):


  • 2/3 failures possibly* introduced in this PR
    • 2/2 non-CircleCI failure(s)
  • 1/3 broken upstream at merge base d3023d8 on Feb 04 from 6:53pm to 11:23pm

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

```
git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD
```

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@albanD (Collaborator) left a comment

Thanks for the PR!
The batchnorm part looks good to me. Just a few small comments.

For the LazyBuffer, I think we can do some refactoring to simplify the code, see comment below.

@albanD (Collaborator) commented on the diff:

`paremeter` -> `buffer`

@albanD (Collaborator) commented on the diff:

Unlike

@albanD (Collaborator) commented on the diff:

nit: can we share this list with the UninitializedParameter version above?

@albanD (Collaborator) commented on the diff:

This looks almost identical to the parameter version above. I wonder if we could reduce the code duplication with an `UninitializedMixin` that is shared by both?
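As a rough sketch of that suggestion (the names `UninitializedMixin`, `materialize`, and `cls_to_become` are illustrative here, not necessarily what the PR ends up with), the shared behavior could live in one mixin:

```python
import torch
from torch.nn.parameter import Parameter

class UninitializedMixin:
    """Behavior shared by uninitialized parameters and buffers."""

    def materialize(self, shape, device=None, dtype=None):
        # Allocate real storage once the shape is known, then swap the
        # instance's class back to the ordinary Parameter/Tensor type.
        device = device if device is not None else self.data.device
        dtype = dtype if dtype is not None else self.data.dtype
        self.data = torch.empty(shape, device=device, dtype=dtype)
        self.__class__ = self.cls_to_become

class UninitializedParameter(UninitializedMixin, Parameter):
    cls_to_become = Parameter

class UninitializedBuffer(UninitializedMixin, torch.Tensor):
    cls_to_become = torch.Tensor

p = UninitializedParameter()
p.materialize((3, 4))
print(type(p).__name__, tuple(p.shape))  # Parameter (3, 4)
```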

@H-Huang added the `triaged` label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Feb 2, 2021
@asi1024 (Contributor, Author) commented Feb 3, 2021

@albanD Thank you for the review! Could you take another look?

@albanD (Collaborator) left a comment

Thanks for the update! It looks much better.

@albanD (Collaborator) left a comment

The code looks good to me!
The CI failures are real though; can you fix them? Let me know if you need help!

@asi1024 (Contributor, Author) commented Feb 5, 2021

@albanD I fixed the mypy failure in 623ec45, but doc_test is still failing. Could you help me get the CI to pass?

@albanD (Collaborator) left a comment

Hey!
Thanks for fixing the mypy issues.
The doc failure does look unrelated to me.

@facebook-github-bot (Contributor) left a comment

@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented

This pull request has been reverted by a930162.

@albanD (Collaborator) commented Feb 5, 2021

So the doc issues were real and broke master. Could you send a new PR with the same diff so we can work out how to fix the doc?

@facebook-github-bot (Contributor) commented

@albanD merged this pull request in aa1fd6b.

@asi1024 asi1024 mentioned this pull request Feb 7, 2021
facebook-github-bot pushed a commit that referenced this pull request Feb 9, 2021
Summary:
Same diff with #51548 (cc. albanD)

Pull Request resolved: #51862

Reviewed By: izdeby

Differential Revision: D26312289

Pulled By: albanD

fbshipit-source-id: 9cdec0e0c9021c33d10d85010978c7fa5cb4dc60
facebook-github-bot pushed a commit that referenced this pull request Feb 18, 2021
Summary:
Some minor improvements for the lazy modules introduced in #44538, #47350 and #51548.

This PR mainly turns the bias into an `UninitializedParameter`, and instead of creating empty tensors like
```python
self.bias = Parameter(torch.Tensor(0))
self.bias = UninitializedParameter()
```
I think it would be better to
```python
self.register_parameter('bias', None)
self.bias = UninitializedParameter()
```

In addition, I changed the constructor of `LazyBatchNorm` from
```python
self.running_mean = UninitializedBuffer()
```
to
```python
self.register_buffer('running_mean', UninitializedBuffer())
```
since the original assignment would not update the underlying `self._buffers`.

Thank you for your time reviewing this PR :).

Gently ping albanD, mruberry

Pull Request resolved: #52212

Reviewed By: jbschlosser

Differential Revision: D26504508

Pulled By: albanD

fbshipit-source-id: 7094d0bb4fa9e2a40a07b79d350ea12a6ebfd080
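To illustrate the point about `self._buffers` in the commit message above, a minimal sketch with a hypothetical `Demo` module (not PyTorch's actual `LazyBatchNorm` code): a bare tensor assigned as a plain attribute bypasses `nn.Module`'s buffer machinery, so it never appears in `_buffers` or the state dict:

```python
import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain attribute: nn.Module.__setattr__ does not treat a bare
        # tensor as a buffer, so it is stored as a normal Python attribute.
        self.plain = torch.zeros(3)
        # Registered buffer: stored in self._buffers, moved by .to() and
        # included in the state dict.
        self.register_buffer('tracked', torch.zeros(3))

m = Demo()
print(list(m._buffers))              # ['tracked']
print('plain' in m.state_dict())     # False
print('tracked' in m.state_dict())   # True
```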
xsacha pushed a commit to xsacha/pytorch that referenced this pull request Mar 31, 2021 (same commit message as #51862 above)
xsacha pushed a commit to xsacha/pytorch that referenced this pull request Mar 31, 2021 (same commit message as #52212 above)

Labels

cla signed · Merged · open source · Reverted · triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


5 participants