Fix `qnn.TorchLayer` attributes (#4611)
This looks pretty good, but I'm not sure what standard we want to set for `__setattr__` in general, so I'm asking here.
Codecov Report — Patch coverage:

```
@@ Coverage Diff @@
##           master   #4611      +/-   ##
==========================================
- Coverage   99.64%  99.64%   -0.01%
==========================================
  Files         375     375
  Lines       33398   33397       -1
==========================================
- Hits        33280   33279       -1
  Misses        118     118
==========================================
```

View full report in Codecov by Sentry.
Co-authored-by: Matthew Silverman <matthews@xanadu.ai>
thanks for addressing my comments, this looks great!
Small comment, but not really blocking
Fixes bug #4584. `torch.nn.Module` has some special `__getattr__` logic that we were ignoring once the QNode had been initialized. This bugfix makes `qnn.TorchLayer.__getattr__` fall back to `torch.nn.Module.__getattr__` when the requested attribute is not found on the QNode.

Co-authored-by: Matthew Silverman <matthews@xanadu.ai>
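The delegation pattern behind this fix can be sketched with plain Python stand-ins. This is a minimal sketch, not PennyLane's or PyTorch's actual code: `ModuleStandIn`, `FakeQNode`, and `TorchLayerSketch` are hypothetical names, and `ModuleStandIn` only mimics the relevant part of `torch.nn.Module.__getattr__` (resolving registered parameters from an internal dict).

```python
class ModuleStandIn:
    """Stand-in for torch.nn.Module: its __getattr__ resolves
    registered parameters from an internal dict."""

    def __init__(self):
        object.__setattr__(self, "_parameters", {})

    def __getattr__(self, name):
        params = object.__getattribute__(self, "_parameters")
        if name in params:
            return params[name]
        raise AttributeError(name)


class FakeQNode:
    """Hypothetical QNode stand-in with one attribute of its own."""
    interface = "torch"


class TorchLayerSketch(ModuleStandIn):
    def __init__(self, qnode):
        super().__init__()
        self.__dict__["qnode"] = qnode

    def __getattr__(self, name):
        # First try the QNode, as before...
        qnode = self.__dict__.get("qnode")
        if qnode is not None and hasattr(qnode, name):
            return getattr(qnode, name)
        # ...then fall back to the base class (the bugfix), so that
        # registered parameters such as "weights" still resolve.
        return super().__getattr__(name)


layer = TorchLayerSketch(FakeQNode())
layer._parameters["weights"] = [0.1, 0.2]
print(layer.interface)  # resolved from the QNode: "torch"
print(layer.weights)    # resolved via the base-class __getattr__
```

Without the `super().__getattr__` fallback, the final lookup would raise `AttributeError` even though the parameter is registered on the module.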
**Context:** Fixes #4999. We re-implemented the default `setattr` and `getattr` dunders for `KerasLayer` (needed to modify the QNode saved on the layer), but we should have called `super()`, because `keras.layers.Layer` has its own custom implementation that we need to use. Our `TorchLayer` already calls `super()` in these cases. Note: our existing tests were passing because the weights used had custom names ("w1", "w2", etc.), and this issue only comes up when your weights are named "weights" (a rather common case, I'm sure).

**Description of the Change:**
- Replace our default handling of the `get/set-attr` dunders with a call to `super()`
- Add a test case matching the one in the bug report, using the "weights" identifier

**Benefits:** Models using `KerasLayer` can be saved and loaded properly!

**Possible Drawbacks:** N/A

**Related GitHub Issues:** Reported initially in #4999; same as #4611 but for Keras.
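Why calling `super()` matters here can be shown with a hedged sketch. These are plain Python stand-ins, not the real Keras API: `TrackingLayer` is an assumption that mimics only the relevant behavior of `keras.layers.Layer`, whose `__setattr__` tracks assignments so the framework can find weights when saving and loading.

```python
class TrackingLayer:
    """Stand-in for keras.layers.Layer: its __setattr__ records every
    assignment so the framework can locate weights to save/load."""

    def __init__(self):
        object.__setattr__(self, "_tracked", {})

    def __setattr__(self, name, value):
        self._tracked[name] = value
        object.__setattr__(self, name, value)


class KerasLayerSketch(TrackingLayer):
    def __setattr__(self, name, value):
        # (layer-specific handling of QNode-related attributes would go here)
        # The fix: delegate to the base class so its tracking still runs,
        # instead of calling object.__setattr__ directly, which silently
        # skipped the tracking and broke model saving/loading.
        super().__setattr__(name, value)


layer = KerasLayerSketch()
layer.weights = [0.5]
print("weights" in layer._tracked)  # True: the base class saw the assignment
```

If `KerasLayerSketch.__setattr__` used `object.__setattr__` instead, the attribute would exist but never appear in `_tracked`, which is the shape of the bug reported in #4999.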