Labels
jit-backlog, oncall: jit (add this issue/PR to JIT oncall triage queue), triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
🐛 Bug
I'm writing a C++ program that loads a module from disk and tries to work with it. Unlike `torch::nn::Module`, it appears that `torch::jit::script::Module` has no `zero_grad()` method. This makes it impossible to do optimization.
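As a sketch of a manual workaround (not an official API): since `torch::jit::script::Module` exposes `parameters()`, the gradients can be zeroed by hand between optimization steps. The model path `model.pt` below is a hypothetical placeholder.

```cpp
#include <torch/script.h>

int main() {
  // Load a TorchScript module from disk (path is a placeholder).
  torch::jit::script::Module module = torch::jit::load("model.pt");

  // Manual stand-in for the missing zero_grad(): iterate over the
  // module's parameters and zero each defined gradient in place.
  for (auto p : module.parameters()) {
    if (p.grad().defined()) {
      p.grad().detach_();
      p.grad().zero_();
    }
  }
  return 0;
}
```

The same parameter list can also be handed to a `torch::optim` optimizer, which zeroes gradients via its own `zero_grad()`.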
Environment
- PyTorch Version (e.g., 1.0): Nightly build downloaded Sept. 29, 2019
- OS (e.g., Linux): Linux
- How you installed PyTorch (conda, pip, source): Downloaded from https://download.pytorch.org/libtorch/nightly/cu101/libtorch-cxx11-abi-shared-with-deps-latest.zip
- Build command you used (if compiling from source): n/a
- Python version: n/a
- CUDA/cuDNN version: n/a
- GPU models and configuration: n/a
- Any other relevant information:
cc @suo