According to the PyTorch autograd documentation, autograd Functions provide a previous_functions instance variable that tracks which autograd Functions produced their inputs. previous_functions is necessary to reconstruct the DAG in Python code and potentially export it to visualizers, analyzers, or other frameworks. I found that autograd Function objects implemented in C++ do not expose a previous_functions instance variable, which makes reconstructing the DAG impossible (I believe this information is preserved somewhere in the C++ classes, but I did not find a way to access it from Python).
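For a graph built entirely from Python-level Functions, the documented attributes behave as expected. A minimal sketch (assuming the same pre-0.2 autograd API as the repro below, and that elementwise Add is implemented in Python in this version):

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + x
print(y.creator)                      # the Function that produced y
print(y.creator.previous_functions)   # pairs whose first element is the
                                      # producing Function (or a leaf Variable)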
Repro:
import torch.autograd
import torch.nn

def qualified_type(var):
    # Return the fully qualified class name of var, e.g. "torch.autograd.Variable".
    if var is None:
        return "None"
    else:
        module = var.__class__.__module__
        type = var.__class__.__name__
        if module is None or module == "__builtin__":
            return type
        else:
            return module + "." + type

import torchvision.models as models

model = models.alexnet(pretrained=True)
input = torch.FloatTensor(1, 3, 224, 224)
input_var = torch.autograd.Variable(input)
output_var = model(input_var)

seen = set()

def add_nodes(var):
    # Walk the autograd graph backwards, following previous_functions on
    # Functions and creator on Variables.
    if var not in seen:
        if isinstance(var, torch.autograd.Variable):
            print("Variable: " + qualified_type(var))
        else:
            print("Function: " + qualified_type(var))
        seen.add(var)
        if hasattr(var, 'previous_functions'):
            for u in var.previous_functions:
                add_nodes(u[0])
        elif hasattr(var, 'creator'):
            if var.creator is not None:
                add_nodes(var.creator)

add_nodes(output_var.creator)
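For comparison, a sketch of the same traversal against the grad_fn / next_functions attributes (an assumption: these names come from newer PyTorch versions, >= 0.4, not the release this issue was filed against):

import torch
import torchvision.models as models

model = models.alexnet(pretrained=True)
output = model(torch.zeros(1, 3, 224, 224))

seen = set()

def walk(fn):
    if fn is None or fn in seen:
        return
    seen.add(fn)
    print("Function: " + type(fn).__name__)
    # next_functions is a tuple of (Function, index) pairs, analogous to
    # previous_functions in the legacy API; it is populated for C++ nodes too.
    for next_fn, _ in fn.next_functions:
        walk(next_fn)

walk(output.grad_fn)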