None: Replace the Tensor Layer name from "Unnamed Layer #" to its original name #2329
Conversation
@nvpohanh for viz
**gs-olive** left a comment:
This change would be very beneficial for layer naming, thanks for the PR! It would be preferable to add this change to the Dynamo directory instead. Could you copy the function `set_layer_name` with your changes to the file `py/torch_tensorrt/dynamo/conversion/converter_utils.py`, and then update imports of `set_layer_name` in any file within the `py/torch_tensorrt/dynamo/` directory to use this version of the function? The `fx/` directory should preferably be unchanged.
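For illustration, the naming format discussed in this PR could be factored into a small helper. This is a hypothetical sketch; `build_layer_name` and its signature are illustrative, not the actual code in `converter_utils.py`:

```python
# Hypothetical helper mirroring the layer-name format discussed in this PR.
# The function name and signature are assumptions for illustration only.
def build_layer_name(
    layer_type: str, target_name: str, name: str, num_outputs: int
) -> str:
    """Compose a descriptive TensorRT layer name from its parts."""
    return f"[{layer_type}]-[{target_name}]-[{name}]-[#_of_outputs_{num_outputs}]"


print(build_layer_name("CONVOLUTION", "aten_ops.conv2d", "conv1", 1))
# -> [CONVOLUTION]-[aten_ops.conv2d]-[conv1]-[#_of_outputs_1]
```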
```python
for i in range(layer.num_outputs):
    output = layer.get_output(i)
    layer.name.append(f"-[{output.name}]")
```
Should `layer.name.append(f"-[{output.name}]")` instead be `layer.name += f"-[{output.name}]"`?
@wu6u3tw I think we should set the output tensor's name. Something like: `output.name = "..."`
We should not just append all the output tensor names to the layer name, because that makes the layer name very long.
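The suggested alternative, naming each output tensor after its layer rather than appending output names to the layer name, can be sketched as follows. `FakeTensor` and `FakeLayer` are stand-ins for TensorRT objects, and the `-output_{i}` suffix scheme is an assumption for illustration:

```python
# Stand-in classes to illustrate the suggestion; FakeTensor and FakeLayer
# are hypothetical, not part of the TensorRT API.
class FakeTensor:
    def __init__(self, name: str):
        self.name = name


class FakeLayer:
    def __init__(self, name: str, outputs):
        self.name = name
        self._outputs = outputs
        self.num_outputs = len(outputs)

    def get_output(self, i: int):
        return self._outputs[i]


layer = FakeLayer(
    "[ELEMENTWISE]-[aten_ops.add]-[add_1]",
    [FakeTensor("out0"), FakeTensor("out1")],
)

# Rename each output tensor after the layer, instead of appending every
# output name to the layer name (which would make the name very long).
for i in range(layer.num_outputs):
    layer.get_output(i).name = f"{layer.name}-output_{i}"

print(layer.get_output(0).name)
# -> [ELEMENTWISE]-[aten_ops.add]-[add_1]-output_0
```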
```diff
         else f"{source_ir}_ops.{target.__name__}"
     )
-    layer.name = f"[{layer.type.name}]-[{target_name}]-[{name}]"
+    layer.name = f"[{layer.type.name}]-[{target_name}]-[{name}]-[#_of_outputs_{layer.num_outputs}]"
```
Please also set the layer metadata: `layer.metadata = ...`
Consider moving `[#_of_outputs_{layer.num_outputs}]` into `metadata` instead of keeping it in the layer name directly.
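The metadata suggestion could look roughly like the sketch below. This is illustrative, not the actual Torch-TensorRT code: `SimpleNamespace` stands in for a real `ILayer` (recent TensorRT releases expose a `metadata` attribute on layers), and the example values are hypothetical:

```python
# Illustrative sketch: keep the layer name short and move the output count
# into the layer's metadata field. SimpleNamespace stands in for a real
# TensorRT ILayer; all values below are hypothetical examples.
from types import SimpleNamespace

layer = SimpleNamespace(name="", metadata="", num_outputs=2)

layer_type = "ELEMENTWISE"
target_name = "aten_ops.add"
name = "add_1"

layer.name = f"[{layer_type}]-[{target_name}]-[{name}]"
layer.metadata = f"#_of_outputs_{layer.num_outputs}"

print(layer.name)      # -> [ELEMENTWISE]-[aten_ops.add]-[add_1]
print(layer.metadata)  # -> #_of_outputs_2
```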
Co-authored-by: gs-olive <113141689+gs-olive@users.noreply.github.com>
Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>
Dynamo converter support for torch.ops.aten.erf.default op
Signed-off-by: Bo Wang <bowa@nvidia.com>
@gs-olive thanks! I updated the PR. Please let me know if you have any other advice.
Description
To improve debuggability, we set the layer name for each Myelin output to a more meaningful name, replacing the "Unnamed Layer #" placeholder with its original name.
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: