Neural Graphs #520
Conversation
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
This pull request introduces 4 alerts when merging 2756a66 into 3f307f7 - view on LGTM.com
Signed-off-by: Jason <jasoli@nvidia.com>
This pull request introduces 5 alerts when merging 5bad9c9 into da9760d - view on LGTM.com
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
This pull request introduces 5 alerts when merging 0576ce7 into 68e02e2 - view on LGTM.com
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
This pull request introduces 8 alerts when merging 3f1ac01 into 68e02e2 - view on LGTM.com
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
…ts failing Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
…stry, starting to work on graph unit tests Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
This pull request introduces 8 alerts when merging 9df6568 into 71e40ff - view on LGTM.com
…ere, work in progress Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
…ionality with unit/integration tests Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
This pull request introduces 6 alerts when merging 069dd10 into 71e40ff - view on LGTM.com
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
… tests that check that, minor cleanups Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
This pull request fixes 3 alerts when merging e28979e into 73a9ec5 - view on LGTM.com
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
Final review: only one last comment, about GraphOutputs. There are also 3 outstanding comments from previous reviews, so 4 comments total need to be resolved.
    def bind(self, tensors_ref: List[NmTensor], port_names: Optional[List[str]] = None):
        """
        Binds the "default" outputs.

        Args:
            tensors_ref: List of tensors to be added.
            port_names: List of port names (visible outside). If None: the internal tensor "output port names" are used.
        """
        # Set names.
        if port_names is None:
            port_names = [tensor.name for tensor in tensors_ref]

        for name, tensor in zip(port_names, tensors_ref):
            # Check the presence of the port name in the "default" dictionary.
            if name in self._default_outputs.keys():
                # Name present - use a name that is a combination of step, producer and port names.
                name = str(tensor.producer_step_number) + "_" + tensor.producer_name + "_" + tensor.name  # last = port name

                logging.warning(
                    "Setting unique name of the default output port `{}` produced in step {} by `{}` to `{}`".format(
                        tensor.name, tensor.producer_step_number, tensor.producer_name, name
                    )
                )
            # Still, "overwrite" it.
            self._default_outputs[name] = GraphOutput(tensor.ntype, tensor.producer_step_module_port)
Why do we overwrite the name? We have tensor.unique_name, why not use it?
Because it relies on a uuid, which is specific to a given process. The next time you run your script, the uuid will be different.
What happens here relates to the binding ("recording") of the "default" outputs (tensors). They are stored in a dictionary, where key = module output port name.
However, if you plug the same module into your graph twice, we do not want to overwrite the already bound tensor. So for the new one I generate a "truly unique name" (step_producer_port) that can be exported/imported/nested.
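For illustration, a minimal self-contained sketch of that collision handling follows. `FakeTensor` is a hypothetical stand-in for `NmTensor` that models only the fields the naming logic uses:

```python
# Sketch of the key-collision handling in GraphOutputs.bind().
# FakeTensor is a hypothetical stand-in for NmTensor; the real class
# also carries ntype and producer_step_module_port.
from collections import namedtuple

FakeTensor = namedtuple("FakeTensor", ["name", "producer_step_number", "producer_name"])

default_outputs = {}
tensors = [
    FakeTensor("logits", 0, "classifier"),  # first use of the module
    FakeTensor("logits", 1, "classifier"),  # same module plugged in again
]

for tensor in tensors:
    name = tensor.name
    if name in default_outputs:
        # Collision: fall back to the "truly unique" step_producer_port key,
        # which is stable across processes (unlike a uuid).
        name = f"{tensor.producer_step_number}_{tensor.producer_name}_{tensor.name}"
    default_outputs[name] = tensor

print(list(default_outputs))  # ['logits', '1_classifier_logits']
```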
P.S. There is a reason why, e.g., layers in Keras have a name property...
If you use the same module twice, it should generate a new, unique NmTensor, so I do not understand the argument about plugging the same module in twice.
Is there an issue that arises when you import and export a graph? I fail to see why "the next time you run your script, the uuid will be different" is an issue.
This pull request fixes 3 alerts when merging 693b744 into 73a9ec5 - view on LGTM.com
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
This pull request fixes 3 alerts when merging 413c0fa into 73a9ec5 - view on LGTM.com
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
…ith ObjectRegistry) from __init__ files Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
examples/start_here/graph_composition_integration_tests0_jasper.py
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
Summarizing our phone conversation:
- Please try to hide AppState for now - we'll work on it properly as part of the next PRs.
- Try to group the NeuralGraphs-related functionality under the same namespace in nemo.utils.name_of_your_choice.
- Please address the major comments from @blisc with him.
Thanks!
…tate remained in the nemo.utils Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
This pull request fixes 3 alerts when merging d2615e1 into ab3dfc4 - view on LGTM.com
Signed-off-by: Tomasz Kornuta <tkornuta@nvidia.com>
This (3rd!) PR follows the proposal from the "Neural Graphs" design doc:
https://docs.google.com/document/d/1218tRm2XtfLbYJvoepnbg3ET1CkJ72CCBTGAUZ1G7gY/edit#
Additionally, it assumes that a graph is developed for either training or inference, so it changes the mode of the "connected" modules during the graph's build.
It also adds a whole bunch of unit tests covering different aspects, from simple binding, through "nesting of a deserialized graph with bound input and output ports into a graph with different ports bound", to "a graph with a loop"...
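For readers skimming this PR, here is a rough sketch of the composition pattern it enables. It is based on the design doc and the examples/start_here scripts, so the class names (NeuralGraph, OperationMode) and the toy modules (nemo.tutorials.*) should be treated as assumptions rather than the final merged API:

```python
# Rough sketch of graph composition as proposed in the design doc.
# NeuralGraph/OperationMode and the nemo.tutorials modules are assumed
# names - they may differ slightly from what was finally merged.
import nemo
from nemo.core import NeuralGraph, OperationMode

nf = nemo.core.NeuralModuleFactory()

dl = nemo.tutorials.RealFunctionDataLayer(n=100, batch_size=32)
fx = nemo.tutorials.TaylorNet(dim=4)
loss = nemo.tutorials.MSELoss()

# Building the graph in training mode also switches the mode of every
# module "connected" inside the `with` block.
with NeuralGraph(operation_mode=OperationMode.training) as g:
    x, t = dl()
    p = fx(x=x)
    lss = loss(predictions=p, target=t)  # `lss` is recorded as a default bound output
```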