
Neural Graphs #520

Merged: 106 commits merged into master from fix-sign on May 5, 2020

Conversation

@tkornuta-nvidia (Contributor) commented Mar 24, 2020

This (3rd!) PR follows the proposal from the "Neural Graphs" design doc:
https://docs.google.com/document/d/1218tRm2XtfLbYJvoepnbg3ET1CkJ72CCBTGAUZ1G7gY/edit#

Additionally, it assumes that a graph is built either for training or for inference, so it switches the mode of the "connected" modules accordingly during graph construction. The PR adds:

  • Application State (singleton)
  • Registration of a Neural Graph
  • Recording of operation/modules forming a Graph
  • Input port binding - with default port name and option to provide a new name (manual)
  • Output port binding - with default port name and option to provide a new name (manual)
  • Graph nesting
  • Summary of graph/modules in a graph
  • Export of a graph to YML file
  • Import of a graph from YML file
  • Built-in handling of training/inference modes
  • Serialization of NeuralTypes for connections/inputs/outputs
  • Graphs with loops
  • Extended train() signature that enables passing the "training_graph"

Plus a whole bunch of unit tests covering different aspects, from simple binding, through "nesting of a deserialized graph with bound input and output ports into a graph with different ports bound", to "a graph with a loop".
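To make the intended workflow concrete, here is a rough usage sketch. It is not code from this PR; the class, module, and method names below (NeuralGraph, OperationMode, the nemo.tutorials modules, export_to_config/import_from_config, and the training_graph argument of train()) are assumptions inferred from the feature list above, so the merged API may differ.

# Hypothetical usage sketch based on the feature list above; the exact class,
# module, and method names are assumptions and may differ from the merged API.
import nemo
from nemo.core import NeuralGraph, OperationMode

nf = nemo.core.NeuralModuleFactory()

# Instantiate a few neural modules (illustrative tutorial modules).
dl = nemo.tutorials.RealFunctionDataLayer(n=10000, batch_size=128)
fx = nemo.tutorials.TaylorNet(dim=4)
loss = nemo.tutorials.MSELoss()

# Record the module calls inside a graph; building the graph switches the
# "connected" modules into training mode.
with NeuralGraph(operation_mode=OperationMode.training) as training_graph:
    x, y = dl()
    y_pred = fx(x=x)
    lss = loss(predictions=y_pred, target=y)

# Export the recorded graph to YAML and import it back later
# (the imported graph can also be nested inside another graph).
training_graph.export_to_config("training_graph.yml")
restored_graph = NeuralGraph.import_from_config("training_graph.yml")

# Extended train() signature: pass the whole training graph.
nf.train(
    training_graph=training_graph,
    optimizer="sgd",
    optimization_params={"num_epochs": 3, "lr": 0.0003},
)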

lgtm-com bot commented Mar 24, 2020

This pull request introduces 4 alerts when merging 2756a66 into 3f307f7 - view on LGTM.com

new alerts:

  • 2 for Variable defined multiple times
  • 1 for Unused import
  • 1 for Module is imported with 'import' and 'import from'

lgtm-com bot commented Apr 1, 2020

This pull request introduces 5 alerts when merging 5bad9c9 into da9760d - view on LGTM.com

new alerts:

  • 2 for Unused import
  • 2 for Variable defined multiple times
  • 1 for Module is imported with 'import' and 'import from'

lgtm-com bot commented Apr 6, 2020

This pull request introduces 5 alerts when merging 0576ce7 into 68e02e2 - view on LGTM.com

new alerts:

  • 2 for Unused import
  • 2 for Variable defined multiple times
  • 1 for Module is imported with 'import' and 'import from'

lgtm-com bot commented Apr 7, 2020

This pull request introduces 8 alerts when merging 3f1ac01 into 68e02e2 - view on LGTM.com

new alerts:

  • 4 for Unused import
  • 2 for Variable defined multiple times
  • 1 for Unnecessary pass
  • 1 for Module is imported with 'import' and 'import from'

lgtm-com bot commented Apr 8, 2020

This pull request introduces 8 alerts when merging 9df6568 into 71e40ff - view on LGTM.com

new alerts:

  • 3 for Unused import
  • 2 for Variable defined multiple times
  • 1 for __eq__ not overridden when adding attributes
  • 1 for Unnecessary pass
  • 1 for Module is imported with 'import' and 'import from'

lgtm-com bot commented Apr 9, 2020

This pull request introduces 6 alerts when merging 069dd10 into 71e40ff - view on LGTM.com

new alerts:

  • 2 for Unused import
  • 2 for Variable defined multiple times
  • 1 for __eq__ not overridden when adding attributes
  • 1 for Module is imported with 'import' and 'import from'

lgtm-com bot commented May 1, 2020

This pull request fixes 3 alerts when merging e28979e into 73a9ec5 - view on LGTM.com

fixed alerts:

  • 1 for Non-callable called
  • 1 for Unused import
  • 1 for Module is imported with 'import' and 'import from'

@blisc (Collaborator) left a comment:

Final review: only one last comment about GraphOutputs. There are also 3 outstanding comments from previous reviews, so 4 comments in total need to be resolved.

Comment on lines 146 to 172
def bind(self, tensors_ref: List[NmTensor], port_names: Optional[str] = None):
    """
    Binds the "default" outputs.

    Args:
        tensors_ref: List of tensors to be added.
        port_names: List of port names (visible outside). If None: using internal tensor "output port names".
    """
    # Set names.
    if port_names is None:
        port_names = [tensor.name for tensor in tensors_ref]

    for name, tensor in zip(port_names, tensors_ref):
        # Check the presence of the port name in the "default" dictionary.
        if name in self._default_outputs.keys():
            # Name present - use a name that is a combination of producer and port names.
            name = (
                str(tensor.producer_step_number) + "_" + tensor.producer_name + "_" + tensor.name
            )  # last = port name

            logging.warning(
                "Setting unique name of the default output port `{}` produced in step {} by `{}` to `{}`".format(
                    tensor.name, tensor.producer_step_number, tensor.producer_name, name
                )
            )
        # Still, "overwrite" it.
        self._default_outputs[name] = GraphOutput(tensor.ntype, tensor.producer_step_module_port)
@blisc (Collaborator):

Why do we overwrite the name? We have tensor.unique_name, why not use it?

@tkornuta-nvidia (Contributor, Author):

Because it relies on a uuid, which is specific to a given process. The next time you run your script, the uuid will be different.

What happens here relates to the binding ("recording") of "default" outputs (tensors). They are stored in a dictionary whose key is the module output port name.

However, if you plug the same module twice into your graph, we do not want to overwrite the already bound tensor. So for the new one I generate a "truly unique name" (step_producer_port) that can be exported/imported/nested.
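To make the collision case concrete, here is a small self-contained sketch of the fallback described above; the Tensor stand-in class, its field names, and bind_defaults are simplified illustrations, not the actual NmTensor/GraphOutput implementation.

# Standalone illustration of the naming fallback described above; Tensor is a
# simplified stand-in, not the actual NmTensor/GraphOutput implementation.
from dataclasses import dataclass


@dataclass
class Tensor:
    name: str                  # module output port name
    producer_name: str         # name of the producing module instance
    producer_step_number: int  # step at which the module was called


def bind_defaults(tensors):
    """Binds default outputs, renaming on port-name collisions."""
    outputs = {}
    for tensor in tensors:
        key = tensor.name
        if key in outputs:
            # The same module (hence the same port name) was used twice in the
            # graph: fall back to a "truly unique" step_producer_port key.
            key = f"{tensor.producer_step_number}_{tensor.producer_name}_{tensor.name}"
        outputs[key] = tensor
    return outputs


# The same encoder called at steps 0 and 2 produces two "hidden" tensors.
t0 = Tensor(name="hidden", producer_name="encoder", producer_step_number=0)
t1 = Tensor(name="hidden", producer_name="encoder", producer_step_number=2)
print(list(bind_defaults([t0, t1])))  # ['hidden', '2_encoder_hidden']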

@tkornuta-nvidia (Contributor, Author):

P.S. There is a reason why e.g. layers in Keras have a name property...

@blisc (Collaborator):

If you use the same module twice, it should generate a new unique NmTensor, so I do not understand the argument for plugging the same module twice.

Is there an issue that arises when you import and export a graph? I fail to see why "the next time you run your script, the uuid will be different" is an issue.

lgtm-com bot commented May 2, 2020

This pull request fixes 3 alerts when merging 693b744 into 73a9ec5 - view on LGTM.com

fixed alerts:

  • 1 for Non-callable called
  • 1 for Unused import
  • 1 for Module is imported with 'import' and 'import from'

lgtm-com bot commented May 2, 2020

This pull request fixes 3 alerts when merging 413c0fa into 73a9ec5 - view on LGTM.com

fixed alerts:

  • 1 for Non-callable called
  • 1 for Unused import
  • 1 for Module is imported with 'import' and 'import from'

@tkornuta-nvidia requested a review from blisc May 2, 2020 04:00

@okuchaiev (Member) previously approved these changes May 4, 2020 and left a comment:

Summarizing our phone conversation:

  1. Please try to hide AppState for now - we'll work on it properly as part of the next PRs.
  2. Try to group the NeuralGraph-related stuff under the same namespace in nemo.utils.name_of_your_choice.
  3. Please address the major comments from @blisc with him.

Thanks!

lgtm-com bot commented May 5, 2020

This pull request fixes 3 alerts when merging d2615e1 into ab3dfc4 - view on LGTM.com

fixed alerts:

  • 1 for Non-callable called
  • 1 for Unused import
  • 1 for Module is imported with 'import' and 'import from'

@blisc previously approved these changes May 5, 2020
@okuchaiev previously approved these changes May 5, 2020
@tkornuta-nvidia dismissed stale reviews from okuchaiev and blisc via da91ec0 May 5, 2020 20:02
@tkornuta-nvidia merged commit 6e062ec into master May 5, 2020
@tkornuta-nvidia deleted the fix-sign branch May 7, 2020 22:19