This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-652] Allow for multi-context input to load gluon models from onnx #11637

Merged
merged 3 commits into from
Jul 11, 2018

Conversation

OneRaynyDay
Contributor

@OneRaynyDay OneRaynyDay commented Jul 10, 2018

Description

Currently, Airbnb is developing applications that use MXNet (particularly Gluon). We use the mxnet.contrib.onnx package, which supports importing Gluon models from the ONNX format. As this is an extremely important part of our pipeline, we want to add support for multi-GPU imports, which is currently missing.

This is actually a very simple change, because _load_init() already accepts either a Context object or an iterable of Contexts directly. However, the current implementation of import_to_gluon expects a str of 'CPU' or 'GPU'. This is not only inconsistent with other parts of the mxnet codebase, but also does not allow us to specify which GPUs we want to load the model into. I am aware that this was done to comply with ONNX, but that abstraction should be handled at the abstraction layer, not in the core utility for importing ONNX to Gluon.
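The change leans on the fact that Gluon's parameter loading already accepts either a single context or an iterable of them. Below is a minimal, self-contained sketch of that dispatch pattern; the `Context` class and `normalize_ctx` helper here are hypothetical stand-ins for illustration, not MXNet's actual implementation:

```python
class Context:
    """Stand-in for mxnet.Context (e.g. what mx.cpu() or mx.gpu(0) return)."""
    def __init__(self, device_type, device_id=0):
        self.device_type = device_type
        self.device_id = device_id

    def __repr__(self):
        return f"{self.device_type}({self.device_id})"


def normalize_ctx(ctx):
    """Accept a single Context or an iterable of Contexts, mirroring how
    Gluon's _load_init can take either form. Note that a bare string like
    'GPU' cannot name a specific device, which is the limitation removed
    by this PR."""
    if isinstance(ctx, Context):
        return [ctx]
    return list(ctx)


# A single context or a multi-GPU list both normalize to a list of devices:
print(normalize_ctx(Context("cpu")))                          # [cpu(0)]
print(normalize_ctx([Context("gpu", 0), Context("gpu", 1)]))  # [gpu(0), gpu(1)]
```

With this pattern, a caller can pass the equivalent of `mx.cpu()`, `mx.gpu(0)`, or a list such as `[mx.gpu(0), mx.gpu(1)]`, none of which a bare `'CPU'`/`'GPU'` string argument can express.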

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-652], where 652 refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Modified gluon model loading from ONNX to allow multi-context argument.

Comments

  • Currently, mxnet uses ctx as an argument name throughout most of the core library. There doesn't seem to be any significant reason why the contrib.onnx code uses a str instead; it appears to be a minor oversight that passed PR review.

@OneRaynyDay OneRaynyDay requested a review from szha as a code owner July 10, 2018 22:48
@@ -157,15 +156,15 @@ def get_graph_metadata(self, graph):
}
return metadata

-    def graph_to_gluon(self, graph, context):
+    def graph_to_gluon(self, graph, ctx):
Member

Thanks for this fix as we're already using ctx everywhere in gluon.

Contributor Author

No problem. Thank you for reviewing! I appreciate the merge.

@szha szha merged commit f665c13 into apache:master Jul 11, 2018
@OneRaynyDay OneRaynyDay deleted the allow-multi-gpu-load-from-onnx branch July 11, 2018 06:08
XinYao1994 pushed a commit to XinYao1994/incubator-mxnet that referenced this pull request Aug 29, 2018
…nnx (apache#11637)

* Switch from string format to context objects in load from onnx

* Lint

* Addressed tests