Add ONNXProgram.__call__ API to run model with ONNX Runtime #113495
Commits on Nov 10, 2023
-
Add ONNXProgram.__call__ API to run model with ONNX Runtime
Currently the user can use `torch.onnx.dynamo_export` to export a model to ONNX:

```python
import torch

class Model(torch.nn.Module):
    def forward(self, x):
        return x + 1.0

onnx_program = torch.onnx.dynamo_export(
    Model(),
    torch.randn(1, 1, 2, dtype=torch.float),
)
```

The next step would be instantiating an ONNX Runtime session to execute it:

```python
import onnxruntime  # type: ignore[import]

onnx_input = self.adapt_torch_inputs_to_onnx(*args, **kwargs)
options = options or {}
providers = options.get("providers", onnxruntime.get_available_providers())
onnx_model = self.model_proto.SerializeToString()
ort_session = onnxruntime.InferenceSession(onnx_model, providers=providers)

def to_numpy(tensor):
    # Tensors that require grad must be detached before conversion to NumPy.
    return (
        tensor.detach().cpu().numpy()
        if tensor.requires_grad
        else tensor.cpu().numpy()
    )

# Bind each ONNX graph input name to the corresponding adapted input.
onnxruntime_input = {
    k.name: to_numpy(v) for k, v in zip(ort_session.get_inputs(), onnx_input)
}
return ort_session.run(None, onnxruntime_input)
```

This PR provides the `ONNXProgram.__call__` method as a facilitator that uses ONNX Runtime under the hood, similar to `torch.export.ExportedProgram.__call__`, which allows the underlying `torch.fx.GraphModule` to be executed.

[ghstack-poisoned]
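The dictionary comprehension above pairs each ONNX graph input with its positional tensor by `zip`-ing the session's input metadata against the adapted inputs. A minimal standalone sketch of that binding step, using a hypothetical `SessionInput` stand-in for ONNX Runtime's real `NodeArg` objects:

```python
from dataclasses import dataclass

@dataclass
class SessionInput:
    # Stand-in for onnxruntime's NodeArg: only the input name matters here.
    name: str

def bind_inputs(session_inputs, args):
    # Pair each graph input name with the corresponding positional argument,
    # mirroring the zip-based comprehension in the snippet above.
    return {meta.name: value for meta, value in zip(session_inputs, args)}

inputs = [SessionInput("x"), SessionInput("y")]
print(bind_inputs(inputs, [1.0, 2.0]))  # {'x': 1.0, 'y': 2.0}
```

Because `zip` stops at the shorter sequence, the binding silently drops extra arguments; the real implementation relies on the adapter producing exactly one value per graph input.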
Thiago Crepaldi committed Nov 10, 2023 · 6790d4f
Commits on Nov 13, 2023
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 13, 2023 · 3e6a4c4
Commits on Nov 14, 2023
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 14, 2023 · 9ff23aa
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 14, 2023 · dd29573
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 14, 2023 · 941365e
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 14, 2023 · 6efecf2
Commits on Nov 15, 2023
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 15, 2023 · 15ef2d3
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 15, 2023 · 32346e0
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 15, 2023 · 04cab4e
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 15, 2023 · e5814a1
Commits on Nov 16, 2023
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 16, 2023 · b69c023
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 16, 2023 · 21472bb
Commits on Nov 17, 2023
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 17, 2023 · 608d53d
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 17, 2023 · 01629cc
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 17, 2023 · 9dcfbee
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 17, 2023 · d6e7801
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 17, 2023 · b27f472
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 17, 2023 · 25aa9cf
Commits on Nov 20, 2023
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 20, 2023 · f5b9e17
Commits on Nov 21, 2023
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 21, 2023 · d5a49b1
-
Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"
Thiago Crepaldi committed Nov 21, 2023 · 25989b8