Add ONNXProgram.__call__ API to run model with ONNX Runtime #113495

Closed

Commits on Nov 10, 2023

  1. Add ONNXProgram.__call__ API to run model with ONNX Runtime

    Currently the user can use `torch.onnx.dynamo_export` to export the model
    to ONNX.
    
    ```python
    import torch
    
    class Model(torch.nn.Module):
        def forward(self, x):
            return x + 1.0
    
    onnx_program = torch.onnx.dynamo_export(
        Model(),
        torch.randn(1, 1, 2, dtype=torch.float),
    )
    ```
    
    The next step is to instantiate an ONNX Runtime inference session to execute it.
    
    ```python
    import onnxruntime  # type: ignore[import]

    # Create an example input and adapt it to the flat list of inputs
    # the exported ONNX graph expects.
    example_input = torch.randn(1, 1, 2, dtype=torch.float)
    onnx_input = onnx_program.adapt_torch_inputs_to_onnx(example_input)
    providers = onnxruntime.get_available_providers()  # or pass an explicit list
    onnx_model = onnx_program.model_proto.SerializeToString()
    ort_session = onnxruntime.InferenceSession(onnx_model, providers=providers)

    def to_numpy(tensor):
        # ONNX Runtime consumes numpy arrays rather than torch tensors.
        return (
            tensor.detach().cpu().numpy()
            if tensor.requires_grad
            else tensor.cpu().numpy()
        )

    # Feed each graph input by name.
    onnxruntime_input = {
        k.name: to_numpy(v) for k, v in zip(ort_session.get_inputs(), onnx_input)
    }

    onnxruntime_output = ort_session.run(None, onnxruntime_input)
    ```
    
    This PR provides the `ONNXProgram.__call__` method as a facilitator that
    runs the model with ONNX Runtime under the hood, similar to how
    `torch.export.ExportedProgram.__call__` allows the underlying
    `torch.fx.GraphModule` to be executed.
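
    With this API, the boilerplate above collapses to a single call. A minimal
    usage sketch (the input tensor is illustrative, continuing the example
    above):

    ```python
    onnx_output = onnx_program(torch.randn(1, 1, 2, dtype=torch.float))
    ```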
    
    [ghstack-poisoned]
    Thiago Crepaldi committed Nov 10, 2023 (6790d4f)

Commits on Nov 13, 2023

  1. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 13, 2023 (3e6a4c4)

Commits on Nov 14, 2023

  1. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 14, 2023 (9ff23aa)
  2. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 14, 2023 (dd29573)
  3. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 14, 2023 (941365e)
  4. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 14, 2023 (6efecf2)

Commits on Nov 15, 2023

  1. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 15, 2023 (15ef2d3)
  2. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 15, 2023 (32346e0)
  3. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 15, 2023 (04cab4e)
  4. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 15, 2023 (e5814a1)

Commits on Nov 16, 2023

  1. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 16, 2023 (b69c023)
  2. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 16, 2023 (21472bb)

Commits on Nov 17, 2023

  1. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 17, 2023 (608d53d)
  2. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 17, 2023 (01629cc)
  3. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 17, 2023 (9dcfbee)
  4. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 17, 2023 (d6e7801)
  5. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 17, 2023 (b27f472)
  6. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 17, 2023 (25aa9cf)

Commits on Nov 20, 2023

  1. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 20, 2023 (f5b9e17)

Commits on Nov 21, 2023

  1. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 21, 2023 (d5a49b1)
  2. Update on "Add ONNXProgram.__call__ API to run model with ONNX Runtime"

    Thiago Crepaldi committed Nov 21, 2023 (25989b8)