
ONNX To Pytorch Conversion #168

Open
wants to merge 16 commits into main

Conversation

SuperSecureHuman
Contributor

Addresses onnx to torch conversion from - #133

@SuperSecureHuman
Contributor Author

SuperSecureHuman commented Feb 6, 2023

[screenshot: error traceback]

I have no idea where this error originates; any help would be appreciated.

Manually invoking the conversion works, so I am not sure where the input is expected to be a str/path.

The convert function can take a ModelProto, but the tests are giving it ModelParams (I think).

Edit: This is fixed now

@valeriosofi
Collaborator

valeriosofi commented Feb 15, 2023

Hi @SuperSecureHuman, I tried an optimization with the onnx->torch conversion on my local machine and found two issues:

  • In convert_onnx_to_torch() you save the model to disk and return the path to it, but nebullvm expects the model to be a torch.nn.Module in the later steps, so I would modify the function like this:
    try:
        torch_model = torch.fx.symbolic_trace(convert(onnx_model))
        return torch_model
    except Exception as e:
        logger.warning(
            "Exception raised during conversion of ONNX to Pytorch. "
            "ONNX to Torch pipeline will be skipped."
        )
        logger.warning(e)
        return None

I also had to add torch.fx.symbolic_trace, because otherwise the conversion to TorchScript didn't work in the PyTorch pipeline.

  • The PytorchBackendInferenceLearner expects its inputs to be PyTorch tensors, but in this case they will be NumPy arrays. We should implement a NumpyPytorchBackendInferenceLearner class that converts the np arrays to torch tensors before calling the PytorchBackendInferenceLearner run method, and then converts the result back to a np array. We do the same thing in the other inference learners (see for example the ONNXInferenceLearner: there are three additional classes implemented, PytorchONNXInferenceLearner, TensorflowONNXInferenceLearner, and NumpyONNXInferenceLearner).
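For illustration, a minimal sketch of the wrapper described in the second point. This is not the nebullvm implementation; the class name follows the naming convention mentioned above, and the run signature is a simplified assumption:

    # Hedged sketch: wrap a torch.nn.Module so callers can pass and
    # receive NumPy arrays instead of torch tensors.
    import numpy as np
    import torch


    class NumpyPytorchBackendInferenceLearner:
        """Converts np.ndarray inputs to tensors, runs the torch model,
        and converts the outputs back to np.ndarray."""

        def __init__(self, torch_model: torch.nn.Module):
            self.torch_model = torch_model.eval()

        def run(self, *input_arrays: np.ndarray):
            # np.ndarray -> torch.Tensor
            input_tensors = [torch.from_numpy(arr) for arr in input_arrays]
            with torch.no_grad():
                outputs = self.torch_model(*input_tensors)
            if isinstance(outputs, torch.Tensor):
                outputs = (outputs,)
            # torch.Tensor -> np.ndarray
            return tuple(out.cpu().numpy() for out in outputs)


    # Usage with a toy model
    model = torch.nn.Linear(4, 2)
    learner = NumpyPytorchBackendInferenceLearner(model)
    (result,) = learner.run(np.random.randn(3, 4).astype(np.float32))
    print(type(result).__name__, result.shape)  # ndarray (3, 2)

The real class would presumably inherit from a shared inference-learner base, as the ONNX variants do.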

Can you please solve these two points? I would do it myself, but I'm working on stable diffusion right now and don't have much time. Thanks ;)

@SuperSecureHuman SuperSecureHuman marked this pull request as draft February 15, 2023 15:20
@SuperSecureHuman SuperSecureHuman marked this pull request as ready for review February 15, 2023 15:21