Output specifications support for Pytorch converter #775
Comments
Agreed.
Any update on this?
I have the same problem.
I had the same problem, and I would love to be able to specify the OUTPUT tensor transpose and scaling in the convert method, but I solved this directly in the model by adding the permute and normalization in the forward call. The following is a PyTorch snippet, but you can do the same in TF:
def forward(self, inputs):
    # NHWC image input -> NCHW layout expected by the network
    inputs = inputs.permute(0, 3, 1, 2)
    # scale pixel values from [0, 255] to [-1, 1]
    inputs = inputs / 127.5 - 1
    # call regular model (self.model stands for the wrapped network)
    output = self.model(inputs)
    # NCHW -> NHWC, then rescale back to [0, 255]
    output = output.permute(0, 2, 3, 1)
    output = (output + 1) * 127.5
    output = output.clamp(0.0, 255.0)
    return output
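With the pre/post-processing baked into forward(), the conversion itself then needs no output specification. A minimal sketch of that conversion step, assuming a wrapped_model instance of the module above (the name and input shape are illustrative, not from the original comment):

```python
import torch
import coremltools as ct

# wrapped_model: an instance of the wrapper module sketched above (hypothetical name)
example = torch.rand(1, 224, 224, 3)  # NHWC example input, shape illustrative
traced = torch.jit.trace(wrapped_model, example)
mlmodel = ct.convert(traced, inputs=[ct.TensorType(name="inputs", shape=example.shape)])
```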
Any update on this?
Agree.
Any progress?
I must say that
Turns out using
Only thing I'm not sure about is whether, if you have multiple outputs, the outputs in
This is pretty hacky; I would still like to see a proper implementation for this, but at least it does the trick.
The lack of this feature is causing a performance issue when running inference, as I need to parse the multiArrayType by hand to get it back to being an image. I am also seeing an issue where, if I set the input to be an image and use a Sequential model, it stops outputting the model at a certain point. I might put together reproduction steps and submit them privately.
Any official support for outputting imageType?
The issue raised here by @mxkrn is also a big standout problem for me. I have models being automatically generated and trained according to a config, and each has varying output heads. There can be a different number of outputs. coremltools
I've tried many different workarounds (having the module return a dict or namedtuple, pytorch->onnx->coreml conversion instead of pytorch->coreml) - nothing works!
Edit: just for reference, it seems that, at least in my specific case, coreml
🌱 Describe your Feature Request
As the documentation explicitly points out: "it is good practice to think about the interface to the model. This includes names and types of inputs and outputs".
When converting a model from PyTorch, one simply cannot explicitly set the output type, name, and size. Those specifications are also very useful when we need to scale or preprocess inputs and outputs.
Use case
In my case I need my model to output an imageType instead of a multiArray. I'd like to provide specifications like this:
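A minimal sketch of what such a call could look like, assuming coremltools' unified ct.convert API (the traced model, names, shape, and scale/bias values below are illustrative, not the original snippet):

```python
import coremltools as ct

# traced_model: a torch.jit.trace'd version of the PyTorch model (hypothetical name)
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.ImageType(name="input", shape=(1, 3, 256, 256), scale=1 / 127.5, bias=[-1, -1, -1])],
    # requesting an image output instead of the default MLMultiArray
    outputs=[ct.ImageType(name="output")],
)
```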
but get the following error:
ValueError: outputs must not be specified for PyTorch
Alternative
The alternative for getting the output specifications right is to manually change them through the mlmodel spec:
mlmodel_modif = coremltools.models.MLModel(spec)
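A fuller sketch of that spec-editing approach, assuming the protobuf spec API that coremltools exposes (the color space and image dimensions are illustrative; the block ends with the MLModel(spec) re-wrap shown above):

```python
import coremltools
from coremltools.proto import FeatureTypes_pb2 as ft

spec = mlmodel.get_spec()  # mlmodel: the converted model with a multiArrayType output
output = spec.description.output[0]
# swap the multiArrayType description for an imageType one
output.type.imageType.colorSpace = ft.ImageFeatureType.ColorSpace.Value("RGB")
output.type.imageType.width = 256
output.type.imageType.height = 256
mlmodel_modif = coremltools.models.MLModel(spec)
```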
But when calling predict on this new model I got the following error:

System environment: