
Output specifications support for PyTorch converter #775

Open
virgile-blg opened this issue Jul 9, 2020 · 12 comments
Labels
feature request Functionality does not currently exist, would need to be created as a new feature (type) PyTorch (traced)

Comments

@virgile-blg

🌱 Describe your Feature Request

As the documentation explicitly points out: "it is good practice to think about the interface to the model. This includes names and types of inputs and outputs".
When converting a model from PyTorch, however, one cannot explicitly set the output type, name, and size. These specifications are also very useful when we need to scale / preprocess inputs and outputs.

Use case

In my case I need my model to output an imageType instead of a multiArray. I'd like to provide specifications like this:

mlmodel = coremltools.convert(model,
                    inputs=[ct.ImageType(name="input", shape=(1, 3, 1024, 1024), scale=1/255)],
                    outputs=[ct.ImageType(name="output", shape=(1, 3, 1024, 1024), scale=255)])

but I get the following error: ValueError: outputs must not be specified for PyTorch

Alternative

The alternative way to get the output specifications right is to manually change them through the mlmodel spec:

  1. Specify the multiArray shape:

spec = mlmodel.get_spec()
spec.description.output[0].type.multiArrayType.shape.append(3)
spec.description.output[0].type.multiArrayType.shape.append(1024)
spec.description.output[0].type.multiArrayType.shape.append(1024)

  2. Set the output type to imageType:

from coremltools.proto import FeatureTypes_pb2 as ft

for output in spec.description.output:

    # Read the shape first: the feature type is a protobuf oneof,
    # so touching imageType below clears multiArrayType
    channels, height, width = tuple(output.type.multiArrayType.shape)

    # Change the output type to an RGB image
    output.type.imageType.colorSpace = ft.ImageFeatureType.ColorSpace.Value('RGB')

    # Set the image shape
    output.type.imageType.width = width
    output.type.imageType.height = height

  3. Convert the modified spec back into a new mlmodel:

mlmodel_modif = coremltools.models.MLModel(spec)

But when calling predict on this new model, I get the following error:

RuntimeError: {
    NSLocalizedDescription = "Batch or sequence image output is unsupported for image output 1223";
}

System environment:

  • coremltools 4.0b1
  • PyTorch 1.5
  • macOS 10.15.4
  • Xcode 11.5
  • Python 3.7
@leovinus2001
Contributor

Agreed.

  1. In coremltools v4.0b1, the documentation for TF -> Core ML conversion does allow output naming, so it should be possible to do the same for PyTorch -> Core ML:
    https://coremltools.readme.io/docs/tensorflow-conversion-examples
  2. I could be wrong, but I seem to remember that coremltools v3 did allow output naming via PyTorch -> ONNX -> Core ML, which would make the v4.0b1 behavior for PyTorch a regression.

@1duo 1duo added PyTorch (traced) feature request Functionality does not currently exist, would need to be created as a new feature (type) labels Jul 9, 2020
@mushipand

Any update on this?

@xtxt

xtxt commented Sep 8, 2020

I have the same problem.

@JacopoMangiavacchi

I had the same problem, and I would love to be able to specify output tensor transposes and scaling in the convert method. In the meantime, I solved this directly in the model by adding the permute and normalization steps to the forward call.

The following is a PyTorch snippet but you can do the same in TF:

    def forward(self, inputs):
        inputs = inputs.permute(0, 3, 1, 2)
        inputs = inputs / 127.5 - 1

        # call regular model 

        output = output.permute(0, 2, 3, 1)
        output = (output + 1) * 127.5
        output = output.clamp(0.0, 255.0)
        return output
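If you'd rather not modify the original model class, the same idea can be written as a wrapper module that is traced and converted in place of the original. This is a sketch; DeploymentWrapper is a name I made up, and core stands for any NCHW image-to-image model:

```python
import torch


class DeploymentWrapper(torch.nn.Module):
    """Wrap an existing model so that input/output layout and scaling
    live inside the traced graph and survive conversion."""

    def __init__(self, core):
        super().__init__()
        self.core = core

    def forward(self, x):
        # x: NHWC tensor with values in [0, 255]
        x = x.permute(0, 3, 1, 2)      # NHWC -> NCHW
        x = x / 127.5 - 1.0            # [0, 255] -> [-1, 1]
        y = self.core(x)
        y = y.permute(0, 2, 3, 1)      # NCHW -> NHWC
        return ((y + 1.0) * 127.5).clamp(0.0, 255.0)
```

The wrapped module can then be passed to torch.jit.trace and ct.convert as usual, without retraining or editing the original class.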

@virgile-blg
Author

Any update on this?

@leegang

leegang commented Nov 28, 2020

Agree.

@chinsyo

chinsyo commented Dec 7, 2020

Any progress?

@igiloh

igiloh commented Dec 23, 2020

I must say that ct.utils.rename_feature() is not a good enough workaround, since you need to know the arbitrary name the conversion gave to each output in order to change it. And any small change to the model changes that arbitrary name, making automated renaming impossible.
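For reference, the auto-generated names can at least be read back from the protobuf spec, which makes renaming scriptable even if the names themselves are arbitrary. A minimal sketch; output_names is a helper I made up, and spec is the object returned by MLModel.get_spec():

```python
def output_names(spec):
    # List the (possibly auto-generated) output feature names from a
    # Core ML protobuf spec, in the order the spec declares them.
    # Each entry in description.output is a FeatureDescription with
    # a .name field (e.g. "var_758").
    return [out.name for out in spec.description.output]
```

These names can then be fed straight into ct.utils.rename_feature rather than hard-coding them.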

@mxkrn

mxkrn commented Mar 23, 2021

It turns out that using coremltools.models.utils.rename_feature(), in combination with accessing the output elements in the protobuf specification of the model, allows you to dynamically rename the outputs.

#!/usr/bin/env python3
import coremltools as ct

# get model specification
model_path = "path/to/model.mlmodel"
mlmodel = ct.models.MLModel(str(model_path))
spec = mlmodel.get_spec()

# get the list of current output feature descriptions
current_output_names = mlmodel.output_description._fd_spec

# rename first output in list to new_output_name
old_name = current_output_names[0].name
new_name = "output"
ct.utils.rename_feature(
    spec, old_name, new_name, rename_outputs=True
)

# overwrite the existing model spec with the new renamed spec
new_model = ct.models.MLModel(spec)
new_model.save(model_path)

The only thing I'm not sure about is whether, when you have multiple outputs, the outputs in current_output_names are ordered the same way as the return values of the forward function in the PyTorch model.

This is pretty hacky; I would still like to see a proper implementation for this, but at least it does the trick.

@kognat-docs

The lack of this feature is causing a performance issue when running inference on a model, as I need to parse the multiArrayType by hand to get it back to being an image.

I am also seeing an issue where, if I set the input to be an image and use a Sequential model, it stops outputting the model at a certain point.

I might put together reproduction steps and submit them privately.

@dragen1860

Is there any official support for ImageType outputs yet?

@nlml

nlml commented Feb 17, 2022

The only thing I'm not sure about is whether, when you have multiple outputs, the outputs in current_output_names are ordered the same way as the return values of the forward function in the PyTorch model.

The issue raised here by @mxkrn is also a big standout problem for me.

I have models being automatically generated and trained according to a config, and each has varying output heads; there can be a different number of outputs. The coremltools convert method, applied to the torch.jit.trace object, gives output names like var_758 and var_764. I can rename these to the names of the outputs, but how can I be sure of the order?

I've tried many different workarounds (having the module return a dict or namedtuple, PyTorch -> ONNX -> Core ML conversion instead of PyTorch -> Core ML), but nothing works!

Edit: just for reference, it seems that, at least in my specific case, the Core ML var_xxx output names are in the same order as the tuple returned by the PyTorch trace. So using the utils to rename the variables in alphabetical order may work... still far from ideal, though.
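If the spec order really does match the trace's return order (an observation in my case, not something coremltools documents), the renaming reduces to an order-preserving mapping that can be fed to ct.utils.rename_feature. A sketch; build_rename_map and the names "logits" / "mask" are hypothetical:

```python
def build_rename_map(auto_names, desired_names):
    """Pair auto-generated output names (e.g. var_758) with the
    desired names, position by position. Assumes both lists follow
    the order of the traced forward()'s return tuple, which is an
    observation here, not a documented guarantee."""
    if len(auto_names) != len(desired_names):
        raise ValueError("output count mismatch")
    return dict(zip(auto_names, desired_names))


# Usage sketch (hypothetical names):
#   spec = mlmodel.get_spec()
#   auto = [out.name for out in spec.description.output]
#   for old, new in build_rename_map(auto, ["logits", "mask"]).items():
#       ct.utils.rename_feature(spec, old, new, rename_outputs=True)
```

The length check at least turns a silent mis-pairing into a loud failure when the model's head count changes.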
