This repository has been archived by the owner on Feb 7, 2023. It is now read-only.

ONNX dynamic_axes cause compile time errors in final CoreML model #565

Open
BorisKourt opened this issue Apr 17, 2020 · 2 comments
Labels
bug Unexpected behaviour that should be corrected (type)

Comments

@BorisKourt

🐞Describe the bug

If an ONNX model is exported with dynamic_axes, then the resulting CoreML model surfaces the following compilation errors in Xcode:

 Description of image feature 'input_image' has missing or non-positive width 0.

and

 Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.

Note that this happens regardless of whether flexible inputs/outputs are specified during the onnx-coreml conversion step with:

    img_size_ranges.add_height_range((64, -1)) # or (64, 2045), etc.
    img_size_ranges.add_width_range((64, -1))  # or (64, 2045), etc.
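For reference, the `-1` upper bound in these ranges conventionally means "no upper limit". A minimal pure-Python sketch of that semantics (illustrative only, not coremltools API; the helper name is made up):

```python
def in_declared_range(side, bounds):
    """True if a side length satisfies (lower, upper); upper == -1 means unbounded."""
    lower, upper = bounds
    return side >= lower and (upper == -1 or side <= upper)

height_range = (64, -1)
print(in_declared_range(64, height_range))    # True  (at the lower bound)
print(in_declared_range(2048, height_range))  # True  (no upper bound)
print(in_declared_range(32, height_range))    # False (below the minimum)
```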

More details

If dynamic_axes and the img_size_ranges are both removed, the model operates correctly, though it can then only accept a single image size.

If dynamic_axes are removed but the img_size_ranges are still used as above, the compilation errors above go away, but a runtime error is introduced:

Finalizing CVPixelBuffer 0x282f4c5a0 while lock count is 1.
[espresso] [Espresso::handle_ex_plan] exception=Invalid X-dimension 1/480 status=-7
[coreml] Error binding image input buffer input_image: -7
[coreml] Failure in bindInputsAndOutputs.

More details and additional discussion are available at: https://stackoverflow.com/questions/61231340/input-input-image-of-layer-63-not-found-in-any-of-the-outputs-of-the-preceed

Specific Functions

The following code was used to create the ONNX model and then convert it to CoreML:

import torch
import transformer  # local module providing TransformerNetwork

def create_onnx(name):
    prior = torch.load("pth/" + name + ".pth")
    model = transformer.TransformerNetwork()
    model.load_state_dict(prior)

    dummy_input = torch.zeros(1, 3, 64, 64) # I wasn't sure what I would set the H W to here?

    # torch.onnx.export writes the .onnx file itself; no separate save step is needed.
    torch.onnx.export(model, dummy_input, "onnx/" + name + ".onnx",
                      verbose=True,
                      opset_version=10,
                      input_names=["input_image"], # These are being renamed from garbled originals.
                      output_names=["stylized_image"], # ^
                      dynamic_axes={'input_image':
                                    {2: 'height', 3: 'width'},
                                    'stylized_image':
                                    {2: 'height', 3: 'width'}}
                      )


import coremltools
from coremltools.models.neural_network import flexible_shape_utils
from onnx_coreml import convert

def create_coreml(name):
    mlmodel = convert(
            model="onnx/" + name + ".onnx",
            preprocessing_args={'is_bgr': True},
            deprocessing_args={'is_bgr': True},
            image_input_names=['input_image'],
            image_output_names=['stylized_image'],
            minimum_ios_deployment_target='13'
            )

    spec = mlmodel.get_spec()

    img_size_ranges = flexible_shape_utils.NeuralNetworkImageSizeRange()

    img_size_ranges.add_height_range((64, -1))
    img_size_ranges.add_width_range((64, -1))

    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='input_image',
        size_range=img_size_ranges)

    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='stylized_image',
        size_range=img_size_ranges)

    mlmodel = coremltools.models.MLModel(spec)

    mlmodel.save("mlmodel/" + name + ".mlmodel")

System environment (please complete the following information):

  • coremltools version: 3.3
  • onnx-coreml version: 1.2
  • macOS version (if applicable): 10.15.4
  • python version: 3.7
@BorisKourt BorisKourt added the bug Unexpected behaviour that should be corrected (type) label Apr 17, 2020
@bhushan23
Collaborator

@BorisKourt using flexible_shape_utils is the right way to add different image size inputs.
How are you running your model? What input are you providing?

@BorisKourt
Author

@bhushan23, thanks for your prompt reply. Here is the simplified code that references or interacts with the model (tell me if you need a particular part; I can also try to make a minimal reproducible example):

Load:

let bundle = Bundle(for: styleTransferModel.self)
let modelURL = bundle.url(forResource: modelName, withExtension:"mlmodelc")!
let theModel = try? styleTransferModel(contentsOf: modelURL)
var prediction: styleTransferModelOutput?

Prediction call:

guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return  }
prediction = try theModel?.prediction(input_image: imageBuffer)

Note that all of this code works if the model is not flexible.
The size of the input image is 480x640 in all cases.


I have also found this issue at coremltools: apple/coremltools#276, which seems to have a similar runtime error. Unfortunately it seems like a bit of a dead end; what do you think?
