
Support multiple outputs in extract_submodel for >=iOS 16 #2270

Open · wants to merge 1 commit into base: main
Conversation

@smpanaro (Contributor) commented Jul 8, 2024

Using `extract_submodel` from `debugging_utils` to add additional outputs fails when the deployment target is >= iOS 16.

Running this script:

```python
import coremltools as ct
from coremltools.converters.mil.debugging_utils import extract_submodel
import torch
from torch import nn
import numpy as np

class Net(nn.Module):
    def forward(self, x):
        x = x * x

        chunks = x.chunk(5, dim=-1)
        transformed = []
        for i in range(len(chunks)):
            transformed.append(chunks[i] * i)

        x = torch.cat(transformed, dim=-1)
        x = x ** 0.5
        return x

sample_input = torch.randn(1, 32, 1, 512)
full_model = ct.convert(torch.jit.trace(Net().eval(), sample_input),
                        inputs=[ct.TensorType(shape=sample_input.shape, dtype=np.float16)],
                        minimum_deployment_target=ct.target.iOS16,
                        convert_to="mlprogram")
print("Full model:")
print(full_model._mil_program)
full_model.save("full_model.mlpackage")

# var_22 is the original output. var_15_cast_fp16 is an intermediate tensor that is being added as an output.
submodel = extract_submodel(full_model, outputs=["var_22", "var_15_cast_fp16"])
print("Submodel:")
print(submodel._mil_program)
submodel.save("submodel.mlpackage")
```

On coremltools 8.0b1:

```
Full model:

main[CoreML6](%x_1: (1, 32, 1, 512, fp16)(Tensor)) {
  block0() {
    %x_cast_fp16: (1, 32, 1, 512, fp16)(Tensor) = mul(x=%x_1, y=%x_1, name="x_cast_fp16")
    %var_3_cast_fp16_0: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_1: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_2: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_3: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_4: (1, 32, 1, 100, fp16)(Tensor) = split(x=%x_cast_fp16, split_sizes=[103, 103, 103, 103, 100], axis=-1, name="op_3_cast_fp16")
    %var_9_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_0, y=0.0, name="op_9_cast_fp16")
    %var_13_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_2, y=2.0, name="op_13_cast_fp16")
    %var_15_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_3, y=3.0, name="op_15_cast_fp16")
    %var_17_cast_fp16: (1, 32, 1, 100, fp16)(Tensor) = mul(x=%var_3_cast_fp16_4, y=4.0, name="op_17_cast_fp16")
    %var_20_cast_fp16: (1, 32, 1, 512, fp16)(Tensor) = concat(values=(%var_9_cast_fp16, %var_3_cast_fp16_1, %var_13_cast_fp16, %var_15_cast_fp16, %var_17_cast_fp16), axis=-1, interleave=False, name="op_20_cast_fp16")
    %var_22: (1, 32, 1, 512, fp16)(Tensor) = pow(x=%var_20_cast_fp16, y=0.5, name="op_22_cast_fp16")
  } -> (%var_22)
}

Running MIL frontend_milinternal pipeline: 0 passes [00:00, ? passes/s]
Running MIL default pipeline: 100%|████████| 79/79 [00:00<00:00, 11800.21 passes/s]
Running MIL backend_mlprogram pipeline: 100%|████████| 12/12 [00:00<00:00, 11387.25 passes/s]
Traceback (most recent call last):
  File "/[removed]/submodel.py", line 29, in <module>
    submodel = extract_submodel(full_model, outputs=["var_22", "var_15_cast_fp16"])
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/debugging_utils.py", line 173, in extract_submodel
    submodel = ct.convert(prog, convert_to=backend, compute_units=model.compute_unit)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py", line 635, in convert
    mlmodel = mil_convert(
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 188, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 212, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 307, in mil_convert_to_proto
    out = backend_converter(prog, **kwargs)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 130, in __call__
    return backend_load(*args, **kwargs)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/backend/mil/load.py", line 1072, in load
    return coreml_proto_exporter.export(specification_version)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/backend/mil/load.py", line 1008, in export
    func_to_output[name] = self.get_func_output(func)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/backend/mil/load.py", line 843, in get_func_output
    assert len(output_types) == len(
AssertionError: number of mil program outputs do not match the number of outputs provided by the user
```

The issue seems to be that the original output has an entry in `output_types` but the new output does not.
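To make the failing invariant concrete, here is a minimal pure-Python sketch of the length mismatch; the function name and structure are illustrative, not the actual `load.py` internals:

```python
# Illustrative sketch of the check in get_func_output that raises in the
# traceback above; simplified, not actual coremltools code.
def check_outputs(output_types, program_outputs):
    """Raise when the user-provided type list and the program outputs disagree."""
    if output_types is not None and len(output_types) != len(program_outputs):
        raise AssertionError(
            "number of mil program outputs do not match the number of "
            "outputs provided by the user"
        )

# The original output (var_22) has a type entry, but the intermediate tensor
# promoted to an output (var_15_cast_fp16) does not, so the lengths disagree.
output_types = ["<type of var_22>"]
program_outputs = ["var_22", "var_15_cast_fp16"]

try:
    check_outputs(output_types, program_outputs)
except AssertionError as exc:
    print(f"AssertionError: {exc}")
```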

I'm not sure if there is a better way to fix this; as written it won't work for Image outputs. It seems like passing None to set_output_types would also work. Happy to make changes if needed.
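One possible shape for the None-based approach, sketched in plain Python under the assumption that the type list can be padded with None entries for outputs promoted from intermediates (`pad_output_types` is a hypothetical helper, not real coremltools API):

```python
# Hypothetical helper, not actual coremltools API: pad the user-provided
# output type list with None so its length matches the program outputs.
# A None entry would let the backend infer the type of an output that was
# promoted from an intermediate tensor.
def pad_output_types(output_types, program_outputs):
    if output_types is None:
        return None
    padded = list(output_types)
    padded += [None] * (len(program_outputs) - len(padded))
    return padded

print(pad_output_types(["<type of var_22>"], ["var_22", "var_15_cast_fp16"]))
# prints ['<type of var_22>', None]
```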

@YifanShenSZ (Collaborator) commented:

@jakesabathia2 who added extract_submodel
