
multiple inputs pytorch model convert to coreml #1715

Open · GazeLei opened this issue Dec 15, 2022 · 5 comments
Labels
PyTorch (not traced) · question (Response providing clarification needed. Will not be assigned to a release.)

Comments

@GazeLei commented Dec 15, 2022

❓Question

The model has four image inputs, with shapes [3, 224, 224], [3, 224, 224], [3, 224, 224], and [3, 16, 16]:

input_types = [ct.ImageType(name="F", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
               ct.ImageType(name="L", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
               ct.ImageType(name="R", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
               ct.ImageType(name="K", shape=(16, 16, 1), bias=image_bias, scale=image_scale)]

The model is saved with torch.jit.save and loaded again with torch.jit.load.

When I call model = ct.convert(torch_model, inputs=input_types), I get this error:
ValueError: input_shape (length 0), kernel_shape (length 2), strides (length 2), dilations (length 2), and custom_pad (length 4) divided by two must all be the same length
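(For reference, and not verified against this model: when converting from PyTorch, the shape passed to ct.ImageType usually follows the tensor layout the model expects, i.e. (batch, channels, height, width), rather than (height, width, channels). A minimal sketch, assuming a traced model named traced_model; bias/scale are omitted here and would be added as in the snippet above:)

import coremltools as ct

# Shapes follow the PyTorch layout (batch, channels, height, width).
input_types = [
    ct.ImageType(name="F", shape=(1, 3, 224, 224)),
    ct.ImageType(name="L", shape=(1, 3, 224, 224)),
    ct.ImageType(name="R", shape=(1, 3, 224, 224)),
    ct.ImageType(name="K", shape=(1, 1, 16, 16)),  # single-channel input, per the (16, 16, 1) spec above
]
mlmodel = ct.convert(traced_model, inputs=input_types)  # traced_model: a torch.jit.trace'd model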

@GazeLei added the question label on Dec 15, 2022
@TobyRoseman (Collaborator) commented

I don't have enough information to help. I have not seen this error before.

Can you get results from your traced PyTorch model? Can you give us the steps to reproduce this problem?

@GazeLei (Author) commented Dec 15, 2022

The model's forward() definition:

def forward(self, x_in):
    # Eye nets
    xEyeL = self.eyeModel(x_in[1])
    xEyeR = self.eyeModel(x_in[2])
    # Cat and FC
    xEyes = torch.cat((xEyeL, xEyeR), 1)
    xEyes = self.eyesFC(xEyes)

    # Face net
    xFace = self.faceModel(x_in[0])
    xGrid = self.gridModel(x_in[3])

    # Cat all
    x = torch.cat((xEyes, xFace, xGrid), 1)
    x = self.fc(x)

    return x
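(Side note, not verified for this model: since forward() takes a single tuple argument x_in, a common pattern before tracing/conversion is a thin wrapper module that exposes each image as its own positional input, so every input can later be mapped to its own ct.ImageType. A sketch, with ITrackerWrapper being a hypothetical name:)

import torch
import torch.nn as nn

class ITrackerWrapper(nn.Module):
    """Hypothetical wrapper: takes the four images as separate arguments
    and repacks them into the tuple layout the original forward() expects."""
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, face, left, right, grid):
        return self.net((face, left, right, grid))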

Inputs:

batch_size = 3
face = torch.randn(batch_size, 3, 224, 224, requires_grad=True).to(device)
left = torch.randn(batch_size, 3, 224, 224, requires_grad=True).to(device)
right = torch.randn(batch_size, 3, 224, 224, requires_grad=True).to(device)
grid = torch.randn(batch_size, 1, 25, 25, requires_grad=True).to(device)

The inputs are packed as:
image = {"left":left, "right":right, "face":face, "grid":grid}
args = (face, left, right, grid)

print('args',args[0])

Load the model and run it:
net = model.ITrackerModel()
state_dict = torch.load(model_path)
device = torch.device("cpu")

net.to(device)
map_location = lambda storage, loc: storage
net.load_state_dict(state_dict,map_location)
net.eval()

gaze = net(args)
print('gaze',gaze)

The above works fine, and the net is saved with:
torch.jit.save(torch.jit.script(net), "iTracker.pt")

Load the TorchScript model and convert it to Core ML:
torch_model = torch.jit.load("iTracker.pt")
torch_model.eval()

input_types = [ct.ImageType(name="face", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
ct.ImageType(name="left", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
ct.ImageType(name="right", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
ct.ImageType(name="grid", shape=(25, 25, 1), bias=image_bias, scale=image_scale)]
model = ct.convert(torch_model,
inputs=input_types)

Save the model:
model.save("iTracker.mlmodel")

I got the following output:
Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Converting PyTorch Frontend ==> MIL Ops: 12%|█▏ | 26/219 [00:00<00:00, 6404.27 ops/s]

File ~/enter/envs/py38tf/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py:444, in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug)
441 if specification_version is None:
442 specification_version = _set_default_specification_version(exact_target)
--> 444 mlmodel = mil_convert(
445 model,
446 convert_from=exact_source,
447 convert_to=exact_target,
448 inputs=inputs,
449 outputs=outputs_as_tensor_or_image_types, # None or list[ct.ImageType/ct.TensorType]
450 classifier_config=classifier_config,
451 transforms=tuple(transforms),
452 skip_model_load=skip_model_load,
453 compute_units=compute_units,
...
250 custom_pad=custom_pad,
251 )
252 effective_ks = effective_kernel(kernel_shape, dilations)

ValueError: input_shape (length 0), kernel_shape (length 2), strides (length 2), dilations (length 2), and custom_pad (length 4) divided by two must all be the same length

@TobyRoseman (Collaborator) commented

@GazeLei - I don't understand your last comment. Please fix the formatting. Also please include everything we will need to actually run your code (ex: import statements, variable definitions). We should be able to just copy and paste the code in order to reproduce the problem.

@GazeLei (Author) commented Dec 19, 2022

Hi TobyRoseman, the model I'm trying to convert for iOS is iTracker, from https://github.com/yihuacheng/Itracker/blob/main/Itracker/model.py. Below is all the code I use to convert the model.

import model
import torch 
import torch.nn as nn
import torch.onnx
import onnx
#from onnx_tf.backend import prepare
import argparse
import os
import sys

import yaml
import copy
import math
import reader_gc
import numpy as np
import cv2 
import torch.optim as optim
import onnxruntime
import time
import tensorflow as tf
from onnx_tf.backend import prepare
from tensorflow import keras
import coremltools as ct

net = model.ITrackerModel()
state_dict = torch.load(model_path)  # model_path: path to the trained iTracker weights
device = torch.device("cpu")

net.to(device)
map_location = lambda storage, loc: storage  # note: map_location is normally an argument to torch.load, not load_state_dict
net.load_state_dict(state_dict, map_location)
net.eval()

batch_size = 3
feature = {"face":torch.randn(batch_size, 3, 224, 224, requires_grad= True),
            "left":torch.randn(batch_size, 3, 224, 224, requires_grad= True),
            "right":torch.randn(batch_size, 3, 224, 224, requires_grad= True),
            "grid":torch.randn(batch_size, 1, 25, 25, requires_grad= True)}

face = torch.randn(batch_size, 3, 224, 224, requires_grad= True).to(device)
left = torch.randn(batch_size, 3, 224, 224, requires_grad= True).to(device)
right = torch.randn(batch_size, 3, 224, 224, requires_grad= True).to(device)
grid = torch.randn(batch_size, 1, 25, 25, requires_grad= True).to(device)

image = {"left":left, "right":right, "face":face, "grid":grid}

args = (face, left, right, grid)
# print('args', args[0])


gaze = net(args)
print('gaze',gaze)

gazes = net(args)
print('gazes', gazes)

# save as TorchScript
torch.jit.save(torch.jit.script(net), "iTracker.pt")

# set the input and output types (image_bias and image_scale are defined elsewhere in the notebook)
input_types = [ct.ImageType(name="face", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
                ct.ImageType(name="left", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
                ct.ImageType(name="right", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
                ct.ImageType(name="grid", shape=(25, 25, 1), bias=image_bias, scale=image_scale)]


# load the TorchScript model and convert it to Core ML
torch_model = torch.jit.load("iTracker.pt")
torch_model.eval()
model = ct.convert(torch_model,
                   inputs=input_types)

# save the model
model.save("iTracker.mlmodel")
Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Converting PyTorch Frontend ==> MIL Ops:  12%|█▏        | 26/219 [00:00<00:00, 6404.27 ops/s]
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/convert/convert.ipynb Cell 11 in <module>
      6 output_types = [ct.ImageType(name="gaze", bias=image_bias, scale=image_scale)]
      8 # convert the model
      9 # traced model
     10 #traced_itracker = torch.jit.trace(net, args)
---> 12 model = ct.convert(torch_model,
     13                      inputs=input_types)
     15 # save the model
     16 model.save("iTracker.mlmodel")

File ~/enter/envs/py38tf/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py:444, in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug)
    441 if specification_version is None:
    442     specification_version = _set_default_specification_version(exact_target)
--> 444 mlmodel = mil_convert(
    445     model,
    446     convert_from=exact_source,
    447     convert_to=exact_target,
    448     inputs=inputs,
    449     outputs=outputs_as_tensor_or_image_types, # None or list[ct.ImageType/ct.TensorType]
    450     classifier_config=classifier_config,
    451     transforms=tuple(transforms),
    452     skip_model_load=skip_model_load,
    453     compute_units=compute_units,
...
    250     custom_pad=custom_pad,
    251 )
    252 effective_ks = effective_kernel(kernel_shape, dilations)

ValueError: input_shape (length 0), kernel_shape (length 2), strides (length 2), dilations (length 2), and custom_pad (length 4) divided by two must all be the same length

@TobyRoseman (Collaborator) commented

Hi @GazeLei - This code is not minimal or self-contained. Can you create a small amount of code that reproduces the problem?

Also it looks like you are not tracing your PyTorch model. We only have experimental support for converting non-traced PyTorch models. Can you try tracing your PyTorch model prior to conversion?
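(For reference, a rough sketch of what tracing before conversion could look like; it is not verified against this model. It assumes net is the loaded ITrackerModel, reuses the hypothetical ITrackerWrapper sketched earlier in the thread, and omits the bias/scale preprocessing arguments:)

import torch
import coremltools as ct

wrapper = ITrackerWrapper(net).eval()  # hypothetical wrapper exposing four separate inputs

# example tensors with the shapes reported above (batch of 1 for conversion)
face = torch.rand(1, 3, 224, 224)
left = torch.rand(1, 3, 224, 224)
right = torch.rand(1, 3, 224, 224)
grid = torch.rand(1, 1, 25, 25)

# trace rather than script, as suggested above
traced = torch.jit.trace(wrapper, (face, left, right, grid))

mlmodel = ct.convert(
    traced,
    inputs=[
        ct.ImageType(name="face", shape=tuple(face.shape)),
        ct.ImageType(name="left", shape=tuple(left.shape)),
        ct.ImageType(name="right", shape=tuple(right.shape)),
        ct.ImageType(name="grid", shape=tuple(grid.shape)),
    ],
)
mlmodel.save("iTracker.mlmodel")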
