multiple inputs pytorch model convert to coreml #1715
Comments
I don't have enough information to help; I have not seen this error before. Can you get results from your traced PyTorch model? Can you give us steps to reproduce this problem?
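(For reference, tracing a multi-input model and checking the traced copy against the eager one might look like the sketch below; the model and tensor shapes are placeholders, not taken from this issue.)

```python
import torch

# Hypothetical stand-in for any nn.Module whose forward() takes several tensors.
class TwoInputNet(torch.nn.Module):
    def forward(self, a, b):
        return a + b

net = TwoInputNet().eval()
example_inputs = (torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))

# torch.jit.trace accepts a tuple of example inputs for multi-input models.
traced = torch.jit.trace(net, example_inputs)

# The traced model should produce the same result as the eager model.
assert torch.allclose(net(*example_inputs), traced(*example_inputs))
```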
The model's `forward()` is defined as `def forward(self, x_in):`. Loading the model and running it works fine:

```python
print('args', args[0])  # input
net.to(device)
gaze = net(args)
```

I then save the net, load it back as a TorchScript model, and try to convert it to Core ML with:

```python
input_types = [ct.ImageType(name="face", shape=(224, 224, 3), bias=image_bias, scale=image_scale), ...]
```

Saving the model gives the following:

```
File ~/enter/envs/py38tf/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py:444, in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug)

ValueError: input_shape (length 0), kernel_shape (length 2), strides (length 2), dilations (length 2), and custom_pad (length 4) divided by two must all be the same length
```
@GazeLei - I don't understand your last comment. Please fix the formatting. Also, please include everything we will need to actually run your code (e.g., import statements and variable definitions). We should be able to just copy and paste the code to reproduce the problem.
Hi @TobyRoseman, the model I am trying to convert for iOS is iTracker, from https://github.com/yihuacheng/Itracker/blob/main/Itracker/model.py. Below is the full code I use for the conversion:

```python
import model
import torch
import torch.nn as nn
import torch.onnx
import onnx
#from onnx_tf.backend import prepare
import argparse
import os
import sys
import yaml
import copy
import math
import reader_gc
import numpy as np
import cv2
import torch.optim as optim
import onnxruntime
import time
import tensorflow as tf
from onnx_tf.backend import prepare
from tensorflow import keras
import coremltools as ct
model_path = "iTracker_weights.pt"  # placeholder: path to the trained iTracker weights
net = model.ITrackerModel()
device = torch.device("cpu")
net.to(device)
# map_location is an argument of torch.load, not of load_state_dict
map_location = lambda storage, loc: storage
state_dict = torch.load(model_path, map_location=map_location)
net.load_state_dict(state_dict)
net.eval()
batch_size = 3
feature = {"face": torch.randn(batch_size, 3, 224, 224, requires_grad=True),
           "left": torch.randn(batch_size, 3, 224, 224, requires_grad=True),
           "right": torch.randn(batch_size, 3, 224, 224, requires_grad=True),
           "grid": torch.randn(batch_size, 1, 25, 25, requires_grad=True)}
face = torch.randn(batch_size, 3, 224, 224, requires_grad=True).to(device)
left = torch.randn(batch_size, 3, 224, 224, requires_grad=True).to(device)
right = torch.randn(batch_size, 3, 224, 224, requires_grad=True).to(device)
grid = torch.randn(batch_size, 1, 25, 25, requires_grad=True).to(device)
image = {"left": left, "right": right, "face": face, "grid": grid}
args = (face, left, right, grid)
# print('args', args[0])
gaze = net(args)
print('gaze', gaze)
# save as TorchScript
torch.jit.save(torch.jit.script(net), "iTracker.pt")
# set the input and output types
image_scale = 1.0 / 255.0     # placeholder preprocessing values
image_bias = [0.0, 0.0, 0.0]  # placeholder per-channel bias
input_types = [ct.ImageType(name="face", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
               ct.ImageType(name="left", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
               ct.ImageType(name="right", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
               ct.ImageType(name="grid", shape=(25, 25, 1), bias=image_bias, scale=image_scale)]

# convert the model
torch_model = torch.jit.load("iTracker.pt")
mlmodel = ct.convert(torch_model, inputs=input_types)

# save the model
mlmodel.save("iTracker.mlmodel")
```
Hi @GazeLei - This code is not minimal or self-contained. Can you create a small amount of code that reproduces the problem? Also, it looks like you are not tracing your PyTorch model. We only have experimental support for converting non-traced PyTorch models. Can you try tracing your PyTorch model prior to conversion?
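(A minimal sketch of tracing before conversion, assuming the `net` from the code above; the example input shapes and `TensorType` usage are illustrative assumptions, not a fix confirmed in this thread.)

```python
import coremltools as ct
import torch

# If forward() takes the four tensors as separate arguments, pass them as a
# tuple of example inputs. If it takes a single tuple x_in (as in the snippet
# above), wrap the tuple once more: torch.jit.trace(net, (example_inputs,)).
example_inputs = (torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224),
                  torch.randn(1, 3, 224, 224), torch.randn(1, 1, 25, 25))
traced_model = torch.jit.trace(net, example_inputs)

# Convert the traced model, with one input description per traced input.
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="face", shape=(1, 3, 224, 224)),
            ct.TensorType(name="left", shape=(1, 3, 224, 224)),
            ct.TensorType(name="right", shape=(1, 3, 224, 224)),
            ct.TensorType(name="grid", shape=(1, 1, 25, 25))],
)
mlmodel.save("iTracker_traced.mlmodel")
```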
❓Question
The model has four image inputs, with shapes like [3, 224, 224], [3, 224, 224], [3, 224, 224], and [3, 16, 16]:
```python
input_types = [ct.ImageType(name="F", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
               ct.ImageType(name="L", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
               ct.ImageType(name="R", shape=(224, 224, 3), bias=image_bias, scale=image_scale),
               ct.ImageType(name="K", shape=(16, 16, 1), bias=image_bias, scale=image_scale)]
```
The model is saved with `torch.jit.save` under the name X and loaded back with `torch.jit.load`. When I run `model = ct.convert(torch_model, inputs=input_types)`, I get this error:

```
ValueError: input_shape (length 0), kernel_shape (length 2), strides (length 2), dilations (length 2), and custom_pad (length 4) divided by two must all be the same length
```
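(Not a confirmed fix, but worth noting: the coremltools documentation shows rank-4, channel-first `ImageType` shapes when converting PyTorch models, whereas the shapes above are rank-3 and channel-last. A sketch of the documented style, reusing the names from this question with placeholder preprocessing values:)

```python
import coremltools as ct

image_scale = 1.0 / 255.0     # placeholder preprocessing values
image_bias = [0.0, 0.0, 0.0]  # per-channel bias for the RGB inputs

# Rank-4, channel-first shapes (batch, C, H, W) matching the Torch tensors.
input_types = [ct.ImageType(name="F", shape=(1, 3, 224, 224), bias=image_bias, scale=image_scale),
               ct.ImageType(name="L", shape=(1, 3, 224, 224), bias=image_bias, scale=image_scale),
               ct.ImageType(name="R", shape=(1, 3, 224, 224), bias=image_bias, scale=image_scale),
               # single-channel input: grayscale ImageType takes a scalar bias
               ct.ImageType(name="K", shape=(1, 1, 16, 16), bias=0.0, scale=image_scale)]
```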