Support face-parsing model #12

Closed
tucan9389 opened this issue Mar 11, 2021 · 3 comments
Comments

tucan9389 (Owner) commented Mar 11, 2021

Source Model Link

https://github.com/zllrunning/face-parsing.PyTorch

Core ML Model Download Link

https://github.com/tucan9389/SemanticSegmentation-CoreML/releases/download/support-face-parsing/FaceParsing.mlmodel

Model Spec

  • Input: 512x512 image
  • Output: 512x512 (Int32)
    • Category index of each pixel (see the usage sketch below)
    • 19 defined categories: ['background', 'skin', 'l_brow', 'r_brow', 'l_eye', 'r_eye', 'eye_g', 'l_ear', 'r_ear', 'ear_r', 'nose', 'mouth', 'u_lip', 'l_lip', 'neck', 'neck_l', 'cloth', 'hair', 'hat']
  • Size: 52.7 MB
  • Inference time: 30-50 ms on an iPhone 11 Pro
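
For reference, here is a minimal usage sketch with coremltools. This is just my own illustration, not part of the conversion script below: the image path is a placeholder, the output feature name is looked up from the spec (the converter auto-generates it), "input" is the input name set in the conversion script, and predict() only runs on macOS.

import coremltools as ct
import numpy as np
from PIL import Image

mlmodel = ct.models.MLModel("FaceParsing.mlmodel")
output_name = mlmodel.get_spec().description.output[0].name  # auto-generated by the converter

img = Image.open("face.jpg").convert("RGB").resize((512, 512))  # "face.jpg" is a placeholder path
mask = mlmodel.predict({"input": img})[output_name]             # 512x512 array of category indices
print(np.unique(mask))                                          # which of the 19 categories appear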

Conversion Script

import torch

import os.path as osp
import json
from PIL import Image
import torchvision.transforms as transforms
from model import BiSeNet

import coremltools as ct

dspth = 'res/test-img'        # test-image directory from the original repo (not used below)
cp = '79999_iter.pth'         # pretrained BiSeNet checkpoint file
device = torch.device('cpu')

output_mlmodel_path = "FaceParsing.mlmodel"

labels = ['background', 'skin', 'l_brow', 'r_brow', 'l_eye', 'r_eye', 'eye_g', 'l_ear', 'r_ear', 'ear_r',
            'nose', 'mouth', 'u_lip', 'l_lip', 'neck', 'neck_l', 'cloth', 'hair', 'hat']
n_classes = len(labels)
print("n_classes:", n_classes)

# Wrap BiSeNet so the traced graph itself applies argmax and returns a 512x512 class-index map.
class MyBiSeNet(torch.nn.Module):
    def __init__(self, n_classes, pretrained_model_path):
        super(MyBiSeNet, self).__init__()
        self.model = BiSeNet(n_classes=n_classes)
        self.model.load_state_dict(torch.load(pretrained_model_path, map_location=device))
        self.model.eval()

    def forward(self, x):
        x = self.model(x)            # BiSeNet returns a tuple of outputs
        x = x[0]                     # keep the main segmentation head
        x = torch.argmax(x, dim=1)   # per-pixel class index
        x = torch.squeeze(x)         # (1, 512, 512) -> (512, 512)
        return x

pretrained_model_path = osp.join('res/cp', cp)
model = MyBiSeNet(n_classes=n_classes, pretrained_model_path=pretrained_model_path)
model.eval()

example_input = torch.rand(1, 3, 512, 512)  # tracing with 256x256 fails with a 'size mismatch' error

# Normalization used by the original PyTorch test script. It is not applied here;
# the equivalent scale/bias is baked into the Core ML ImageType below.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225],
    ),
])

traced_model = torch.jit.trace(model, example_input)


# Convert to Core ML using the Unified Conversion API
print(example_input.shape)

# Core ML's ImageType takes a single scale for all channels, so the per-channel
# stds (0.229, 0.224, 0.225) are approximated by 0.226, and bias = -mean / std.
scale = 1.0 / (0.226 * 255.0)
red_bias   = -0.485 / 0.226
green_bias = -0.456 / 0.226
blue_bias  = -0.406 / 0.226

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.ImageType(name="input",   # the coremltools 'quickstart' uses the name "input_1"
                         shape=example_input.shape,
                         scale=scale,
                         color_layout="BGR",
                         bias=[blue_bias, green_bias, red_bias])],
)



# Metadata so Xcode's model preview renders the output as a segmentation mask
labels_json = {"labels": labels}

mlmodel.user_defined_metadata["com.apple.coreml.model.preview.type"] = "imageSegmenter"
mlmodel.user_defined_metadata["com.apple.coreml.model.preview.params"] = json.dumps(labels_json)

mlmodel.save(output_mlmodel_path)

# Patch the saved spec so the segmentation output is typed as INT32
# instead of the default floating-point multi-array.
import coremltools.proto.FeatureTypes_pb2 as ft

spec = ct.utils.load_spec(output_mlmodel_path)

for feature in spec.description.output:
    if feature.type.HasField("multiArrayType"):
        feature.type.multiArrayType.dataType = ft.ArrayFeatureType.INT32

ct.utils.save_spec(spec, output_mlmodel_path)
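
Optionally, you can reload the saved spec to confirm that the INT32 output patch and the preview metadata took effect. This is just a quick sanity sketch on top of the script above, not part of the conversion itself:

# Sanity check (optional): reload the saved spec and inspect it.
checked_spec = ct.utils.load_spec(output_mlmodel_path)
print(checked_spec.description.output)                 # the output should now be an INT32 multi-array
print(checked_spec.description.metadata.userDefined)   # should contain the imageSegmenter preview keys
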
@lekhanhtoan37

Thank you very much. It saved me a lot of time.


vjdaemp commented Mar 25, 2021

@tucan9389 I am interested in using coremltools to convert PyTorch models to Core ML models. Are the coremltools conversion examples the best guide to follow for learning purposes? https://coremltools.readme.io/docs/pytorch-conversion-examples

I am able to run the face-parsing PyTorch model and successfully run its test script (after modifying it to remove hard-coded values). However, when I run your conversion script, I get the failure below:

(face_parsing) vtron@vtron-Blade-15:~/Desktop/Pytorch/face-parsing.PyTorch$ python coreml_convert.py 
n_classes: 19
torch.Size([1, 3, 512, 512])
Traceback (most recent call last):
  File "/home/vtron/Pytorch/face-parsing.PyTorch/coreml_convert.py", line 60, in <module>
    mlmodel = ct.convert(
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/_converters_entry.py", line 176, in convert
    mlmodel = mil_convert(
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 128, in mil_convert
    proto = mil_convert_to_proto(model, convert_from, convert_to,
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 171, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 85, in __call__
    return load(*args, **kwargs)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 85, in load
    raise e
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 75, in load
    prog = converter.convert()
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 220, in convert
    const = mb.const(val=val, mode=mode, name=name)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 62, in add_op
    return cls._add_op(op_cls, **kwargs)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/mil/builder.py", line 193, in _add_op
    new_op.type_value_inference()
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/mil/operation.py", line 189, in type_value_inference
    output_types = self.type_inference()
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/defs/control_flow.py", line 135, in type_inference
    builtin_type, _ = self._get_type_val(self.val.val)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/defs/control_flow.py", line 175, in _get_type_val
    _, builtin_type = numpy_val_to_builtin_val(value)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/mil/types/type_mapping.py", line 263, in numpy_val_to_builtin_val
    builtintype = numpy_type_to_builtin_type(npval.dtype)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/mil/types/type_mapping.py", line 233, in numpy_type_to_builtin_type
    raise TypeError("Unsupported numpy type: %s" % (nptype))
TypeError: Unsupported numpy type: float32

Any ideas or recommendations to resolve the above issue would be greatly appreciated!


vjdaemp commented Mar 26, 2021

@tucan9389 NumPy 1.20.0 is not compatible with coremltools, according to apple/coremltools#1077.

I was able to resolve the exception by updating the environment to NumPy 1.19.5.

After changing this, I encountered a new error:

n_classes: 19
torch.Size([1, 3, 512, 512])
Converting Frontend ==> MIL Ops:   5%|▎    | 15/295 [00:00<00:00, 3844.22 ops/s]
Traceback (most recent call last):
  File "/home/vtron/Development/Pytorch/face-parsing.PyTorch/coreml_convert.py", line 60, in <module>
    mlmodel = ct.convert(
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/_converters_entry.py", line 176, in convert
    mlmodel = mil_convert(
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 128, in mil_convert
    proto = mil_convert_to_proto(model, convert_from, convert_to,
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 171, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 85, in __call__
    return load(*args, **kwargs)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 85, in load
    raise e
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 75, in load
    prog = converter.convert()
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 224, in convert
    convert_nodes(self.context, self.graph)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 56, in convert_nodes
    _add_op(context, node)
  File "/home/vtron/anaconda3/envs/face_parsing/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 417, in _convolution
    raise ValueError(
ValueError: unexpected number of inputs for node input.1 (_convolution): 13

According to apple/coremltools#1012, coremltools 4.1 must be used with torch 1.8; my environment was using 4.0.

I resolved this exception by installing coremltools 4.1 (instead of 4.0). The Core ML model has now been created :)
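
For anyone hitting the same errors, here is a quick version check I ran before re-running the conversion script. This is just my own sketch; the combination that worked for me was numpy 1.19.5, torch 1.8, and coremltools 4.1:

import numpy, torch, coremltools
print("numpy:", numpy.__version__)              # 1.20.0 breaks coremltools (apple/coremltools#1077)
print("torch:", torch.__version__)              # torch 1.8 needs coremltools 4.1 (apple/coremltools#1012)
print("coremltools:", coremltools.__version__)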

I will try using your conversion script as a reference to create new conversion scripts for some other PyTorch models I've been working with... Any advice or tips would be appreciated.
