[Bug] Cannot use Core ML conversion pipeline on versions >= 0.11.0 #1652

Closed · Typiqally opened this issue Jan 13, 2023 · 9 comments
Comments

@Typiqally (Contributor) commented Jan 13, 2023

Checklist

  • I have searched related issues but cannot get the expected help.
  • I have read the FAQ documentation but cannot get the expected help.
  • The bug has not been fixed in the latest version.

Describe the bug

I am attempting to update MMDeploy from version 0.10.0 to the latest version, 0.12.0. However, this breaks the Core ML conversion pipeline with an unknown error (see the stack trace section). I'm using exactly the same dependencies that I used with version 0.10.0, which worked perfectly.

I've also tested version 0.11.0 and can conclude that every version after 0.10.0 breaks the Core ML conversion pipeline. I'm not sure exactly which commit caused this issue, but I believe the breaking change lies somewhere between versions 0.10.0 and 0.11.0.

It is also interesting to note that the check_env.py script does not report Core ML as available, even though the Core ML Tools package is installed and functional.

Reproduction

python libs/mmdeploy/tools/deploy.py libs/mmdeploy/configs/mmdet/detection/detection_coreml_static-800x1344.py checkpoints/retinanet_r18_fpn_1x_coco.py checkpoints/retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth include/demo.jpg

Environment

(mmdeploy-coreml) typically@macos deploy % python libs/mmdeploy/tools/check_env.py
/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
2023-01-13 11:24:57,391 - mmdeploy - INFO - 

2023-01-13 11:24:57,391 - mmdeploy - INFO - **********Environmental information**********
2023-01-13 11:24:57,559 - mmdeploy - INFO - sys.platform: darwin
2023-01-13 11:24:57,560 - mmdeploy - INFO - Python: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:05:16) [Clang 12.0.1 ]
2023-01-13 11:24:57,560 - mmdeploy - INFO - CUDA available: False
2023-01-13 11:24:57,560 - mmdeploy - INFO - GCC: Apple clang version 14.0.0 (clang-1400.0.29.202)
2023-01-13 11:24:57,560 - mmdeploy - INFO - PyTorch: 1.9.0.post2
2023-01-13 11:24:57,560 - mmdeploy - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 4.2
  - C++ Version: 201402
  - clang 11.1.0
  - OpenMP 201811
  - NNPACK is enabled
  - CPU capability usage: NO AVX
  - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/Users/runner/miniforge3/conda-bld/pytorch-recipe_1629200524980/_build_env/bin/arm64-apple-darwin20.0.0-clang++, CXX_FLAGS=-ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -stdlib=libc++  -std=c++14 -fmessage-length=0 -isystem /Users/runner/miniforge3/conda-bld/pytorch-recipe_1629200524980/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac/include -fdebug-prefix-map=/Users/runner/miniforge3/conda-bld/pytorch-recipe_1629200524980/work=/usr/local/src/conda/pytorch-1.9.0 -fdebug-prefix-map=/Users/runner/miniforge3/conda-bld/pytorch-recipe_1629200524980/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac=/usr/local/src/conda-prefix -Wno-deprecated-declarations -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp=libomp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const, LAPACK_INFO=open, TORCH_VERSION=1.9.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, 

2023-01-13 11:24:57,560 - mmdeploy - INFO - TorchVision: 0.10.0a0
2023-01-13 11:24:57,560 - mmdeploy - INFO - OpenCV: 4.6.0
2023-01-13 11:24:57,560 - mmdeploy - INFO - MMCV: 1.7.0
2023-01-13 11:24:57,560 - mmdeploy - INFO - MMCV Compiler: clang 14.0.0
2023-01-13 11:24:57,560 - mmdeploy - INFO - MMCV CUDA Compiler: not available
2023-01-13 11:24:57,560 - mmdeploy - INFO - MMDeploy: 0.12.0+e1bff49
2023-01-13 11:24:57,560 - mmdeploy - INFO - 

2023-01-13 11:24:57,560 - mmdeploy - INFO - **********Backend information**********
2023-01-13 11:24:57,569 - mmdeploy - INFO - tensorrt:	None
2023-01-13 11:24:57,588 - mmdeploy - INFO - ONNXRuntime:	1.13.1
2023-01-13 11:24:57,588 - mmdeploy - INFO - ONNXRuntime-gpu:	None
2023-01-13 11:24:57,588 - mmdeploy - INFO - ONNXRuntime custom ops:	NotAvailable
2023-01-13 11:24:57,589 - mmdeploy - INFO - pplnn:	None
2023-01-13 11:24:57,598 - mmdeploy - INFO - ncnn:	None
2023-01-13 11:24:57,600 - mmdeploy - INFO - snpe:	None
2023-01-13 11:24:57,601 - mmdeploy - INFO - openvino:	None
2023-01-13 11:24:57,602 - mmdeploy - INFO - torchscript:	1.9.0.post2
2023-01-13 11:24:57,602 - mmdeploy - INFO - torchscript custom ops:	Available
2023-01-13 11:24:57,615 - mmdeploy - INFO - rknn-toolkit:	None
2023-01-13 11:24:57,615 - mmdeploy - INFO - rknn2-toolkit:	None
2023-01-13 11:24:57,616 - mmdeploy - INFO - ascend:	None
2023-01-13 11:24:57,616 - mmdeploy - INFO - coreml:	None
2023-01-13 11:24:57,617 - mmdeploy - INFO - tvm:	None
2023-01-13 11:24:57,617 - mmdeploy - INFO - 

2023-01-13 11:24:57,617 - mmdeploy - INFO - **********Codebase information**********
2023-01-13 11:24:57,618 - mmdeploy - INFO - mmdet:	2.25.3
2023-01-13 11:24:57,618 - mmdeploy - INFO - mmseg:	None
2023-01-13 11:24:57,618 - mmdeploy - INFO - mmcls:	None
2023-01-13 11:24:57,618 - mmdeploy - INFO - mmocr:	None
2023-01-13 11:24:57,618 - mmdeploy - INFO - mmedit:	None
2023-01-13 11:24:57,618 - mmdeploy - INFO - mmdet3d:	None
2023-01-13 11:24:57,618 - mmdeploy - INFO - mmpose:	None
2023-01-13 11:24:57,618 - mmdeploy - INFO - mmrotate:	None
2023-01-13 11:24:57,618 - mmdeploy - INFO - mmaction:	None

Error traceback

2023-01-13 11:22:58,044 - mmdeploy - INFO - Save PyTorch model: /Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/end2end.pt.
2023-01-13 11:22:58,142 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2torchscript.torch2torchscript
2023-01-13 11:22:58,275 - mmdeploy - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in main process
WARNING:root:Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
WARNING:root:Tuple detected at graph output. This will be flattened in the converted model.
Converting PyTorch Frontend ==> MIL Ops:  72%|██████████████████████████████████████████████████████▏                    | 777/1076 [00:00<00:00, 8657.68 ops/s]
Traceback (most recent call last):
  File "libs/mmdeploy/tools/deploy.py", line 308, in <module>
    main()
  File "libs/mmdeploy/tools/deploy.py", line 232, in main
    backend_files = to_backend(
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/utils/utils.py", line 95, in to_backend
    return backend_mgr.to_backend(
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/backend/coreml/backend_manager.py", line 88, in to_backend
    from_torchscript(model_id, torchscript_path, output_file_prefix,
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/backend/coreml/torchscript2coreml.py", line 118, in from_torchscript
    mlmodel = ct.convert(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 451, in convert
    mlmodel = mil_convert(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 193, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 220, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 283, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 115, in __call__
    return load(*args, **kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 53, in load
    return _perform_torch_convert(converter, debug)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 92, in _perform_torch_convert
    prog = converter.convert()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 269, in convert
    convert_nodes(self.context, self.graph)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 92, in convert_nodes
    add_op(context, node)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 3872, in new_zeros
    context.add(mb.fill(shape=shape, value=0., name=node.name))
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 172, in add_op
    return cls._add_op(op_cls_to_add, **kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/mil/builder.py", line 175, in _add_op
    new_op = op_cls(**kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/mil/ops/defs/iOS15/tensor_operation.py", line 205, in __init__
    super().__init__(**kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/mil/operation.py", line 170, in __init__
    self._validate_and_set_inputs(input_kv)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/mil/operation.py", line 458, in _validate_and_set_inputs
    self.input_spec.validate_inputs(self.name, self.op_type, input_kvs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/mmdeploy-coreml/lib/python3.8/site-packages/coremltools/converters/mil/mil/input_type.py", line 125, in validate_inputs
    raise ValueError(msg.format(name, var.name, input_type.type_str,
ValueError: Op "1152" (op_type: fill) Input shape="1151" expects integer tensor but got tensor[0,fp32]
@grimoire (Member) commented:

Thanks for the notification, and sorry for the trouble.
We applied some tricks to torch.topk to fix a GPU export problem; I guess that fix leads to this error.
I have created a rewriter for the Core ML topk op. Please give it a try: https://github.com/grimoire/mmdeploy/tree/fix-coreml
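For reference, a backend-specific rewriter in MMDeploy 0.x generally looks like the minimal sketch below. This is not the actual patch from the fix-coreml branch; the cast-to-int detail is an assumption based on the fill error above (which complains about an fp32 shape tensor).

# Minimal sketch of a Core ML-specific rewriter (MMDeploy 0.x style), not the
# actual fix from the fix-coreml branch. Assumption: the trace-time trick on
# torch.topk leaves k as an fp32 tensor, which coremltools' shape ops reject,
# so the rewriter forces k back to a plain integer before calling through.
from mmdeploy.core import FUNCTION_REWRITER
from mmdeploy.utils import Backend


@FUNCTION_REWRITER.register_rewriter(
    func_name='torch.topk', backend=Backend.COREML.value)
def topk__coreml(ctx, input, k, dim=None, largest=True, sorted=True):
    """Rewrite torch.topk for the Core ML backend, keeping k an integer."""
    if dim is None:
        dim = input.dim() - 1
    k = int(k)  # ensure an integer k instead of a traced fp32 tensor
    return ctx.origin_func(input, k, dim=dim, largest=largest, sorted=sorted)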

@Typiqally (Contributor, Author) commented:

Same environment as before, but using your patch I get the following stack trace:

2023-01-13 13:35:36,529 - mmdeploy - INFO - Save PyTorch model: /Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/end2end.pt.
2023-01-13 13:35:36,620 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2torchscript.torch2torchscript
2023-01-13 13:35:36,759 - mmdeploy - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in main process
Traceback (most recent call last):
  File "libs/mmdeploy/tools/deploy.py", line 308, in <module>
    main()
  File "libs/mmdeploy/tools/deploy.py", line 232, in main
    backend_files = to_backend(
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/utils/utils.py", line 95, in to_backend
    return backend_mgr.to_backend(
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/backend/coreml/backend_manager.py", line 83, in to_backend
    from .torchscript2coreml import from_torchscript, get_model_suffix
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/backend/coreml/__init__.py", line 13, in <module>
    from .torchscript2coreml import get_model_suffix
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/backend/coreml/torchscript2coreml.py", line 52, in <module>
    input_names: list[str],
TypeError: 'type' object is not subscriptable
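
For context, list[str] is only a valid runtime annotation on Python 3.9+ (PEP 585); on Python 3.8 the built-in types are not subscriptable, so the module fails at import time. A minimal sketch of the 3.8-compatible spelling (the function name below is illustrative, not the actual mmdeploy signature):

# Python 3.8-compatible type hints: use typing.List (or add
# `from __future__ import annotations`) instead of subscripting the built-in list.
from typing import List


def describe_inputs(input_names: List[str]) -> str:
    """Illustrative helper only; shows the 3.8-safe annotation style."""
    return ', '.join(input_names)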

@grimoire (Member) commented:

@Typiqally Updated, please try again.

@Typiqally (Contributor, Author) commented:

Thank you @grimoire, it works now. I haven't tested the model completely, but the visualization from the deployment shows that it is working as expected.

@irexyc mentioned this issue Jan 16, 2023
@JohannesBauer97 commented:

Hi @grimoire, I would be happy to see your check_env.py output, because the export to Core ML is still not working for me (using the same config and checkpoint); it fails with a coremltools error.

I'm running everything on Google Colab (link)

!pip install coremltools
!pip install opencv-python
!pip3 install openmim
!mim install mmcv-full

# clone mmdeploy to get the deployment config. `--recursive` is not necessary
!git clone https://github.com/open-mmlab/mmdeploy.git
%cd mmdeploy
!pip install -v -e .
%cd ..

# clone mmdetection repo. We have to use the config file to build PyTorch nn module
!git clone https://github.com/open-mmlab/mmdetection.git
%cd mmdetection
!pip install -v -e .
%cd ..

# download checkpoint
!wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r18_fpn_1x_coco/retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth

# run the command to start model conversion
!python mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmdet/detection/detection_coreml_static-800x1344.py \
    mmdetection/configs/retinanet/retinanet_r18_fpn_1x_coco.py \
    checkpoints/retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth \
    mmdetection/demo/demo.jpg \
    --work-dir mmdeploy_model/retina \
    --device cpu \
    --dump-info

Error

Traceback (most recent call last):
  File "mmdeploy/tools/deploy.py", line 308, in <module>
    main()
  File "mmdeploy/tools/deploy.py", line 129, in main
    export2SDK(
  File "/content/mmdeploy/mmdeploy/backend/sdk/export_info.py", line 456, in export2SDK
    deploy_info = get_deploy(deploy_cfg, model_cfg, work_dir, device)
  File "/content/mmdeploy/mmdeploy/backend/sdk/export_info.py", line 376, in get_deploy
    models = get_models(deploy_cfg, model_cfg, work_dir, device)
  File "/content/mmdeploy/mmdeploy/backend/sdk/export_info.py", line 148, in get_models
    from mmdeploy.backend.coreml import get_model_suffix
  File "/content/mmdeploy/mmdeploy/backend/coreml/__init__.py", line 12, in <module>
    from . import ops
  File "/content/mmdeploy/mmdeploy/backend/coreml/ops.py", line 28, in <module>
    def log2(context, node):
  File "/usr/local/lib/python3.8/dist-packages/coremltools/converters/mil/frontend/torch/torch_op_registry.py", line 58, in register_torch_op
    return func_wrapper(_func)
  File "/usr/local/lib/python3.8/dist-packages/coremltools/converters/mil/frontend/torch/torch_op_registry.py", line 42, in func_wrapper
    raise ValueError("Torch op {} already registered.".format(f_name))
ValueError: Torch op log2 already registered.

check_env

No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
/usr/local/lib/python3.8/dist-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
2023-02-04 16:44:15,345 - mmdeploy - INFO - 

2023-02-04 16:44:15,345 - mmdeploy - INFO - **********Environmental information**********
fatal: not a git repository (or any of the parent directories): .git
2023-02-04 16:44:15,711 - mmdeploy - INFO - sys.platform: linux
2023-02-04 16:44:15,712 - mmdeploy - INFO - Python: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0]
2023-02-04 16:44:15,712 - mmdeploy - INFO - CUDA available: False
2023-02-04 16:44:15,712 - mmdeploy - INFO - GCC: x86_64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
2023-02-04 16:44:15,712 - mmdeploy - INFO - PyTorch: 1.13.1+cu116
2023-02-04 16:44:15,712 - mmdeploy - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.6, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

2023-02-04 16:44:15,712 - mmdeploy - INFO - TorchVision: 0.14.1+cu116
2023-02-04 16:44:15,712 - mmdeploy - INFO - OpenCV: 4.6.0
2023-02-04 16:44:15,712 - mmdeploy - INFO - MMCV: 1.7.1
2023-02-04 16:44:15,712 - mmdeploy - INFO - MMCV Compiler: GCC 9.3
2023-02-04 16:44:15,712 - mmdeploy - INFO - MMCV CUDA Compiler: 11.6
2023-02-04 16:44:15,712 - mmdeploy - INFO - MMDeploy: 0.12.0+
2023-02-04 16:44:15,712 - mmdeploy - INFO - 

2023-02-04 16:44:15,712 - mmdeploy - INFO - **********Backend information**********
2023-02-04 16:44:15,726 - mmdeploy - INFO - tensorrt:	None
2023-02-04 16:44:15,729 - mmdeploy - INFO - ONNXRuntime:	None
2023-02-04 16:44:15,730 - mmdeploy - INFO - pplnn:	None
2023-02-04 16:44:15,734 - mmdeploy - INFO - ncnn:	None
2023-02-04 16:44:15,738 - mmdeploy - INFO - snpe:	None
2023-02-04 16:44:15,740 - mmdeploy - INFO - openvino:	None
2023-02-04 16:44:15,745 - mmdeploy - INFO - torchscript:	1.13.1+cu116
2023-02-04 16:44:15,745 - mmdeploy - INFO - torchscript custom ops:	NotAvailable
2023-02-04 16:44:15,875 - mmdeploy - INFO - rknn-toolkit:	None
2023-02-04 16:44:15,875 - mmdeploy - INFO - rknn2-toolkit:	None
2023-02-04 16:44:15,878 - mmdeploy - INFO - ascend:	None
2023-02-04 16:44:19,423 - mmdeploy - INFO - coreml:	6.2
2023-02-04 16:44:19,426 - mmdeploy - INFO - tvm:	None
2023-02-04 16:44:19,426 - mmdeploy - INFO - 

2023-02-04 16:44:19,426 - mmdeploy - INFO - **********Codebase information**********
2023-02-04 16:44:19,428 - mmdeploy - INFO - mmdet:	2.28.1
2023-02-04 16:44:19,428 - mmdeploy - INFO - mmseg:	None
2023-02-04 16:44:19,428 - mmdeploy - INFO - mmcls:	None
2023-02-04 16:44:19,428 - mmdeploy - INFO - mmocr:	None
2023-02-04 16:44:19,428 - mmdeploy - INFO - mmedit:	None
2023-02-04 16:44:19,428 - mmdeploy - INFO - mmdet3d:	None
2023-02-04 16:44:19,428 - mmdeploy - INFO - mmpose:	None
2023-02-04 16:44:19,428 - mmdeploy - INFO - mmrotate:	None
2023-02-04 16:44:19,428 - mmdeploy - INFO - mmaction:	None

@grimoire (Member) commented Feb 5, 2023

@JohannesBauer97 Try commenting out the function below (the log2 converter registered in mmdeploy/backend/coreml/ops.py):

def log2(context, node):

@JohannesBauer97 commented Feb 5, 2023

// Update:
I tried with:
pytorch==1.13.1
torchvision==0.14.1
mmcv-full==1.7.1
coremltools==6.2

and

pytorch==1.12.1
torchvision==0.13.1
mmcv-full==1.7.0
coremltools==6.1

and got the same error messages that I posted in the original comment below.
@grimoire, could you try converting a model to Core ML once and, if it works, send your check_env output so we can compare environments?

// Original:
@grimoire Then I receive a new, similar error: Torch op coreml_nms already registered.
When I comment out the nms op as well, another issue is raised (see below).
I'll try downgrading coremltools; it might be an issue with the version released yesterday:
https://github.com/apple/coremltools/releases/tag/6.2

Traceback (most recent call last):
  File "mmdeploy/tools/deploy.py", line 308, in <module>
    main()
  File "mmdeploy/tools/deploy.py", line 232, in main
    backend_files = to_backend(
  File "/Users/joba/Documents/Data Science/mmlab/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/Users/joba/Documents/Data Science/mmlab/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/Users/joba/Documents/Data Science/mmlab/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/Users/joba/Documents/Data Science/mmlab/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/Users/joba/Documents/Data Science/mmlab/mmdeploy/mmdeploy/apis/utils/utils.py", line 95, in to_backend
    return backend_mgr.to_backend(
  File "/Users/joba/Documents/Data Science/mmlab/mmdeploy/mmdeploy/backend/coreml/backend_manager.py", line 109, in to_backend
    from_torchscript(
  File "/Users/joba/Documents/Data Science/mmlab/mmdeploy/mmdeploy/backend/coreml/torchscript2coreml.py", line 95, in from_torchscript
    mlmodel = ct.convert(
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 444, in convert
    mlmodel = mil_convert(
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 187, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 211, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 281, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 109, in __call__
    return load(*args, **kwargs)
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 57, in load
    return _perform_torch_convert(converter, debug)
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 104, in _perform_torch_convert
    raise e
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 96, in _perform_torch_convert
    prog = converter.convert()
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 281, in convert
    convert_nodes(self.context, self.graph)
  File "/Users/joba/miniforge3/envs/mmlab/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 84, in convert_nodes
    raise RuntimeError(
RuntimeError: PyTorch convert function for op 'mmdeploy::coreml_nms' not implemented.

@grimoire (Member) commented Feb 6, 2023

2023-02-06 10:41:16,426 - mmdeploy - INFO - **********Backend information**********
2023-02-06 10:41:16,438 - mmdeploy - INFO - tensorrt:	None
2023-02-06 10:41:16,440 - mmdeploy - INFO - ONNXRuntime:	None
2023-02-06 10:41:16,441 - mmdeploy - INFO - pplnn:	None
2023-02-06 10:41:16,445 - mmdeploy - INFO - ncnn:	None
2023-02-06 10:41:16,448 - mmdeploy - INFO - snpe:	None
2023-02-06 10:41:16,449 - mmdeploy - INFO - openvino:	None
2023-02-06 10:41:16,452 - mmdeploy - INFO - torchscript:	1.10.2
2023-02-06 10:41:16,452 - mmdeploy - INFO - torchscript custom ops:	Available
2023-02-06 10:41:16,480 - mmdeploy - INFO - rknn-toolkit:	None
2023-02-06 10:41:16,480 - mmdeploy - INFO - rknn2-toolkit:	None
2023-02-06 10:41:16,483 - mmdeploy - INFO - ascend:	None
2023-02-06 10:41:17,036 - mmdeploy - INFO - coreml:	6.0b1
2023-02-06 10:41:18,016 - mmdeploy - INFO - tvm:	0.10.dev714+gd4bf9ecf5=

The log2 converter was added by ... me in coreml. It should be ignored in the latest version; we will fix it.
coreml_nms is a PyTorch custom op that does nothing except map the NMS in Core ML to the one used in MMDetection. I guess the latest coreml registers an op with the same name. You can rename the op in the following places:

TORCH_LIBRARY_IMPL(mmdeploy, CPU, m) { m.impl("coreml_nms", coreml_nms_cpu); }

"coreml_nms(Tensor boxes, Tensor scores, float iou_threshold, "

coreml_nms = torch.ops.mmdeploy.coreml_nms

def coreml_nms(context, node):

See if the conversion works, or just downgrade coreml.
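
As an alternative to renaming the op or downgrading, a guard around the converter registration would also avoid the "already registered" errors. A minimal sketch, assuming coremltools 6.x keeps its op table in the dict _TORCH_OPS_REGISTRY (this is not the actual mmdeploy fix):

# Minimal sketch of defensive converter registration, assuming coremltools 6.x
# exposes its op table as the dict _TORCH_OPS_REGISTRY. The idea: skip
# converters that coremltools now ships itself (e.g. log2) instead of
# commenting them out, renaming the op, or downgrading.
from coremltools.converters.mil.frontend.torch.torch_op_registry import (
    _TORCH_OPS_REGISTRY, register_torch_op)


def register_torch_op_if_missing(func):
    """Register a torch->MIL converter only if the name is still free."""
    if func.__name__ in _TORCH_OPS_REGISTRY:
        return func  # coremltools already provides this converter
    return register_torch_op(func)


# Hypothetical usage: decorate the converters in mmdeploy/backend/coreml/ops.py
# with register_torch_op_if_missing instead of register_torch_op.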

@JohannesBauer97 commented:

I'll give it a try as soon as I find the time for it, probably within this week.
I'll likely create a separate issue so as not to blow up your PR here (but link the two to each other).

Thanks so far
