[Bug] Cannot use Core ML conversion pipeline on versions >= 0.11.0 #1652
Thanks for the notification, and sorry for the trouble.
Same environment as before, but using your patch I get the following stack trace:

```
2023-01-13 13:35:36,529 - mmdeploy - INFO - Save PyTorch model: /Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/end2end.pt.
2023-01-13 13:35:36,620 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2torchscript.torch2torchscript
2023-01-13 13:35:36,759 - mmdeploy - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in main process
Traceback (most recent call last):
  File "libs/mmdeploy/tools/deploy.py", line 308, in <module>
    main()
  File "libs/mmdeploy/tools/deploy.py", line 232, in main
    backend_files = to_backend(
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/apis/utils/utils.py", line 95, in to_backend
    return backend_mgr.to_backend(
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/backend/coreml/backend_manager.py", line 83, in to_backend
    from .torchscript2coreml import from_torchscript, get_model_suffix
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/backend/coreml/__init__.py", line 13, in <module>
    from .torchscript2coreml import get_model_suffix
  File "/Users/typically/Workspace/vbti/PlantMorphology/tools/deploy/libs/mmdeploy/mmdeploy/backend/coreml/torchscript2coreml.py", line 52, in <module>
    input_names: list[str],
TypeError: 'type' object is not subscriptable
```
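The `TypeError: 'type' object is not subscriptable` on `input_names: list[str]` is a Python version issue: subscripting the builtin `list` in annotations (PEP 585) only works at runtime on Python 3.9+, and annotations in a `def` line are evaluated when the module is imported. A minimal sketch of the two portable workarounds (the function name here is hypothetical, not mmdeploy's actual signature):

```python
from typing import List

# Portable on Python 3.5+: use typing.List instead of the builtin list.
def collect_inputs(input_names: List[str]) -> int:
    """Hypothetical stand-in for a signature like the one in torchscript2coreml.py."""
    return len(input_names)

print(collect_inputs(["input", "scale"]))  # -> 2

# Alternative, as the first statement of the module:
#   from __future__ import annotations
# This defers annotation evaluation, so `list[str]` is never executed at import time.
```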
@Typiqally Updated, please try again.
Thank you @grimoire, it works now. I haven't tested the model completely, but the visualization from the deployment shows that it is working as expected.
Hi @grimoire, I'm running everything on Google Colab (link):

```shell
!pip install coremltools
!pip install opencv-python
!pip3 install openmim
!mim install mmcv-full

# clone mmdeploy to get the deployment config. `--recursive` is not necessary
!git clone https://github.com/open-mmlab/mmdeploy.git
%cd mmdeploy
!pip install -v -e .
%cd ..

# clone the mmdetection repo. We have to use its config file to build the PyTorch nn module
!git clone https://github.com/open-mmlab/mmdetection.git
%cd mmdetection
!pip install -v -e .
%cd ..

# download the checkpoint
!wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r18_fpn_1x_coco/retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth

# run the command to start model conversion
!python mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmdet/detection/detection_coreml_static-800x1344.py \
    mmdetection/configs/retinanet/retinanet_r18_fpn_1x_coco.py \
    checkpoints/retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth \
    mmdetection/demo/demo.jpg \
    --work-dir mmdeploy_model/retina \
    --device cpu \
    --dump-info
```

Error: (collapsed log)

check_env: (collapsed output)
@JohannesBauer97 Try commenting out the function below: mmdeploy/mmdeploy/backend/coreml/ops.py, line 28 (commit b85f341).
// Update: and pytorch==1.12.1. I got the same error messages that I posted in the original comment below.
// Original:
The log2 converter was added to coreml by ... me. It should be ignored in the latest version. We will fix it.
and
See if the conversion works. Or just downgrade coremltools.
I'll give it a try as soon as I get the time for it, I guess within this week. Thanks so far |
Checklist
Describe the bug
I am attempting to update MMDeploy from version 0.10.0 to the latest version, 0.12.0. However, this breaks the Core ML conversion pipeline with an unknown error (see the stack trace section). I'm using exactly the same dependencies that I used with version 0.10.0, which worked perfectly.
I've also tested version 0.11.0 and can conclude that every version after 0.10.0 breaks the Core ML conversion pipeline. I'm not sure exactly which commit caused this issue, but I believe the breaking change landed somewhere between versions 0.10.0 and 0.11.0.
It is interesting to note that the check_env.py script does not report Core ML as available, even though the coremltools package is installed and functional.
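This would be consistent with check_env reporting a backend as available only when its package imports cleanly: an import-time failure (like the `list[str]` TypeError in the traceback) then makes an installed, functional package look unavailable. A hypothetical sketch of such a probe, not mmdeploy's actual code:

```python
import importlib

def backend_available(module_name: str) -> bool:
    """True only if the module imports without raising; any
    import-time error makes an installed package look unavailable."""
    try:
        importlib.import_module(module_name)
        return True
    except Exception:
        return False

print(backend_available("json"))               # stdlib module: True
print(backend_available("no_such_backend_x"))  # missing module: False
```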
Reproduction
Environment
Error traceback