Error converting from PyTorch to CoreML #751
Thanks @lp55 for reporting the issue. We are looking into it. In the meantime, can you please try converting this model with …
Hi, on the example code I gave, using pytorch==1.5 it worked (well, I didn't actually test the produced model, but the conversion process completed), but on my trained network I got the following error: RuntimeError: PyTorch convert function for op upsample_nearest2d not implemented. What upsample op is available for pytorch conversion? I can change my training code to match it.
I read the coremltools conversion code and it seems bilinear upsampling is supported. I'll train with that and report the results afterwards.
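For the bilinear route, a minimal sketch (not from this thread — `UpBlock` and all shapes are made up for illustration): swapping nearest-neighbor upsampling for `nn.Upsample` in bilinear mode means tracing emits `upsample_bilinear2d` instead of the unimplemented `upsample_nearest2d`.

```python
import torch
import torch.nn as nn

# Hypothetical decoder block: uses bilinear upsampling, which the
# converter's MIL frontend handles, instead of nearest-neighbor
# (the unsupported upsample_nearest2d op).
class UpBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))

block = UpBlock(8, 4).eval()
x = torch.rand(1, 8, 16, 16)
with torch.no_grad():
    y = block(x)
print(y.shape)  # spatial dims doubled: (1, 4, 32, 32)
```

The trade-off is a slightly different interpolation result at inference, but the network can simply be retrained (or fine-tuned) with the bilinear blocks.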
Hi @DawerG, so here's the output after using the current master of coremltools: Converting Frontend ==> MIL Ops: 24%|██████████▋ | 183/750 [00:00<00:01, 435.77 ops/s] WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision! I used float16 when possible during training to enable larger batch sizes, but I don't know why it's trying to convert float16 to i8 during this conversion process. Also, why is it producing these incompatible shapes?
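One way to sidestep float16 constants entirely (my own suggestion, not from this thread — `TinyNet` is a stand-in for the real network): cast the mixed-precision weights back to float32 before tracing, so the traced graph carries no float16 values for the converter to down-cast.

```python
import torch
import torch.nn as nn

# Stand-in for a network trained with mixed precision.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = TinyNet().half()       # weights in float16, as after fp16 training
model = model.float().eval()   # cast back to float32 before tracing
example = torch.rand(1, 3, 32, 32)
traced = torch.jit.trace(model, example)
print(traced(example).dtype)   # torch.float32
```

Mixed precision only matters for training throughput; the exported inference model loses nothing by being float32 (and Core ML can quantize afterwards if needed).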
I'm trying to convert a U-Net-like model and I'm getting both …
I have a similar error converting a PyTorch CNN/GAN model (https://github.com/SystemErrorWang/FacialCartoonization), even using 1.5.1. With tracing I get basically the same ValueError: Incompatible dim 2 in shapes (1, 32, -128, -128) vs. (1, 32, 128, 128) when converting the frontend to MIL. With scripting I instead get this other RuntimeError: …
Just tested with PyTorch 1.6 and coremltools 4.0b3 and I got exactly the same errors as above with both tracing and scripting.
Same thing for me: with PyTorch 1.6 and coremltools 4.0b3 I still get the same error.
When I use the original UNet I get a different error. Here's some example code:
At first I got an error complaining about the numpy.intc type. To fix that I changed coremltools\converters\mil\mil\types\type_mapping.py:201 from this:
to this:
After that I ran it again and got this error: RuntimeError: PyTorch convert function for op 'constant_pad_nd' not implemented.
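One possible workaround for the unimplemented `constant_pad_nd` (my own sketch, not from this thread — `zero_pad2d` is a hypothetical helper): express zero padding with `torch.cat` against zero tensors, so tracing emits the supported `cat` op instead of `constant_pad_nd`.

```python
import torch

def zero_pad2d(x, left, right, top, bottom):
    """Zero-pad an NCHW tensor without F.pad, so tracing emits `cat`
    rather than the unsupported `constant_pad_nd`."""
    n, c, h, w = x.shape
    # Pad height (dim 2) first, then width (dim 3).
    x = torch.cat([x.new_zeros(n, c, top, w), x,
                   x.new_zeros(n, c, bottom, w)], dim=2)
    h2 = h + top + bottom
    x = torch.cat([x.new_zeros(n, c, h2, left), x,
                   x.new_zeros(n, c, h2, right)], dim=3)
    return x

x = torch.rand(1, 2, 4, 4)
y = zero_pad2d(x, 1, 2, 3, 0)
print(y.shape)  # torch.Size([1, 2, 7, 7])
```

It produces the same result as `F.pad(x, (left, right, top, bottom))` with constant zero padding, at the cost of a couple of extra ops in the graph.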
Same issue. Any fix?
Are you facing the JIT pass issue? PyTorch 1.6 has been supported since …
PyTorch 1.6 has been supported in …
I have the same issue, PyTorch convert function for op 'constant_pad_nd' not implemented, when converting an EfficientNet-Lite model with pytorch 1.6.0.
@lp55 @ZackPashkin @1duo
I'm trying to convert a UNet model from pytorch to coreml and I'm getting the following error:
I'm using pytorch nightly and coremltools 4.0b1 on Windows. Here's some simple code to test this:
Any ideas why this code gets that error? There are no special layers, and UNet ops are pretty standard.
Oh yeah, if I try to set the outputs parameter I get this exception: ValueError: outputs must not be specified for PyTorch. Any idea when this will be enabled?
I appreciate any help.