Image decay on the new A12 Bionic chip #8

Open
imxieyi opened this issue Oct 15, 2018 · 6 comments

Comments

@imxieyi (Owner) commented Oct 15, 2018

I have received several reports that the image quality will decay on iPhone XS. The following is an example (pay attention to the color gradient):
Before processing:
[image]
After processing:
[image]
Result on iPhone 8 for comparison:
[image]
The issue likely comes from the new neural engine on the A12 chip. If this can be confirmed (see the test sketch below this comment), the neural engine should be disabled in this app. It might be caused by one of the following:

  1. Float16 numbers are cast to Float8 or lower precision before being sent to the neural engine.
  2. The model is somehow "optimized" by the Core ML framework for better performance on the neural engine.

If this cannot be solved, I will consider rewriting the whole library in MPSNN, which can only run on the GPU. That would also significantly increase the size of the app.

Since I don't have an iPhone XS/XS Max, I cannot debug this problem for now. Thanks in advance to anyone who can help investigate this issue!
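
A minimal way to check the hypothesis on an A12 device might look like the sketch below. This is not the app's actual code; `modelURL` and `input` are hypothetical placeholders for the compiled model and a sample input.

```swift
import CoreML

// Run the same model twice on an A12 device: once with the neural
// engine allowed (.all) and once restricted to the CPU (.cpuOnly),
// then compare the outputs. Requires iOS 12 for MLModelConfiguration.
func predict(using units: MLComputeUnits, modelURL: URL,
             input: MLFeatureProvider) throws -> MLFeatureProvider {
    let config = MLModelConfiguration()
    config.computeUnits = units
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    return try model.prediction(from: input)
}

// If the .all output shows banding on color gradients that the
// .cpuOnly output does not, the neural engine is the likely culprit.
```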

@sfslowfoodie commented Jan 1, 2019

I can confirm the image quality issue on my XS 256GB. Artifacts are visible on color gradients; the final images are unusable.
Running in CPU-only mode, quality is good as expected, but the XS takes about twice as long to complete a task as a 7 Plus.
I am available to try out new builds.

@imxieyi (Owner) commented Jan 1, 2019

It turns out that setting computeUnits in MLModelConfiguration to cpuAndGPU can bypass the neural engine for now.

The MPSNN implementation is partially finished. It will be released if it is faster or produces better quality than Core ML in benchmarks.

If I can get my hands on a device with an A12 chip in the future, I still want to try fixing the neural engine path, since it should be much faster than even GPU mode.
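
For reference, a minimal sketch of this workaround, assuming iOS 12 and a hypothetical `modelURL` pointing at the compiled model:

```swift
import CoreML

// Restrict Core ML to CPU and GPU so the neural engine is never used.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU   // .all would permit the neural engine

// modelURL is a hypothetical URL to the compiled .mlmodelc bundle.
let model = try MLModel(contentsOf: modelURL, configuration: config)
```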

@sfslowfoodie commented Jan 8, 2019

With regard to the waifu2x app in the App Store (which is what I am testing): does, or will, this app rely on both CPU and GPU to run the Core ML model?

@imxieyi (Owner) commented Jan 8, 2019

> With regard to the waifu2x app in the App Store (which is what I am testing): does, or will, this app rely on both CPU and GPU to run the Core ML model?

Currently it only has Core ML as a backend, which relies on the GPU. If the GPU is not supported, it falls back to the CPU.

@sfslowfoodie commented Jan 8, 2019

What bugs me (what I don't understand, since I am not an expert on this matter) is why performance on the iPhone 7 Plus (A10 SoC, no neural engine) is still ~2x faster than on the XS, excluding neural engine factors.
Running both iPhone models in CPU-only mode, I see the A12's benefits (~35% faster CPU than the A10).
With CPU-only disabled, however, the latest XS app still generates artifacts, implying it is still trying to use the neural engine instead of relying on the GPU (as the A10 does).

@imxieyi (Owner) commented Jan 8, 2019

Weird... model.configuration.computeUnits should have been set to .cpuAndGPU, so it seems this fix is ineffective. I will try to get my hands on a device with an A12 chip. An MPSNN update will also be considered before then.
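
One possible explanation, based on the Core ML API rather than anything confirmed here: MLModel's configuration property is read-only and only reports the configuration the model was loaded with, so mutating it after loading has no effect. A sketch of the difference (`modelURL` is again a hypothetical placeholder):

```swift
import CoreML

// Ineffective: model.configuration only reports the configuration the
// model was loaded with; changing it does not reconfigure the model.
// model.configuration.computeUnits = .cpuAndGPU

// Effective: pick the compute units before loading the model.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU
let model = try MLModel(contentsOf: modelURL, configuration: config)
```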
