waifu2x on iOS
- Xcode 9+
- iOS 11+
Images in the RGB color space work fine. Images in other color spaces should be converted to RGB before processing; otherwise the output image will be broken.
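As a hedged illustration (using Pillow rather than the app's actual Swift code), the pre-conversion step might look like this:

```python
# Illustrative sketch: make sure an image is in the RGB color space before
# feeding it to waifu2x. Grayscale/palette/CMYK images get converted;
# RGB/RGBA images pass through unchanged.
from PIL import Image


def ensure_rgb(img: Image.Image) -> Image.Image:
    """Return an RGB(A) version of img, converting other color modes."""
    if img.mode in ('RGB', 'RGBA'):
        return img
    return img.convert('RGB')
```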
The alpha channel is scaled using bicubic interpolation. Processing generally runs on the GPU, and automatically falls back to the CPU if the image is too large for Metal to handle. The CPU path is extremely slow, so hitting this fallback is best avoided.
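The alpha-channel handling can be sketched as follows (a Pillow illustration, not the app's Swift implementation; the function name and 2x factor are assumptions):

```python
# Illustrative sketch: upscale only the alpha channel with bicubic
# interpolation, as described above. The RGB channels are assumed to be
# upscaled separately by the neural network.
from PIL import Image


def upscale_alpha(rgba: Image.Image, factor: int = 2) -> Image.Image:
    """Return the alpha channel of rgba resized by factor using bicubic."""
    alpha = rgba.getchannel('A')
    new_size = (alpha.width * factor, alpha.height * factor)
    return alpha.resize(new_size, Image.BICUBIC)
```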
This repository includes all the models converted from waifu2x-caffe. If you want to dig into Core ML, it is recommended that you convert them yourself.
You can use the same method described in MobileNet-CoreML. Do not specify any input or output layers in the Python script.
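A conversion sketch using coremltools' legacy Caffe converter might look like the following; the file names are placeholders, not scripts shipped with this repository:

```python
# Hypothetical sketch: convert a Caffe model to Core ML with coremltools'
# Caffe converter (available in coremltools 3.x and earlier).
# File names are placeholders for the actual waifu2x-caffe model files.
import coremltools

# Note: no input or output layers are specified, as recommended above.
model = coremltools.converters.caffe.convert(
    ('model.caffemodel', 'deploy.prototxt')
)
model.save('scale2.0x_model.mlmodel')
```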
A working model should have input and output like the following example:
- iPhone6s - waifu2x-ios on iPhone 6s with iOS 11.1
- iPhone8 - waifu2x-ios on iPhone 8 with iOS 11.0
- iPad - waifu2x-ios on iPad Pro 10.5 with iOS 11.1
- PC - waifu2x-caffe on Windows 10 16278 with GTX 960M
All tests run the denoise level 2 with scale 2x model on anime-style images from Pixiv.
|Optimization step|Time (s)|Memory (GB)|
|---|---|---|
|Before using upconv models|141.7|1.86|
|After using upconv models|63.6|1.28|
|After adding pipeline on output|56.8|1.28|
|After adding pipeline on prediction|49.2|0.38|
|Pure MPSCNN implementation*|29.6|1.06|
*: With a crop size of 384 and double command buffers.
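The crop size implies the image is processed in fixed-size tiles. A minimal sketch of the tiling arithmetic (plain Python; the function name is illustrative, and real implementations add overlap/padding to hide seams):

```python
# Illustrative sketch: compute top-left origins of fixed-size crops that
# cover an image. With a crop size of 384, a large image is split into
# tiles that each fit within Metal's processing limits.
def tile_origins(width: int, height: int, crop: int = 384):
    """Return (x, y) origins of crop-sized tiles covering width x height."""
    return [(x, y)
            for y in range(0, height, crop)
            for x in range(0, width, crop)]
```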