Get fuzzy borders if an input image has an alpha channel #149

Open
Rongronggg9 opened this issue Apr 29, 2021 · 3 comments
Labels: bug (Something isn't working)

Comments

@Rongronggg9

| Original Image | waifu2x-ncnn-vulkan | waifu2x-caffe |
| --- | --- | --- |
| 1x | / | / |
| 2x [cunet, denoise-level 3, TTA] | / | |
| 2x*2x [cunet, denoise-level 3, TTA] | / | |

| Original Image (alpha channel deleted) | waifu2x-ncnn-vulkan |
| --- | --- |
| 1x | / |
| 2x [cunet, denoise-level 3, TTA] | / |
| 2x*2x [cunet, denoise-level 3, TTA] | / |

I have also tested [cunet, denoise-level -1, no TTA] and got the same fuzzy borders.

@nihui added the bug label on Apr 29, 2021
@Rongronggg9 (Author) commented Jul 24, 2022

Still reproducible on release 20220419:

-s 2 -n 3 -x: (screenshot: 2x3n)

-s 2 -n 3 -x*2: (screenshot: 2x3n2x3n)

@nagadomi

This is not a request, just information.

waifu2x (web) and waifu2x-caffe use a preprocessing method, alpha2_preprocess, that pads the borders of the alpha channel:

https://github.com/nagadomi/nunif/blob/eab6952d93e85951ed4e4cff30cd26c09e1dbb63/nunif/utils/render.py#L35
https://github.com/lltcggie/waifu2x-caffe/blob/3812b90f68b33256f040e6c8eafaacebd51490a0/common/stImage.cpp#L167

The alpha channel is also upscaled 2x by the scale2.0 model.
This takes more processing time, but it gives better results at the alpha borders, e.g. for sprite images with an alpha channel in game development.
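
For readers unfamiliar with the technique: the rough idea of this kind of border padding is to bleed the colors of opaque pixels into fully transparent neighbours before upscaling, so the model does not blend edge pixels against undefined (often black) RGB values. Below is a minimal sketch of that idea in Python with numpy and Pillow; pad_alpha_borders is a hypothetical helper written for illustration, not the actual alpha2_preprocess from nunif or waifu2x-caffe.

import numpy as np
from PIL import Image

def pad_alpha_borders(path_in, path_out, iterations=4):
    # Hypothetical helper for illustration only, not part of any waifu2x project.
    img = np.asarray(Image.open(path_in).convert("RGBA")).astype(np.float32)
    rgb, alpha = img[..., :3], img[..., 3]
    known = alpha > 0  # pixels whose color is meaningful

    for _ in range(iterations):
        unknown = ~known
        if not unknown.any():
            break
        # Average the colors of known 4-neighbours into each unknown pixel.
        acc = np.zeros_like(rgb)
        cnt = np.zeros(alpha.shape, dtype=np.float32)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb_known = np.roll(known, (dy, dx), axis=(0, 1))
            acc += np.roll(rgb, (dy, dx), axis=(0, 1)) * nb_known[..., None]
            cnt += nb_known
        fill = unknown & (cnt > 0)
        rgb[fill] = acc[fill] / cnt[fill][:, None]
        known |= fill

    Image.fromarray(np.dstack([rgb, alpha]).astype(np.uint8), "RGBA").save(path_out)

The real implementations linked above (render.py / stImage.cpp) differ in the details and, per the comment, also upscale the alpha channel itself with the scale2.0 model.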

@chrisjbillington

To work around this issue, I've been compositing the original image over two differently-coloured solid backgrounds, upscaling those two alpha-free images separately, then doing some pixel maths on the upscaled images to recover the alpha channel and invert the composition to obtain an upscaled image with alpha. E.g. with ImageMagick:

# Compose on two solid backgrounds:
convert orig.png -background green1 -flatten green.png
convert orig.png -background magenta -flatten magenta.png

# Upscale:
waifu2x-ncnn-vulkan -s 2 -n 3 -x -i green.png -o green-2x.png
waifu2x-ncnn-vulkan -s 2 -n 3 -x -i magenta.png -o magenta-2x.png

# Extract alpha
magick green-2x.png magenta-2x.png -compose difference -composite -separate \
  -evaluate-sequence max -auto-level -negate alpha-2x.png

# Invert composition of both 2x images:
magick green-2x.png alpha-2x.png -alpha Off \
  -fx "v==0 ? 0 : u/v - green1/v + green1" alpha-2x.png -compose Copy_Opacity \
  -composite green-2x-decomposed.png

magick magenta-2x.png alpha-2x.png -alpha Off \
  -fx "v==0 ? 0 : u/v - magenta/v + magenta" alpha-2x.png -compose Copy_Opacity \
  -composite magenta-2x-decomposed.png

# Average the two decomposed images together
convert -average green-2x-decomposed.png magenta-2x-decomposed.png result.png

There is a slight magenta and green tinge around some of the semi-transparent edges, but averaging the two decomposed images together instead of using just one of them improves it, and it's not too bad.

Obviously this takes about twice as long since it has to upscale the image twice, but it's what I'm using for now.
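
For reference, the pixel maths behind those commands can be written out as a short numpy/Pillow sketch. recover_alpha below is a hypothetical helper for illustration, not part of waifu2x-ncnn-vulkan and not an exact reimplementation of the -fx pipeline above. It relies on the compositing identity C = a*F + (1-a)*B: with two known solid backgrounds Bg (green1, i.e. pure green) and Bm (magenta), the per-pixel difference of the two upscaled composites is (1-a)*|Bg - Bm|, which yields a, and then each composite can be inverted to recover F.

import numpy as np
from PIL import Image

def recover_alpha(green_path, magenta_path, out_path,
                  bg_green=(0.0, 1.0, 0.0), bg_magenta=(1.0, 0.0, 1.0)):
    # Assumes both inputs are the same size (they are both 2x upscales).
    g = np.asarray(Image.open(green_path).convert("RGB")).astype(np.float64) / 255
    m = np.asarray(Image.open(magenta_path).convert("RGB")).astype(np.float64) / 255
    bg_g, bg_m = np.array(bg_green), np.array(bg_magenta)

    # |Cg - Cm| = (1 - a) * |Bg - Bm| per channel; use the channel with the
    # largest background separation, then invert to get alpha.
    denom = np.abs(bg_g - bg_m)
    one_minus_a = (np.abs(g - m) / np.where(denom > 0, denom, 1)).max(axis=-1)
    a = np.clip(1.0 - one_minus_a, 0.0, 1.0)

    # Invert C = a*F + (1-a)*B against each background and average the two
    # estimates of F (fully transparent pixels get an arbitrary color).
    safe_a = np.where(a > 0, a, 1)[..., None]
    f_g = (g - (1 - a)[..., None] * bg_g) / safe_a
    f_m = (m - (1 - a)[..., None] * bg_m) / safe_a
    f = np.clip((f_g + f_m) / 2, 0.0, 1.0)

    rgba = np.dstack([f, a])
    Image.fromarray((rgba * 255 + 0.5).astype(np.uint8), "RGBA").save(out_path)

recover_alpha("green-2x.png", "magenta-2x.png", "result.png")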

Attached images: orig.png, green.png, magenta.png, green-2x.png, magenta-2x.png, green-2x-decomposed.png, magenta-2x-decomposed.png, result.png
