
Question about 16 bit rendering #25

Closed
thomasp85 opened this issue Mar 28, 2019 · 13 comments
@thomasp85

First of all, sorry for the noise on this channel. It is not easy to find people with AGG experience, so I'll resort to asking a non-Context Free question here. It goes without saying that you can simply close this if you do not have time to answer.

I'm currently in the process of building a graphics device for the R programming language, based on AGG. It is going well and everything works as long as I stick to an internal 8-bit colour representation. However, if I change to 16-bit I begin to get weird blending artefacts for non-black colours, and I can't figure out why they appear:

The following is rendered correctly in 8-bit:

[Image: Rplot8bit]

This is the exact same render, but in 16-bit:

[Image: Rplot16bit]

As can be seen, the colours are fine in solid areas, but once blending of the antialiased parts happens, everything seems to get darker...

My development lives at https://github.com/thomasp85/ragg - if you have any input, I'll be forever grateful.

@MtnViewJohn
Owner

I can't replicate the Ragg artifacts in Context Free. I did notice a slight difference between how the two programs use AGG. Context Free draws using agg::render_scanlines() and the rendering class agg::renderer_scanline_aa_solid<agg::renderer_base<pixel_fmt>>. Ragg uses agg::render_scanlines_aa_solid() and the rendering class agg::renderer_base<pixfmt_type>. I modified Context Free to do the same but it didn't result in aliasing artifacts.

It is interesting that 'mpg' and 'gear' do not appear to have artifacts, but the numeric text does. You should draw them in gray to confirm this. It seems to me that black colors have artifacts too. But the artifacts are also black, so they are less obvious.

The images that you provide have a lot of things in them. Can you show the issue if R draws a single gray rectangle? I think it would be useful to see the simplest image that has artifacts.

@MtnViewJohn
Owner

I just noticed a major difference between Context Free and Ragg: Context Free uses colors pre-multiplied with alpha. Blending premultiplied colors is much more straightforward than blending non-premultiplied colors. Ragg uses pixel formats that assume non-premultiplied inputs and a premultiplied rendering buffer, which I don't think your buffer actually is. I think that you should either use the premultiplied pixel formats or the plain pixel formats.

I suggest that you change Ragg to use only the premultiplied colors pixel formats: agg::pixfmt_rgbXX/agg::pixfmt_rgbaXX. Premultiplied color is the best way to do accurately blended graphics. If the raster images that you are drawing have an alpha channel and are not premultiplied then you will need to premultiply them. PNG files use non-premultiplied colors. You will need to demultiply each row and write it out to the PNG file. You can't just write out the whole rendering buffer like you do now. See how Context Free does this.
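The premultiply/demultiply round trip described above can be sketched as follows. These are hypothetical helper functions, not AGG's API, assuming 8-bit channels and the conventional (x * a + 127) / 255 rounding:

```cpp
#include <cstdint>

// Hypothetical helpers (not AGG's API): premultiply a plain 8-bit channel
// by its alpha before rendering, and demultiply it back before writing a
// PNG, which expects non-premultiplied colors.
inline uint8_t premultiply(uint8_t channel, uint8_t alpha)
{
    // Round-to-nearest fixed-point scale by alpha / 255.
    return static_cast<uint8_t>((channel * alpha + 127) / 255);
}

inline uint8_t demultiply(uint8_t channel, uint8_t alpha)
{
    if (alpha == 0) return 0;  // fully transparent: colour is undefined
    unsigned v = (channel * 255u + alpha / 2) / alpha;
    return static_cast<uint8_t>(v > 255 ? 255 : v);  // clamp rounding overshoot
}
```

Demultiplying each row of the rendering buffer through something like this before handing it to the PNG encoder is the step John describes; the buffer itself stays premultiplied.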

@thomasp85
Author

Thank you so much for taking the time to look at this. Your pointing me towards premultiplying led me to this post, which describes the type of artefacts I see:

Another tangentially related problem is that some image programs set transparent pixels to all-zeroes on all channels. Some compressed texture formats require this as well. This causes ugly dark halos on partially transparent objects when they get linear filtered; the filter will lerp between transparent black pixels and the correct colour, making it get darker and more transparent at the same time. Not good.

From http://amindforeverprogramming.blogspot.com/2013/07/why-alpha-premultiplied-colour-blending.html?m=1

I already tried switching to premultiplied values before writing here, but I now believe I did so incorrectly, based on your reply and the quoted article. I’ll give it a stab again tonight and see if I can get it to work properly
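The dark-halo mechanism from the quoted post can be shown with plain integer arithmetic. This is a sketch, assuming 8-bit channels and a simple two-pixel average standing in for the linear filter:

```cpp
#include <cstdint>

// Average a solid-white pixel (255, alpha 255) with a transparent pixel
// stored as all zeroes, then composite the result "over" a white background.

// Non-premultiplied: channels are averaged independently of alpha, so the
// transparent black leaks into the colour channel.
inline int nonpremul_over_white()
{
    int color = (255 + 0) / 2;  // 127: mid-grey, black leaked in
    int alpha = (255 + 0) / 2;  // 127
    // out = src * a + bg * (1 - a)
    return (color * alpha + 255 * (255 - alpha)) / 255;  // darker than white
}

// Premultiplied: the colour channel already carries alpha, so "over" is
// just src + bg * (1 - a), and the result stays white.
inline int premul_over_white()
{
    int color = (255 + 0) / 2;  // 127 (already premultiplied)
    int alpha = (255 + 0) / 2;  // 127
    return color + 255 * (255 - alpha) / 255;
}
```

The non-premultiplied path lands at 191 instead of 255, which is exactly the "darker and more transparent at the same time" halo the post describes.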

@thomasp85
Author

Ok, so I have tried every way known to me to get alpha blending to work (because it is a general alpha blending problem, not something to do with antialiasing specifically).

Drawing an opaque rectangle (#4682B4FF -> steelblue) on a white background yields the expected result:

[Image: Rplot001]

Shifting the transparency down a single step (#4682B4FE) gives something quite different:

[Image: Rplot001]

Taking it down some more (#4682B4FA) makes things even weirder:

[Image: Rplot001]

I've switched the pixel formats to premultiplying, and I demultiply before writing the PNG. I've also tried changing the rendering pipeline to the one you use in Context Free, calling agg::render_scanlines() etc., without any effect on the output...

There is definitely something wonky going on in the blending step, but I'm unable to figure out whether it is anywhere in my code (and why it would only kick in in 16-bit mode).

@MtnViewJohn
Owner

MtnViewJohn commented Mar 31, 2019

I did notice an error in AGG that may kick in at 16 bit. In agg_color_rgba.h there is a key function used in blending:

    // Fixed-point multiply, exact over int16u.
    static AGG_INLINE value_type multiply(value_type a, value_type b) 
    {
        calc_type t = a * b + base_MSB;
        return value_type(((t >> base_shift) + t) >> base_shift);
    }

a and b are 16-bit unsigned ints, and the math is supposed to be done in 32-bit unsigned ints. I checked this function against all 4 billion inputs when I wrote it several years ago. But technically the spec allows doing a*b as 16-bit unsigned ints, and a modern compiler might vectorize the code to do just that. The code should really be

    // Fixed-point multiply, exact over int16u.
    static AGG_INLINE value_type multiply(value_type a, value_type b) 
    {
        calc_type t = (calc_type)a * (calc_type)b + base_MSB;
        return value_type(((t >> base_shift) + t) >> base_shift);
    }

Look for all of the multiply() functions in agg_color_XXX.h and apply this change.
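The corrected routine can be checked in isolation. Below is a standalone copy, assuming the 16-bit instantiation (value_type = uint16_t, calc_type = uint32_t, base_shift = 16, base_MSB = 1 << 15); it approximates round(a * b / 65535) exactly over the 16-bit range:

```cpp
#include <cstdint>

// Standalone copy of the corrected fixed-point multiply for 16-bit
// channels: computes round(a * b / 65535) using the identity that
// ((t >> 16) + t) >> 16 approximates t / 65535 for t = a*b + 2^15.
inline uint16_t multiply16(uint16_t a, uint16_t b)
{
    uint32_t t = uint32_t(a) * uint32_t(b) + (1u << 15);  // explicit upcasts
    return uint16_t(((t >> 16) + t) >> 16);
}
```

The explicit uint32_t casts are the fix: with them, the intermediate product cannot wrap at 16 bits regardless of how the compiler vectorizes the loop.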

@MtnViewJohn
Owner

Actually, there are quite a few missing upcasts in agg_color_XXX.h

@thomasp85
Author

Thx - will try!

@thomasp85
Author

It seems surprising that this has gone unnoticed until now if it is indeed the problem

@thomasp85
Author

Upcasting multiply didn't solve the issue, unfortunately.

@thomasp85
Author

If I shortcut all the scanline code and simply use the renderer to fill the colour over a white background, I get the same results as filling a rectangle, so this is definitely something that happens at the pixel format layer...

@MtnViewJohn
Owner

MtnViewJohn commented Apr 1, 2019

If the problem is with anti-aliasing of 16-bit colors then it has to do with arithmetic involving 16-bit color channels and 8-bit cover values. The artifacts don't occur with 8-bit color because the color channels and cover values are both 8-bit. The reason missing upcasts may have gone unnoticed is that without auto-vectorization by the compiler, the arithmetic would all be done in 32 or 64 bit registers.

I noticed some missing upcasts involving cover values. I think that I fixed them all in the attached files.

aggcolor.zip
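One common way to combine a 16-bit channel with an 8-bit cover (a sketch of the technique, not necessarily AGG's exact formula) is to widen the cover by byte replication and reuse the exact fixed-point multiply; the uint32_t upcasts are what the fix above guarantees:

```cpp
#include <cstdint>

// Apply an 8-bit coverage value (0..255) to a 16-bit channel: widen the
// cover to 16 bits by byte replication (0xAB -> 0xABAB, so 255 maps to
// 65535), then scale with the exact round(c * cover16 / 65535) multiply.
inline uint16_t apply_cover(uint16_t channel, uint8_t cover)
{
    uint16_t cover16 = uint16_t((uint16_t(cover) << 8) | cover);
    uint32_t t = uint32_t(channel) * uint32_t(cover16) + (1u << 15);
    return uint16_t(((t >> 16) + t) >> 16);
}
```

Without the upcasts, the channel-times-cover product is exactly the mixed 16-bit/8-bit arithmetic where a vectorized 16-bit multiply would silently wrap.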

@thomasp85
Author

All this time, and it ends up simply being me not converting properly between little and big endian... Since the colours were provided as 8-bit, the endianness of the conversion didn't matter when the colour was pure, but once antialiasing and blending kicked in it mattered a lot (obviously)... I just didn't think to look at my conversion because it looked so much like a blending issue...

Sorry for taking your time with this. I greatly appreciate your help and generosity!
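Why the bug hid in solid areas can be illustrated with a small sketch (hypothetical helper names): widening an 8-bit channel by byte replication produces a 16-bit value that is its own byte swap, so an endianness mistake is invisible for pure input colours and only shows up on blended intermediate values:

```cpp
#include <cstdint>

// Widen an 8-bit channel to 16 bits by byte replication: 0xAB -> 0xABAB.
inline uint16_t widen(uint8_t c)  { return uint16_t((uint16_t(c) << 8) | c); }

// Swap the two bytes of a 16-bit value (a little/big endian mix-up).
inline uint16_t bswap(uint16_t v) { return uint16_t((v << 8) | (v >> 8)); }
```

A replicated value like 0xABAB survives the swap unchanged, so solid fills render correctly; but a blended intermediate such as 0x80C0 becomes 0xC080, which is exactly why the artefacts only appeared once antialiasing produced in-between channel values.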

@MtnViewJohn
Owner

Thanks for closing the issue. It turns out I was wrong about the need for upcasts. The C++ spec says that integer types that are smaller than int are always promoted to int or unsigned int before arithmetic operations.
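The promotion rule can be sketched as below, with one caveat the thread doesn't mention: promotion is to *signed* int, so a full-range product like 65535 * 65535 would exceed INT_MAX and be undefined behaviour, which makes the explicit unsigned upcasts the safer choice anyway:

```cpp
#include <cstdint>
#include <type_traits>

// Integral promotion: on platforms where int is wider than 16 bits,
// uint16_t operands of * are promoted to int before multiplying.
static_assert(std::is_same<decltype(uint16_t() * uint16_t()), int>::value,
              "uint16_t * uint16_t is promoted to int");

inline long promoted_product()
{
    uint16_t a = 40000, b = 40000;
    // Computed in int (32-bit here), so no wraparound at 65536.
    // Note: 65535 * 65535 would overflow the signed int and be UB,
    // which is why explicit uint32_t upcasts are still worthwhile.
    return a * b;
}
```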
