Question about 16 bit rendering #25
I can't replicate the Ragg artifacts in Context Free. I did notice a slight difference between how the two programs use AGG: Context Free draws using agg::render_scanlines() and the rendering class agg::renderer_scanline_aa_solid<agg::renderer_base<pixel_fmt>>, while Ragg uses agg::render_scanlines_aa_solid() and the rendering class agg::renderer_base<pixfmt_type>. I modified Context Free to do the same, but it didn't produce aliasing artifacts.

It is interesting that 'mpg' and 'gear' do not appear to have artifacts, but the numeric text does. You should draw them in gray to confirm this. It seems to me that black colors have artifacts too, but the artifacts are also black, so they are less obvious.

The images that you provide have a lot of things in them. Can you show the issue if R draws a single gray rectangle? I think it would be useful to see the simplest image that has artifacts.
I just noticed a major difference between Context Free and Ragg: Context Free uses colors pre-multiplied with alpha. Blending premultiplied colors is much more straightforward than blending non-premultiplied colors. Ragg uses pixel formats that assume non-premultiplied inputs and a premultiplied rendering buffer, which I don't think is the case. You should use either the premultiplied pixel formats or the plain pixel formats. I suggest that you change Ragg to use only the premultiplied-color pixel formats: agg::pixfmt_rgbXX/agg::pixfmt_rgbaXX. Premultiplied color is the best way to do accurately blended graphics.

If the raster images that you are drawing have an alpha channel and are not premultiplied, then you will need to premultiply them. PNG files use non-premultiplied colors, so you will need to demultiply each row before writing it out to the PNG file; you can't just write out the whole rendering buffer as you do now. See how Context Free does this.
Thank you so much for taking the time to look at this. Your pointing me towards premultiplying led me to this post, which describes the type of artefacts I see:

From http://amindforeverprogramming.blogspot.com/2013/07/why-alpha-premultiplied-colour-blending.html?m=1

I already tried switching to premultiplied values before writing here, but I now believe I did so incorrectly, based on your reply and the quoted article. I'll give it another stab tonight and see if I can get it to work properly.
Ok, so I have tried every way known to me to get alpha blending to work (because it is a general alpha-blending problem, not something to do with antialiasing specifically):

- Drawing an opaque rectangle: [image]
- Shifting the transparency down a single step: [image]
- Taking it down some more: [image]

I've switched the pixel formats to premultiplying, and demultiply before writing the PNG. I've also tried to change the rendering pipeline to the one you use in Context Free, calling agg::render_scanlines(). There is definitely something wonky going on in the blending step, but I'm unable to figure out whether it is anywhere in my code (and why it would only kick in in 16-bit mode).
I did notice an error in AGG that may kick in at 16 bit. In agg_color_rgba.h there is a key function used in blending:

a and b are 16-bit unsigned ints, and the math is supposed to be done as 32-bit unsigned ints. I checked this function against all 4 billion inputs when I wrote it several years ago. But technically the spec allows for doing a * b as 16-bit unsigned ints, and a modern compiler might vectorize the code to do just that. The code should really upcast the operands to 32 bits before multiplying.
Look for all of the multiply() functions in agg_color_XXX.h and apply this change.
Actually, there are quite a few missing upcasts in agg_color_XXX.h.
Thx - will try!
It seems surprising that this has gone unnoticed until now, if it is indeed the problem.
Upcasting multiply didn't solve the issue, unfortunately.
If I shortcut all scanline code and simply use the renderer to fill the colour over a white background, I get the same results as filling a rectangle, so this is definitely something that happens at the pixel-format layer...
If the problem is with anti-aliasing of 16-bit colors, then it has to do with arithmetic involving 16-bit color channels and 8-bit cover values. The artifacts don't occur with 8-bit color because the color channels and cover values are both 8-bit. The reason missing upcasts may have gone unnoticed is that, without auto-vectorization by the compiler, the arithmetic would all be done in 32- or 64-bit registers. I noticed some missing upcasts involving cover values. I think that I fixed them all in the attached files.
All this time, and then it ends up simply being me who can't convert properly between little and big endian... As colours were provided as 8-bit, the endianness didn't matter when the colour was pure, but once antialiasing and blending kicked in it mattered a lot (obviously)... I just didn't think to look at my conversion because it appeared so much to be a blending issue... Sorry for taking your time with this. I greatly appreciate your help and generosity!
Thanks for closing the issue. It turns out I was wrong about the need for upcasts: the C++ spec says that integer types smaller than int are always promoted to int or unsigned int before arithmetic operations.
First of all, sorry for the noise on this channel. It is not easy to find people with AGG experience, so I'll resort to asking a non-Context Free question here. It goes without saying that you can simply close this if you do not have time to answer.

I'm currently in the process of building a graphics device for use with the R programming language, based on AGG. It is going well, and everything works if I stick to an internal 8-bit colour representation. However, if I change to 16-bit I begin to get weird blending artefacts for non-black colours, and I can't figure out why they appear.

The following is correctly rendered, in 8-bit: [image]

This is the exact same render but with 16-bit: [image]

As can be seen, the colours are fine in solid areas, but once blending of antialiased parts is done, everything seems to get darker...

My development lives at https://github.com/thomasp85/ragg - if you have any input I'll be forever grateful.