
Antialiasing: potential improvement #120

Open
mindhells opened this issue Jun 4, 2020 · 4 comments
Comments

@mindhells
Member

mindhells commented Jun 4, 2020

Here's what I understand of how antialiasing currently works:
For every pixel (which is an area, not a point, within the complex plane) we take the center value (in the complex plane) to calculate its fate, index, and color. This is the 1st pass, with no antialiasing yet.
In the 2nd pass, for every pixel (remember, it's an area) we divide it into 4 areas, called subpixels, and calculate the corresponding color for the center of each. Then we assign the whole pixel the average color of the 4 subpixels (sum each color channel and divide by 4).
Assuming no other improvements are in place, this is 4x the cost of calculating the original image, and it also needs some extra space: the image class, which holds the pixel color buffer, also holds some subpixel information (fates and indexes)... I haven't found where that information gets reused, although it seems to be the intention.
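Roughly, this is what I mean by the 2nd pass, as a sketch in Python (the names here are illustrative, not the actual image class API; `calc_color` stands in for the whole fate/index/coloring pipeline):

```python
def antialias_best(width, height, x0, y0, pixel_w, pixel_h, calc_color):
    """2nd pass: split each pixel into a 2x2 grid of subpixels, evaluate the
    color at each subpixel center, and average the four results."""
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for py in range(height):
        for px in range(width):
            r = g = b = 0
            # subpixel centers sit 1/4 and 3/4 of the way across the pixel
            for sy in (0.25, 0.75):
                for sx in (0.25, 0.75):
                    cr, cg, cb = calc_color(x0 + (px + sx) * pixel_w,
                                            y0 + (py + sy) * pixel_h)
                    r, g, b = r + cr, g + cg, b + cb
            image[py][px] = (r // 4, g // 4, b // 4)  # average each channel
    return image
```

That is width * height * 4 calls to `calc_color` on top of the 1st pass.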

I have no background in image processing, so maybe I'm missing something, but given that this is not a traditional antialiasing algorithm (it's more like smoothing) I'm wondering whether the following improvement is possible (which would, by the way, make use of the subpixel information buffer):
When we divide the pixel into 4 subpixels, instead of calculating the color at the center of each, calculate it at the outer vertex. Adjacent pixels would then share common subpixels, reducing the total number of calculations ((x+1)*(y+1) to be precise, which is far less than the current x*y*4).
I'm not sure how this would affect the final result, but since the subpixels would be farther from the center... I hope it's smoother.
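And this is roughly the variant I'm proposing, with the same illustrative names as above: sampling at the shared pixel corners needs only (x+1)*(y+1) evaluations, e.g. 641 * 481 = 308,321 calls for a 640x480 image instead of 640 * 480 * 4 = 1,228,800.

```python
def antialias_corners(width, height, x0, y0, pixel_w, pixel_h, calc_color):
    """Sample at pixel corners instead of subpixel centers; each corner is
    shared by up to four pixels, so it is evaluated only once."""
    corners = [[calc_color(x0 + cx * pixel_w, y0 + cy * pixel_h)
                for cx in range(width + 1)]
               for cy in range(height + 1)]
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for py in range(height):
        for px in range(width):
            quad = (corners[py][px], corners[py][px + 1],
                    corners[py + 1][px], corners[py + 1][px + 1])
            # average the four surrounding corner colors, channel by channel
            image[py][px] = tuple(sum(c[i] for c in quad) // 4 for i in range(3))
    return image
```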

I'm only considering the best antialiasing mode in this explanation. There's another mode, called fast, which skips part of the calculations based on the likeness of adjacent pixels.

@edyoung
Member

edyoung commented Jun 6, 2020

That would be cheaper, but I think the effect is more like a blur than subsampling. You wind up computing another grid offset by half a pixel from the pixel centers, then averaging 4 of those points to calculate each pixel center. It will look smoother than not doing any averaging, but the current approach will provide more detail. But feel free to try it.

@mindhells
Member Author

I see... I guess I should look for areas of the image with more "entropy" to find out how it really behaves compared with the current approach.
I think it should be easy to implement, so I will try it... I have to think about how to measure the effect, though.
On the other hand, do you remember how you ended up with the current approach? I mean, is subsampling more suitable because of the nature of fractal images? Performance reasons? ...

@edyoung
Member

edyoung commented Jun 6, 2020

I wouldn't claim the current approach is based on very rigorous theory. Essentially we calculate at a higher resolution and average the results. But a different arrangement of samples could provide a better speed/quality trade-off. In particular, some random jitter on the subsampled points could reduce moiré effects.
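Something like this, just to illustrate the jitter idea (not actual project code, and the offsets are arbitrary):

```python
import random

def jittered_subpixel_offsets(rng=None):
    """Yield 4 subpixel offsets, each nudged randomly within its 2x2 cell,
    trading the regular sampling grid (and its moiré patterns) for noise."""
    rng = rng or random.Random()
    for sy in (0.25, 0.75):
        for sx in (0.25, 0.75):
            yield (sx + rng.uniform(-0.25, 0.25),
                   sy + rng.uniform(-0.25, 0.25))
```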

I do think the 'fast' antialiasing option is pretty useful. The results are indistinguishable from 'best' and much faster. The only difference is we guess "well, this pixel is the same as its neighbors, so the subsampled points are probably the same too" and just skip that pixel.
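Roughly this, with illustrative names rather than the real implementation: a pixel only gets the extra subsamples when its first-pass color differs from a neighbor's.

```python
def needs_antialiasing(image, px, py):
    """Return True if the first-pass pixel differs from any neighbor;
    otherwise the pixel can keep its single-sample color."""
    base = image[py][px]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = py + dy, px + dx
            if 0 <= ny < len(image) and 0 <= nx < len(image[0]):
                if image[ny][nx] != base:
                    return True   # a neighbor differs: refine this pixel
    return False                  # uniform neighborhood: skip the extra work
```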

@mindhells
Member Author

mindhells commented Jun 8, 2020

I'm dropping here a couple of interesting entries from Wikipedia:

The 1st one took me to the 2nd, in which you can see different subsampling patterns. I understand we're currently using a uniform distribution, and you think the random approach could also be worth checking.

The 'fast' enhancement could apply to any of those methods, I think. It's a great improvement.
