This pull request is not as straightforward as the other one, curious what you think.
From what I can gather from the spec, there are basically two forms of alpha, associated (ExtraSamples value of 1) and unassociated (ExtraSamples value of 2). Associated alpha is premultiplied, and unassociated is not, that being the only technical difference.
So I think for associated alpha, the correct thing to do is to divide R, G and B by alpha, to get rid of the premultiplication. I didn't actually do that in this commit.
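As a hedged sketch of what that division would look like (this is not what the commit does; the helper name and the 8-bit sample assumption are mine):

```javascript
// Hypothetical sketch: undoing premultiplication for associated alpha.
// Assumes 8-bit samples (0-255). Not the code from this pull request.
function unpremultiply(r, g, b, a) {
  if (a === 0) return [0, 0, 0, 0]; // fully transparent: color is undefined
  const scale = 255 / a;
  return [
    Math.min(255, Math.round(r * scale)),
    Math.min(255, Math.round(g * scale)),
    Math.min(255, Math.round(b * scale)),
    a,
  ];
}
```

The `Math.min` clamp guards against slightly-out-of-range values that can occur in real files where the premultiplied color exceeds the alpha.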
For this pull request, I just had it recognize both 1 and 2 as alpha. For my test image which has 2 set, it ends up rendering the image the same way as the GIMP does.
added support for unassociated alpha (ExtraSamples value 2)
The spec is somewhat convoluted when it comes to transparency. AFAICT, there are basically two things that are involved in transparency: (1) including more than 3 entries in BitsPerSample; and (2) setting ExtraSamples to describe something about those extra samples.
But the value of ExtraSamples seems to do double duty: on the one hand, it says how many extra (i.e. more than 3) samples there are in BitsPerSample. But on the other hand, you are then supposed to infer that having 1 extra sample means it contains associated alpha data, and having 2 extra samples means it contains unassociated alpha data.
Now, assuming that, if you have an image with ExtraSamples=1, the RGB values are supposed to have been modified to already include the alpha information in them (whatever that means), leaving the extra sample untouched unless you need to do some math with it.
What this means for canvas pixels is not exactly clear to me, but here's how the code works: it takes the opacity value (the 4th sample), normalizes it onto the range [0,1], and then sends that along as the opacity value in the rgba() declaration, which seems to give the right result. Since the premultiplied color values were calculated against a black background, forcing the opacity to 1 in the rgba() declaration turns all transparent areas black.
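A minimal sketch of the mapping just described (the function name and parameters are illustrative, not tiff.js's actual code):

```javascript
// Illustrative sketch of the approach described above: normalize the 4th
// sample onto [0,1] and pass it as the opacity in an rgba() declaration.
function pixelToRgba(r, g, b, alphaSample, bitsPerSample) {
  const maxVal = (1 << bitsPerSample) - 1; // e.g. 255 for 8-bit samples
  const opacity = alphaSample / maxVal;    // normalize onto [0,1]
  return 'rgba(' + r + ',' + g + ',' + b + ',' + opacity + ')';
}
```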
The part that isn't clear to me, then, is whether this is producing the expected result, or whether it merely looks "close enough". I don't know what would happen if the opacity value was divided out of the RGB values first, before sending them along in the rgba() declaration.
And that's all just for ExtraSamples=1 (AKA associated alpha). I don't have any idea what to do with ExtraSamples=2 (AKA unassociated alpha). AFAICT from reading the spec, it may be that it can simply be discarded, or else used to decide whether a pixel is displayed or not (boolean 1 or 0). But if there are two extra samples in that case, what does each one mean? The spec seems a bit hand-wavy in this regard.
And on top of all that misunderstanding, I don't believe I actually have a test image for ExtraSamples=2, to confirm what you say or to test any code I might write to handle it. Is there any way you can point me towards one, or send me one?
Premultiplied alpha is a common performance optimization. When compositing two images on top of each other, typically you multiply the alpha channel into the color channels and then do the compositing from there. With premultiplied alpha, that first step can be skipped.
But if you are using fillRect, then you don't want premultiplied alpha, and so you need to undo it with division. If an image lacks alpha altogether, or has its alpha set to 1, then you're just dividing by one and ending up with the same values, which might be why many images look correct even if you don't account for the premultiplication.
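To make the optimization concrete, here's a sketch of the "over" operator in both forms, for one normalized color channel with an opaque destination (the function names are mine, not from any particular library):

```javascript
// "Over" compositing for one color channel, alpha normalized to [0,1],
// destination assumed opaque. With straight (unassociated) alpha the
// multiply by srcA happens at composite time:
function overStraight(srcC, srcA, dstC) {
  return srcA * srcC + (1 - srcA) * dstC;
}

// With premultiplied (associated) alpha, srcCPremul already equals
// srcA * color, so that per-pixel multiply is skipped:
function overPremultiplied(srcCPremul, srcA, dstC) {
  return srcCPremul + (1 - srcA) * dstC;
}
```

The two agree once the source channel has been premultiplied: `overStraight(c, a, d)` equals `overPremultiplied(a * c, a, d)`.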
I will send my image when I get a chance, sometime this weekend. It uses ExtraSamples set to 2 and does not render correctly in tiff.js as is, because its alpha channel was not being considered.
If there are any other open source TIFF parsers, we can take a look at what they do in this situation.
But it's not just "many images" that look correct. It's an image that actually has alpha transparency. I'm using a modified version of the strike.tif test image that comes with libtiff, modified in Preview to use no compression (or to use packbits compression) rather than the LZW compression that it comes with. And in both cases, the opacity is (seemingly) applied correctly, and attempting to divide by opacity in a few locations I've tried results in a broken image. So someone somewhere is missing something.
As for other open source parsers, I think libTIFF is the big one; it's a full-on C library, AFAIK.
This discussion on the libTIFF mailing list shed some light on the subject:
The way opacity is working with ExtraSamples=1 right now seems to be the correct way to do it. The opacity value would be the same whether ExtraSamples was 1 or 2. The only difference is whether that value is already represented in the color values, too.
So either your patch is correct, or ExtraSamples=2 will require additional multiplication. Once I get that test image, we can find out for sure.
Also, to clarify: The fourth channel is always the alpha channel (either associated or unassociated), no matter what. Anything beyond that (5+) is just unspecified data.
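That rule can be sketched as a tiny helper (the constant names and values match libtiff's tiff.h; the function itself is hypothetical):

```javascript
// ExtraSamples values, as named in libtiff's tiff.h.
const EXTRASAMPLE_ASSOCALPHA = 1; // associated (premultiplied) alpha
const EXTRASAMPLE_UNASSALPHA = 2; // unassociated (straight) alpha

// Hypothetical helper: the 4th channel is alpha either way; the only
// difference is whether it is already multiplied into the color channels.
function alphaIsPremultiplied(extraSampleValue) {
  return extraSampleValue === EXTRASAMPLE_ASSOCALPHA;
}
```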
I was able to use ImageMagick to convert my file to unassociated alpha. Given that the alpha value is supposed to mean the same thing in both cases, it seems like you were right that opacity in ExtraSamples=2 can be treated just like opacity in ExtraSamples=1. As such, I've removed all the fancy looping and just made the opacity treatment standard for all 4th samples: commit a2153f8.
It seems to work for me. Please confirm.
Actually, I'm an idiot.
Implemented pretty much your exact change in commit dc9da3d.
Cool! Thanks for working through that. It's kind of silly that 1 and 2 are basically the same thing, but probably not too surprising that specs like this have some weird cases here and there.