
Question about your data augmentation method and CIE XYZ color space #9

Closed
DY112 opened this issue Aug 2, 2022 · 2 comments
DY112 commented Aug 2, 2022

Hi, @mahmoudnafifi , I have a question about your data augmentation method.

I am a little confused about the color space transform (CST) stage in a general ISP, which converts the white-balanced raw image into the CIE XYZ color space.
(I referenced the ICCV 2019 tutorial by your supervisor, Professor Michael Brown.)

As far as I know, a CST only changes the axes (or basis) used to represent a color; it doesn't change the underlying color itself.
So, as I understand it, the CIE XYZ images (with WB applied) of the same scene captured by two different devices are different, because they represent the device-specific colors observed by each sensor, merely expressed in the canonical CIE XYZ space.

However, according to the data augmentation method presented in the paper, my statement above must be wrong.
In your method, since images in CIE XYZ space are device-independent, data can be augmented by converting a raw image to CIE XYZ and then applying the inverse transform of another device to obtain the corresponding raw image for that device.
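If I understand the augmentation correctly, it can be sketched as below (the 3×3 CST matrices and the raw triplet are made up for illustration, not taken from the paper):

```python
import numpy as np

# Made-up 3x3 CST matrices (sensor raw -> CIE XYZ) for two hypothetical
# cameras A and B; none of these values come from the paper.
cst_A = np.array([[1.10, -0.20, 0.10],
                  [-0.10, 1.20, -0.10],
                  [0.00, -0.10, 1.10]])
cst_B = np.array([[0.90, 0.10, 0.00],
                  [0.05, 1.00, -0.05],
                  [0.00, 0.05, 0.95]])

def augment_raw(raw_src, cst_src, cst_dst):
    """Map a raw color from the source sensor to the target sensor via CIE XYZ."""
    xyz = cst_src @ raw_src              # source raw -> device-independent XYZ
    return np.linalg.inv(cst_dst) @ xyz  # XYZ -> target sensor's raw space

raw_a = np.array([0.40, 0.50, 0.30])
raw_b = augment_raw(raw_a, cst_A, cst_B)  # camera A raw re-rendered as camera B raw
```

Mapping the result back the other way (B → XYZ → A) recovers the original camera A values, which is what I understand makes the augmentation consistent.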

I'd appreciate it if you could let me know which of the two is correct, and where I'm mistaken.

@mahmoudnafifi (Owner) commented

Hi @DY112,

Thanks for your question. I think there is some misunderstanding. Suppose we have an object with uniform reflectance, a single diffuse color, lit by a uniform light. If this object is captured by two different camera sensors with different spectral sensitivities, the two images may record different RGB values for this object's pixels. If camera calibration perfectly projects colors from each sensor space to the CIE XYZ space, we should get precisely the same chromaticity values for this object's pixels. That is, the color of this object should be represented by the same values in CIE XYZ. However, this projection to CIE XYZ is not always perfect, and in practice we may see some color differences even after the projection. In the paper, we followed the theory and assumed the 3×3 calibration matrix maps accurately to the CIE XYZ space. This assumption holds only if the sensor satisfies the Luther condition.
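As a toy illustration of this point (all matrices and the XYZ triplet below are invented, and the "calibration" is simply an exact matrix inverse, i.e. the idealized Luther-condition case, not the paper's actual calibration data):

```python
import numpy as np

# One diffuse patch under uniform light; its CIE XYZ color (invented values).
xyz_true = np.array([0.35, 0.42, 0.30])

# Two hypothetical sensor matrices (CIE XYZ -> sensor raw RGB) with
# different sensitivities. Values are made up for illustration.
xyz_to_raw_A = np.array([[0.90, 0.20, 0.05],
                         [0.10, 0.80, 0.10],
                         [0.02, 0.10, 0.90]])
xyz_to_raw_B = np.array([[0.70, 0.30, 0.02],
                         [0.20, 0.70, 0.15],
                         [0.05, 0.20, 0.80]])

# The same patch yields different raw RGB values on the two sensors.
raw_A = xyz_to_raw_A @ xyz_true
raw_B = xyz_to_raw_B @ xyz_true

# Perfect calibration amounts to inverting each sensor's matrix; both
# projections then recover the same device-independent XYZ values.
xyz_from_A = np.linalg.inv(xyz_to_raw_A) @ raw_A
xyz_from_B = np.linalg.inv(xyz_to_raw_B) @ raw_B
```

In practice a real sensor only approximately satisfies the Luther condition, so the two recovered XYZ values would agree only approximately rather than exactly.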

Hope this helps.

Thanks!

DY112 commented Aug 7, 2022

Thanks for the detailed information.
As I expected, I misunderstood: the conversion to CIE XYZ color space is a calibration step that corrects for the difference between the two camera sensors.

Thanks!!

@DY112 DY112 closed this as completed Aug 7, 2022