Something is wrong with some of the output images? #38
I used the default settings and params. After training on my own dataset, some of the test outputs produced with the saved model show strange green areas on them, like the image on the right below (the left one is the original test image). Is this related to the bilateral learning algorithm?

Comments
Looks like some dimensions are transposed. Are you having the same issue with the pretrained models?
@mgharbi Thanks for your reply. Yes, I am! Here are some of the results below. The left one was processed by the pretrained hdrp model, and the right one by the pretrained instagram model. I think the other pretrained models produce a similar issue.
Ah crap, if your TF version changed the internal storage pattern of the tensors somewhere, this is definitely not the expected output. The provided sample image sample_data/input.png should work; if not, we'll have to dig deeper. One way to debug this would be to isolate the "bilateral_slice_apply" operator and feed it simple inputs, e.g. an image with R=1, G=2, B=3, and a bilateral grid that contains all zeros except for one of the 12 coefficients. For instance, with coef0=1 only, the output should be coef0*R + coef1*G + coef2*B + coef3 = 1*1 = 1 everywhere; if that is not the case, something got transposed.
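To make this concrete, here is a minimal sketch of the isolation test described above. It assumes the custom op is built and importable as hdrnet.hdrnet_ops.bilateral_slice_apply with (grid, guide, input) arguments, a has_offset flag, and a [batch, gh, gw, gd, 12] grid layout where the first four coefficients form the affine row of the first output channel; check the actual signature and layout in your build before trusting the expected values.

```python
import numpy as np
import tensorflow as tf
from hdrnet import hdrnet_ops

h, w = 64, 48                  # deliberately non-square, to expose transposes
gh, gw, gd = 16, 16, 8         # grid resolution (values arbitrary)

# Input image with constant channels R=1, G=2, B=3.
im = np.zeros((1, h, w, 3), np.float32)
im[..., 0], im[..., 1], im[..., 2] = 1.0, 2.0, 3.0

# Constant guide, so every pixel slices the same depth plane.
guide = np.full((1, h, w), 0.5, np.float32)

# All-zero grid except coef0 (the R multiplier of the first output channel,
# per the formula above), set to 1 everywhere.
grid = np.zeros((1, gh, gw, gd, 12), np.float32)
grid[..., 0] = 1.0

out = hdrnet_ops.bilateral_slice_apply(
    tf.constant(grid), tf.constant(guide), tf.constant(im), has_offset=True)
with tf.Session() as sess:
    result = sess.run(out)

# Expected: first output channel is coef0*R = 1.0 everywhere, the rest 0.0.
print(result[0, ..., 0].min(), result[0, ..., 0].max())  # expect 1.0 1.0
print(np.abs(result[0, ..., 1:]).max())                  # expect 0.0
```

If the first channel comes out striped, shifted, or non-constant, the transposition likely happened inside the op rather than in the model code.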
@mgharbi Only some of the test outputs are bad; the others are good. There is no problem with the output for the sample image. In addition, I found that the guidance map of the input image was already corrupted.
@mgharbi @cfanyyx Hi, I have the same issue with just some images, depending on their shape. The guide is computed in hdrnet_legacy/hdrnet/models.py, lines 177 to 185 (at b06d011), which learn to convert an RGB image into a grayscale one. If I replace that block with guidemap = tf.image.rgb_to_grayscale(curve), the problem disappears. I still don't know why, but maybe @mgharbi has some ideas about it.
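Concretely, the substitution described above looks like the sketch below. The variable name `curve` (the RGB output of the learned tonecurve at that point in models.py's _guide()) is assumed from context; only the swap of the learned 1x1 conv for a fixed luma conversion is the point.

```python
import tensorflow as tf

# ... inside _guide(), after the tonecurve has produced `curve`,
# an RGB tensor in [0, 1] ...

# Original: a learned 1x1 conv collapses RGB to one channel (L177-L185).
# Workaround: a fixed RGB-to-luma conversion instead.
guidemap = tf.image.rgb_to_grayscale(curve)      # [b, h, w, 3] -> [b, h, w, 1]
guidemap = tf.clip_by_value(guidemap, 0.0, 1.0)  # keep the guide in [0, 1]
```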
This fix surely would give you a guide, but you would not be able to learn a better occupancy of the grid (via the tonecurve). I would also be surprised if the conv2d layer were bogus; I suspect the logic that comes before it has gone stale: hdrnet_legacy/hdrnet/models.py, lines 154 to 173 (at b06d011).
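For reference, a hedged schematic of the guide front end being pointed to here, following the HDRnet paper's description (a learned pointwise color matrix, then learned per-channel tonecurves, feeding the final 1x1 conv). The variable names and the sum-of-shifted-ReLUs tonecurve parameterization below are assumptions, not the repo's exact code:

```python
import tensorflow as tf

def guide_front_end(input_tensor, ccm, shifts, slopes):
    # Pointwise color transform: [b, h, w, 3] pixels times a [3, 3] matrix.
    flat = tf.reshape(input_tensor, [-1, 3])
    x = tf.reshape(tf.matmul(flat, ccm), tf.shape(input_tensor))

    # Per-channel piecewise-linear tonecurve, parameterized as a sum of
    # scaled, shifted ReLUs. shifts/slopes: [1, 1, 1, 3, npts].
    x = tf.expand_dims(x, -1)                                # [b, h, w, 3, 1]
    curve = tf.reduce_sum(slopes * tf.nn.relu(x - shifts), axis=-1)
    return curve                                             # [b, h, w, 3]
```

If a stale transpose crept into this stage, the corruption would already be visible in `curve` and hence in the guide, which matches the corrupted guidance map reported above.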
If the output looks like an image (no weird transpose), the next steps would be:
@mgharbi @cfanyyx I think I found a workaround for the issue that lets us keep the convolutional layer: run the multiplication at L157 and the convolution at L177 on the CPU. I don't know the reason, but I think it is due to a mismatch between the way the GPU learned to multiply the two matrices during training and the shape of the input at inference time.
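A minimal sketch of that device pinning, assuming the surrounding _guide() structure from models.py; the variable names (input_tensor, ccm) are taken from context, and tf.layers.conv2d stands in for the repo's own conv wrapper, so this is illustrative rather than a drop-in patch:

```python
import tensorflow as tf

def guide_cpu_pinned(input_tensor, ccm):
    """input_tensor: [b, h, w, 3] image; ccm: learned [3, 3] color matrix."""
    # Pin the color-matrix multiply (models.py L157) to the CPU.
    with tf.device('/cpu:0'):
        flat = tf.reshape(input_tensor, [-1, 3])
        x = tf.reshape(tf.matmul(flat, ccm), tf.shape(input_tensor))

    # ... learned per-channel tonecurve, unchanged ...

    # Pin the final 1x1 conv (models.py L177) to the CPU as well.
    with tf.device('/cpu:0'):
        guide = tf.layers.conv2d(x, filters=1, kernel_size=1,
                                 activation=None, name='guide_conv')
    return guide
```

Since tf.device only controls op placement, the learned weights are untouched; if pinning to the CPU changes the numerical result, that supports the suspicion of a shape- or layout-dependent bug in the GPU path.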