
Where is a Refinement Network? #45

Closed
mini102 opened this issue May 16, 2022 · 4 comments

Comments


mini102 commented May 16, 2022

Hi, I'm a student studying your project, DewarpNet!
First of all, thank you for sharing this awesome project :)

I read the paper, and the Refinement Network that adjusts for illumination effects in the rectified image is clearly described there.
However, I can't find this network in the code.

Is there any training code or pretrained model?


sagniklp commented May 16, 2022

There is not much extra code; it is similar to training the WC regression. Just use the UNet and the WC regression code, modify the input/output, and remove the gradient-based loss.

  1. We use the same UNet to regress surface normals (SN). Train it to convergence. The input and output are each 3 channels.
  2. Use the trained DewarpNet model to unwarp the SN and the input image.
  3. Now train the shading-map regression using the same UNet. Concatenate the unwarped input and the regressed SN to use as the input. The input is 6 channels and the output is 3 channels.
  4. Remember to fix the SN regression while you train the shading-map regression. The GT shading map can be calculated from the input images and the albedo images.
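The four steps above can be sketched as follows. This is only a minimal sketch: the `UNetStub` class, tensor shapes, and variable names are placeholders for illustration, not code from this repo.

```python
import torch

# Assumption: a generic UNet with configurable channel counts stands in
# for the repo's actual UNet; a 1x1 conv keeps this sketch runnable.
class UNetStub(torch.nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = torch.nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.net(x)

# Step 1: SN regression, 3 channels in -> 3 channels out.
sn_model = UNetStub(3, 3)

# Step 3: shading-map regression, 6 channels in -> 3 channels out.
shade_model = UNetStub(6, 3)

# Step 4: freeze the SN regressor while training the shading regressor.
for p in sn_model.parameters():
    p.requires_grad = False

# Placeholder tensors standing in for the unwarped image and SN (step 2).
unwarped_img = torch.rand(1, 3, 128, 128)
with torch.no_grad():
    unwarped_sn = sn_model(torch.rand(1, 3, 128, 128))

# 6-channel input: unwarped image concatenated with the regressed SN.
x = torch.cat([unwarped_img, unwarped_sn], dim=1)
shading = shade_model(x)  # 3-channel shading-map prediction
```

In an actual training loop, only `shade_model`'s parameters would go to the optimizer, with the shading loss computed against the GT shading map from step 4.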

I can share the pre-trained model next week after NeurIPS is over.


mini102 commented May 16, 2022

Thank you very much for the detailed explanation!


mini102 commented May 18, 2022

Hi, following your explanation, I'm trying to train the shading estimation in the refinement network, but I have a problem.
Can I make the GT shading map (S) by element-wise dividing I by A? (I is the unwarped image obtained by backward-mapping the distorted original image with the flow map (the GT of the texture mapping network); A is the unwarped albedo image obtained by backward-mapping the distorted albedo image in doc3d with the same flow map.)
That is how I understand it, but I'm still uncertain.
I'd be very happy if you could let me know.

@sagniklp

Yes, you get the GT shading maps like that, since I = A*S. Use np.divide (with its `where` argument) to avoid dividing by zero. I would suggest doing the division first and then unwarping the shading map using the GT BM. The BM is sometimes incorrectly interpolated, so you may otherwise get erroneous shading values.
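A minimal sketch of the safe division suggested above. The `gt_shading` helper name and the epsilon threshold are assumptions for illustration; the unwarping with the GT BM would happen after this step.

```python
import numpy as np

def gt_shading(image, albedo, eps=1e-6):
    """Compute a GT shading map S from I = A * S, guarding against A == 0."""
    # np.divide only computes where the albedo is non-negligible;
    # elsewhere the output keeps the zeros from `out`.
    return np.divide(image, albedo,
                     out=np.zeros_like(image),
                     where=np.abs(albedo) > eps)

# Tiny example: one pixel has zero albedo and yields 0 instead of a
# divide-by-zero warning.
I = np.array([[0.8, 0.0], [0.25, 0.5]])
A = np.array([[1.0, 0.0], [0.5, 0.5]])
S = gt_shading(I, A)  # [[0.8, 0.0], [0.5, 1.0]]
```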
