
Problem about 'postprocess.p' output #2

Closed
Fanziapril opened this issue Aug 21, 2017 · 9 comments

@Fanziapril

The code runs successfully, but the output is random. The output of the deep network is good, but when doing 'Affine Align', the alignment between the depth and the 3D template is random: the results differ every time I run the code. Sometimes only a partial face is reconstructed. I think this may be because the depth cannot align well with the template. Since 'postprocess.p' is protected, I cannot figure out the problem.

@matansel
Owner

OK. Can you please share the image?

@Fanziapril
Author

When I run it for the first time, the output of the deep network is:
[images: 1-1, 1-2]

This is good. During the postprocess function:
[screenshot: 2017-08-22 10:30:47]
This is a good alignment.
[screenshots: 2017-08-22 10:31:05, 2017-08-22 10:33:21]

And the final result is:

[images: 1-3, 1-4]

When I run it for the second time, the output of the deep network is:
[images: 2-1, 2-2]
This is slightly different from the first time.

During the postprocess function:
[screenshots: 2017-08-22 10:35:42, 10:36:00, 10:36:39, 10:37:47]

The alignment between the depth and the template is bad, so the final result is partial:

[images: 2-4, 2-3]

I don't know why this happens... I am running the code on Ubuntu 14.04 LTS.

@matansel
Owner

matansel commented Aug 22, 2017

So evidently the result of the network is different from what we get. Since convolutional layers are not scale invariant, it is important to use the same scale of the face in the input image as in the training images. Did you use the same face cropping we used? Maybe your face detector provides a different bounding box than ours. Try gradually increasing the size of the bounding box around the face so that the face takes up a smaller portion of the image.
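The enlarge-around-center arithmetic described above can be sketched as follows; this is a language-neutral illustration in Python, and the function name and the [x, y, w, h] box convention are illustrative, not part of the repository:

```python
def expand_box(x, y, w, h, scale):
    """Enlarge an [x, y, w, h] bounding box by `scale` about its center,
    so the face occupies a smaller portion of the resulting crop."""
    new_w = w * scale
    new_h = h * scale
    new_x = x - 0.5 * (scale - 1) * w   # shift left so the center stays fixed
    new_y = y - 0.5 * (scale - 1) * h   # shift up so the center stays fixed
    return new_x, new_y, new_w, new_h

# Example: a 100x100 box at (50, 50), enlarged by 10%
print(expand_box(50, 50, 100, 100, 1.1))
```

Because the top-left corner is shifted by half of the added width and height, the face stays centered while shrinking relative to the crop.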

@Fanziapril
Author

Thanks for your quick reply. I am using the exact same face detector (vision.CascadeObjectDetector, with all properties at their defaults) as you are. But I am using MATLAB 2017a; maybe the face detection algorithm differs between MATLAB versions.

I manually adjusted the bounding box and tried to make it look like the image shown in your paper (Fig. 10):
FaceDetect = vision.CascadeObjectDetector;  % default properties
BB = step(FaceDetect, img);                 % BB = [x y width height]
scale = 1.1;
BB(1,1) = BB(1,1) - 0.5*(scale-1)*BB(1,3);  % shift left so the enlarged box stays centered
%BB(1,2) = BB(1,2) - 0.5*(scale-1)*BB(1,4); % vertical shift disabled
BB(1,3) = scale*BB(1,3);                    % enlarge width
BB(1,4) = scale*BB(1,4);                    % enlarge height
imsz = size(img);                           % image size (useful for clamping the box)
img_crop = imcrop(img, BB(1,:));            % crop with the enlarged box
This solved the problem.
However, there is another problem: I directly used the cropped image, but I still get different final results.
The cropped image is:
[image: crop]
First run:
[images: tmp6, tmp5]

Second run:
[images: tmp7, tmp8]

I think the bounding box should not be the reason. Is it possible that the output of the deep network or the postprocessing differs between runs?

@matansel
Owner

Could you please attach the result of the network at the right scale?
The postprocessing step is highly dependent on the result of the network and usually doesn't fail.
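To picture why the postprocessing depends so heavily on the network output: the affine alignment can be thought of as a least-squares fit between corresponding 3D points. The NumPy sketch below is a generic illustration of such a fit, not the actual code inside postprocess.p; all names are hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N x 3) onto dst (N x 3).
    Solves dst ~= src @ A + t via an augmented linear system."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous coords, N x 4
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # 4 x 3 solution
    return M[:3, :], M[3, :]                         # A (3 x 3), t (3,)

# Synthetic demo: recover a known scaling + translation from clean points.
# If the source points are noisy or badly scaled (as with a mis-cropped
# depth map), the recovered transform degrades accordingly.
rng = np.random.default_rng(0)
src = rng.standard_normal((100, 3))
A_true = np.diag([1.0, 1.2, 0.8])
t_true = np.array([0.1, -0.2, 0.3])
dst = src @ A_true + t_true
A, t = fit_affine(src, dst)
```

With clean correspondences the transform is recovered essentially exactly, which is why a degraded network output (and hence degraded correspondences) shows up directly as a bad alignment.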

@Fanziapril
Author

Yes. Here are two different results from running the code on the same cropped image:
First time:
[image: run1]
Second time:
[image: run2]
(In the output of the network, there are small differences in the left cheek.)
First time:
[image: run1-1]
Second time:
[image: run2-1]
First time:
[images: run1-2, run1-3]
Second time:
[images: run2-2, run2-3]

@matansel
Owner

The result of the network is still poor because the scale of the face in the image is different from the paper's. Perhaps the relative position of the face in the image is also inaccurate. Try running the face detector of MATLAB 2015a if you can.

@Fanziapril
Author

I've tried MATLAB 2013a, 2014a, and 2016a (I couldn't find a 2015a in my lab). Could you please send me the cropped image so I can find where my problem is?

@matansel
Owner

The input image to the networks is the following:
[image: sample_img]

And these are the outputs you should get:
Correspondence map:
[image: sample_pncc]

Depth map:
[image: sample_depth]

The networks are deterministic, so you should get the exact same outputs.
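Since the networks are deterministic, two runs on the same input should yield numerically identical maps, and this can be checked directly. A minimal Python/NumPy sketch, where the helper name and the synthetic demo arrays are illustrative (in practice you would load the two saved depth or correspondence maps):

```python
import numpy as np

def compare_runs(a, b, tol=0.0):
    """Report whether two network output maps match across runs.
    Returns (match, max_abs_difference)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    max_diff = float(np.abs(a - b).max())
    return max_diff <= tol, max_diff

# Demo with synthetic maps; replace these with the two saved outputs.
run1 = np.zeros((128, 128))
run2 = run1.copy()
same, max_diff = compare_runs(run1, run2)
print(same, max_diff)
```

If the maps match, the run-to-run variation must come from an earlier stage (face detection or cropping) rather than from the networks themselves.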
