
imag2shape #57

Closed
FredFloopie opened this issue Mar 6, 2019 · 9 comments

Comments

@FredFloopie

FredFloopie commented Mar 6, 2019

Hi, I tried running your AO-CNN instructions (Windows 2012), and everything worked great.

But when I look at the .obj results in MeshLab, all the patches seem to be either horizontal or at 45 degrees, with almost no variation between those.

It seems to get the general overall shape, but the patches are all either horizontal or at the same 45-degree angle, so it looks like there are only two orientations for the patches.

Is there a reason why this might be? I was expecting the results to look similar to those in the paper. I thought it might be due to low resolution in the provided dataset, but I believe it is the same resolution as in the paper.

I tried using a different mesh viewer for the .obj files, but the meshes looked the same.

@wang-ps
Contributor

wang-ps commented Mar 7, 2019

According to your description, it seems that the normal regression is not good enough.
Please send one of the generated octree files and the trained Caffe model to me (wangps@hotmail.com) so that I can figure out the reason.

@FredFloopie
Author

Thank you very much for looking! I have sent the Caffe model and octrees to your email.

@wang-ps
Contributor

wang-ps commented Mar 7, 2019

I saw your results. According to your caffemodel, the network was only trained for 22000 iterations. Please follow the solver we provided (https://github.com/Microsoft/O-CNN/blob/master/caffe/examples/ao-cnn/image2shape.solver.prototxt), and train the network for 350000 iterations.
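
For context, the training-schedule fields of that solver are along these lines. This is a sketch rather than a verbatim copy of the file, and max_iter is inferred from the 350000-iteration figure above:

base_lr: 0.1
momentum: 0.9
weight_decay: 0.0005
lr_policy: "multistep"
gamma: 0.1
stepvalue: 150000
stepvalue: 300000
stepvalue: 350000
max_iter: 350000   # inferred from the instruction above, not copied from the repository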

@FredFloopie
Author

FredFloopie commented Mar 7, 2019 via email

@wang-ps
Contributor

wang-ps commented Mar 7, 2019

If the hyper-parameters are changed, the results may be quite different.
Please follow the solver and parameters we provided to reproduce our results.

@FredFloopie
Author

OK, running it now without the batch_size increase and iteration decrease, i.e. I'm using the exact solver parameters unchanged.

However, I notice that in the solver parameters the net source is image2shape_resnet.train.prototxt, while in the repository there is only image2shape.train.prototxt, so I changed it to image2shape.train.prototxt.

Will this be ok?

@wang-ps
Contributor

wang-ps commented Mar 7, 2019

Yes, it is a typo, and I have fixed it. Thank you!
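
For anyone working from an older checkout, the corrected line in image2shape.solver.prototxt should simply point at the existing network definition:

net: "image2shape.train.prototxt"   # previously image2shape_resnet.train.prototxt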

@FredFloopie
Author

FredFloopie commented Mar 13, 2019

Hi, so I ran at batch size 32 with all your default hyper-parameters and it works great! It reproduces the results from your paper.

However, if I try different batch sizes, the "normal regression", as you say, seems to fail and I get the problem I mentioned: only horizontal and 45-degree patches.

Is this to do with the learning rate and step-value hyper-parameters? E.g., do the following hyper-parameters need to be refined to increase the batch size and have the normal regression proceed properly:
base_lr: 0.1
momentum: 0.9
weight_decay: 0.0005
lr_policy: "multistep"
gamma: 0.1
stepvalue: 150000
stepvalue: 300000
stepvalue: 350000

Or is it something more fundamental about the batch size?
Thanks again.

@wang-ps
Contributor

wang-ps commented Mar 13, 2019

As far as I know, there is no solid theory about the batch size.
If the batch size is changed, the learning rate and step values should also be properly tuned, and perhaps better results can be achieved.

For our network, you can try removing the Caffe layers whose type is "Normalize". The "Normalize" layers are used to normalize the length of the normal to 1. I have observed that after removing the "Normalize" layers, the normal regression converges faster. You can add these "Normalize" layers back in the testing stage.
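
To be concrete, such a layer in the train prototxt looks roughly like the block below; the layer and blob names here are only placeholders, and the point is that any layer with type: "Normalize" is the one to remove during training and restore for testing.

layer {
  name: "normal_normalize"   # placeholder name
  type: "Normalize"
  bottom: "normal_pred"      # placeholder blob name
  top: "normal_pred"
}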

If you want to use multiple GPUs, this paper (https://arxiv.org/pdf/1706.02677.pdf) describes some guidelines for tuning the batch size and learning rate based on the single-GPU parameters.
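
For example, following the linear scaling rule from that paper, doubling the batch size from 32 to 64 would roughly correspond to the values below. These numbers are illustrative only, not settings we have verified:

base_lr: 0.2        # 0.1 * (64 / 32)
lr_policy: "multistep"
gamma: 0.1
stepvalue: 75000    # halved, since each iteration now sees twice as much data
stepvalue: 150000
stepvalue: 175000
# the paper also suggests a gradual learning-rate warmup for large batches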

wang-ps closed this as completed Mar 22, 2019