imag2shape #57
Comments
According to your description, it seems that the normal regression is not good enough.
Thank you very much for looking! I have sent the Caffe model and octrees to your email.
I saw your results. According to your caffemodel, the network was only trained for 22,000 iterations. Please follow the solver we provided (https://github.com/Microsoft/O-CNN/blob/master/caffe/examples/ao-cnn/image2shape.solver.prototxt) and train the network for 350,000 iterations.
Yes; however, I changed the batch size to 384, and therefore reduced the iterations from 350,000 to 350,000/10 = 35,000. Then I also trained on multiple GPUs (2 GPUs) with the --gpu=0,1 command-line option, so I reduced the iterations by half, to 22,000 iterations.
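As a back-of-the-envelope check on this kind of rescaling, a minimal sketch, assuming the default schedule is batch size 32 for 350,000 iterations (figures mentioned elsewhere in this thread) and that Caffe's multi-GPU training multiplies the effective batch by the number of GPUs; these assumptions are mine, not confirmed by the maintainers:

```python
# Sanity-check how many iterations keep the total number of training
# samples constant when the batch size and GPU count change.
# Assumed defaults: batch size 32, 350,000 iterations (see thread).
base_batch, base_iters = 32, 350_000
samples = base_batch * base_iters            # samples the default schedule sees

# Changed setup from the comment above: per-GPU batch 384 on 2 GPUs.
new_batch, num_gpus = 384, 2
effective_batch = new_batch * num_gpus       # assumes Caffe sums over GPUs
matched_iters = samples // effective_batch   # iterations to see the same samples
print(samples, matched_iters)
```

Matching the raw sample count is only a first-order heuristic; as noted below, changing the batch size usually also requires retuning the learning rate.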
If the hyper-parameters are changed, the results may be quite different.
OK, running it now without the batch-size increase and iteration decrease, i.e., I'm using the exact solver parameters unchanged. However, I notice that in the solver parameters the net source is image2shape_resnet.train.prototxt, while the repository contains only image2shape.train.prototxt. So I changed it to image2shape.train.prototxt. Will this be OK?
Yes, it is a typo, and I have fixed it. Thank you!
Hi, so I ran at batch size 32 with all your default hyper-parameters and it works great! It reproduces the results in your paper. However, if I try different batch sizes, the "normal regression", as you call it, seems to fail and I get the problem I mentioned: only horizontal and 45-degree patches. Is this to do with the learning-rate and step-value hyper-parameters? E.g., do those hyper-parameters need to be retuned when increasing the batch size so that the normal regression proceeds properly, or is it something more fundamental about the batch size?
As far as I know, there is no solid theory about the batch size. For our network, you can try removing the Caffe layers whose type is "Normalize". The "Normalize" layers normalize the length of each normal to 1; I have observed that after removing them, the normal regression converges faster. You can add the "Normalize" layers back in the testing stage. If you want to use multiple GPUs, this paper (https://arxiv.org/pdf/1706.02677.pdf) describes some guidelines for tuning the batch size and learning rate based on the single-GPU parameters.
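For the testing stage, a minimal sketch of what re-normalizing the predicted normals could look like, assuming they are available as an (N, 3) NumPy array; the array layout and the helper name are my assumptions, not part of the repository:

```python
import numpy as np

# Sketch: scale each predicted patch normal to unit length at test time,
# standing in for the removed "Normalize" layers. Assumes a (N, 3) array
# of predicted normals; the actual blob layout depends on the network.
def normalize_normals(pred, eps=1e-12):
    """Divide each row by its Euclidean norm (guarding against zeros)."""
    norms = np.linalg.norm(pred, axis=1, keepdims=True)
    return pred / np.maximum(norms, eps)

normals = np.array([[0.3, 0.0, 0.4],    # length 0.5
                    [2.0, 2.0, 1.0]])   # length 3.0
unit = normalize_normals(normals)
print(np.linalg.norm(unit, axis=1))     # each row now has length 1
```

The eps guard simply avoids a division by zero for degenerate (all-zero) predictions.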
Hi, I tried running your AO-CNN instructions (Windows 2012), and everything worked great.
But when I look at the .obj results in MeshLab, all the patches seem to be either horizontal or at 45 degrees, with almost no variation between those.
It seems to get the general overall shape, but the patches are all either horizontal or at the same 45-degree angle, so it looks like there are only two orientations for the patches.
Is there a reason why this might be? I was expecting the results to look similar to those in the paper. I thought maybe it is because of low resolution in the provided dataset, but I think it is the same resolution as in the paper.
I tried using a different mesh viewer for the .obj files, but the meshes were the same.