
Error rate tested differs from the results of jzbontar #32

Open
ckwllawliet opened this issue May 26, 2017 · 2 comments

Comments

@ckwllawliet

@jzbontar Hello, I ran into some problems when trying to reproduce your work. When I compute the error rate as described in the README, for example by running $ ./main.lua kitti fast -a test_te -net_fname net/net_kitti_fast_-a_train_all.t7, I get a much higher error rate than the one reported in the paper, about 11%. I don't know what the problem is, and every error rate I test is higher than the results in the paper. I also tried testing with the network you have trained, but the results are the same as before, and the testing takes less time than yours, as if I had skipped some steps, though I could not find the problem. Do you have any idea about it, or is there something I need to take into account when testing? Has anyone else run into the same problem? Looking forward to your reply, thanks!

@Sarah20187

Sarah20187 commented Jun 12, 2017

@ckwllawliet I ran into the same problem after I updated Torch. When running in debug mode, I found that the matching cost of the initial disparity map is NaN in the region with width 1:disp_max.
Now I realize that this problem is caused by how the min() function handles NaN values, which has evidently changed in the newer Torch version. After I removed all NaN values before calling min(), the result became better but is still worse than before. Could you help us fix this problem? @jzbontar
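For anyone hitting the same thing, this is roughly the workaround I used (a minimal sketch only; `vol` stands for the cost volume tensor and the disparity dimension is assumed to be 2, so the exact names and dimension in main.lua may differ):

-- replace NaN entries with a large finite cost before the min() reduction;
-- NaN is the only value that is not equal to itself, so vol:ne(vol) marks them
local nan_mask = vol:ne(vol)
vol:maskedFill(nan_mask, 1e38)          -- large finite cost, so these entries never win the min
local cost, disp = torch.min(vol, 2)    -- min over the disparity dimension is now well defined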

@ComVisDinh

ComVisDinh commented Jun 25, 2017

I solved the problem.
In the main.lua file, change

vols = torch.CudaTensor(2, disp_max, x_batch:size(3), x_batch:size(4)):fill(0 / 0)
vol = torch.CudaTensor(1, disp_max, output:size(3), output:size(4)):fill(0 / 0)

to
vols = torch.CudaTensor(2, disp_max, x_batch:size(3), x_batch:size(4)):fill(1.0 / 1.0)
vol = torch.CudaTensor(1, disp_max, output:size(3), output:size(4)):fill(1.0 / 1.0)

This change sets a finite default matching cost for occluded pixels instead of NaN.
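For clarity: in Lua, 0 / 0 evaluates to NaN, so the original fill(0 / 0) initializes the whole cost volume to NaN, while 1.0 / 1.0 is simply 1.0, i.e. a finite default matching cost. Entries that are never overwritten (for example occluded pixels) therefore stay finite and no longer break the min() reduction mentioned above. A standalone illustration in plain Lua (not part of main.lua):

print(0 / 0)            -- nan (exact formatting is platform dependent)
print(0 / 0 ~= 0 / 0)   -- true: NaN is not equal to anything, including itself
print(1.0 / 1.0)        -- 1: fill(1.0 / 1.0) just fills the volume with a cost of 1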
