FileNotFoundError of pointcloud #7
I saw your earlier reply, but I still haven't solved this problem.
The point cloud is stored in the checkpoint file: since each point has optimizable features, the points are treated as network parameters. If you have followed the steps in the README exactly, you should have checkpoint files in your checkpoints/nerf_synthetic/hotdog/ or ship folder that provide the MLP and the points with their features (in .pth files), and execution should never reach line 248 of neural_points.py or line 118 of load_blender.py. Please let me know if you changed anything in the original script, or if your .pth files are not in place at all. I took quite a lot of effort to find a new machine and re-deploy everything, but did not encounter any such error.
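Since the point cloud ships inside the checkpoint .pth files alongside the MLP weights, a quick pre-flight check of the checkpoint folder can catch this class of FileNotFoundError before launching a script. The sketch below only checks file layout, assuming the `checkpoints/nerf_synthetic/<scene>/` path and the `*_net_ray_marching.pth` naming mentioned in this thread; your download may differ.

```python
from pathlib import Path

def check_checkpoints(scene_dir):
    """Verify that a scene's checkpoint folder contains the .pth files
    (MLP weights plus per-point features) that the test scripts expect.

    Returns the names of the *_net_ray_marching.pth files found, and
    raises FileNotFoundError with a readable hint when none are present.
    """
    scene = Path(scene_dir)
    pth_files = sorted(scene.glob("*_net_ray_marching.pth"))
    if not pth_files:
        raise FileNotFoundError(
            f"no *_net_ray_marching.pth in {scene}; re-download the "
            "checkpoints and make sure they are unzipped into place"
        )
    return [p.name for p in pth_files]
```

Running this on the scene folder before the .sh script makes the failure mode explicit instead of surfacing deep inside the data loader.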
Thanks a lot for your prompt reply! Combining your reply with the paper, I figured out the principle behind the point-cloud generation; thank you again. After I re-downloaded the checkpoints and dev_scripts and replaced the old versions, the error went away. Maybe when I changed gpu_ids in the scripts I accidentally changed something else? I compared the old and new .sh files line by line, but I couldn't find the difference. That's odd. However, another error has occurred:
So I only got the image results, an empty vids folder, and no points folder. Judging from Visualizer.py, I think the test output should contain images, points, and vids.
I tried to find out whether a Context.pop() call is missing in the code, but I didn't find one. My environment is built on a 2080 Ti, and the key dependent libraries are:
I installed the libraries according to the versions in the README. Could you please tell me the reason for this?
Hi, the pycuda error is not fixable for now, since the integration of pycuda and PyTorch is tricky; we do pop the context, but somehow it is not cleaned up.
Hi, I'm wondering how you get ''0_net_ray_marching.pth'', because when I remove all the ''*_net_ray_marching.pth'' files, it fails with a FileNotFound error, and I didn't find any instructions about this in the README.
I'm not sure why you removed all the checkpoint files, but if you want to get 0_net_ray_marching.pth, you can start training from scratch by following "Per-scene optimize from scatch" in the README.
I followed your README and ran "Per-scene optimize from scatch" to get 0_net_ray_marching.pth, but it fails with the error "No such file or directory: ''". It went to
I don't know how to change the setting. I would be very grateful for your response.
Hi, you have the exact same error discussed previously in this thread. Can you re-download the checkpoints and datasets and make sure they are in place? This exact error has been solved by following the README instructions step by step.
Hi, when I re-download the code repo and run from scratch, it still hits "No such file or directory" when I run scene101.sh. However, it works fine when I run from scratch on the NeRF Synthetic dataset. Is there some possibility that I should change a setting for the ScanNet scene?
Should I set opt.load_points=0 when I run
You should not change anything; all the scripts are runnable if you follow all the steps correctly.
@Xharlie I solved the error. In
so, just set
Thanks again for your amazing paper and for generously releasing the source code~~~ :)
Hi, if you check Google Drive, there are files like 200000_net_ray_marching.pth for all the nerfsynth objects. Sometimes Google Drive will split the content of a folder and zip the parts separately during download; you have to manually unzip and move these files back together. As long as your content matches Google Drive, there will be no problem running any of my scripts.
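When Google Drive splits a folder into several zips, the .pth files end up scattered across sibling directories after extraction. A small script can gather them back into one checkpoint folder; this is only a sketch of that manual step, assuming the extracted parts sit next to each other (the function and directory names here are illustrative, not from the repo).

```python
import shutil
from pathlib import Path

def merge_pth_files(extracted_parts, target_dir):
    """Move every .pth file found under the extracted download parts
    into a single target directory, recreating one checkpoint folder.

    Returns the sorted names of the files that were moved.
    """
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    moved = []
    for part in extracted_parts:
        for pth in Path(part).rglob("*.pth"):
            shutil.move(str(pth), str(target / pth.name))
            moved.append(pth.name)
    return sorted(moved)
```

After merging, the target folder should match the layout shown on Google Drive, with all `*_net_ray_marching.pth` files side by side.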
OK, I'll try as you said. Thank you~~
I think the reason they all encountered the same problem is that two identical .sh files were shipped as scene101.sh and scene101_test.sh. If you check the w_scannet_etf directory, you will find that both execute "test_ft.py"; based on the code in scene241.sh and scene241_test.sh, they should run "python train_ft.py" and "python test_ft.py" respectively.
I also think so; clearly test_ft.py cannot train (per-scene optimize) at all.
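A mix-up like this (a training script that actually invokes the test entry point) can be caught before a long run by scanning the script text. A minimal sketch; the script name scene101.sh and the entry points train_ft.py/test_ft.py come from this thread, and the helper itself is hypothetical.

```python
from pathlib import Path

def entry_points(script_path):
    """Return which PointNeRF entry scripts (train_ft.py / test_ft.py)
    a shell script invokes, so a mislabeled script is easy to spot."""
    text = Path(script_path).read_text()
    return [name for name in ("train_ft.py", "test_ft.py") if name in text]
```

If `entry_points("dev_scripts/w_scannet_etf/scene101.sh")` reports only test_ft.py, the script cannot perform per-scene optimization, which matches the symptom described above.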
Hi, thanks for your great work! I have the same problem when I run "bash dev_scripts/w_n360/ship_test.sh": it fails with "FileNotFoundError: [Errno 2] No such file or directory" at line 118 of load_blender.py. My data path is pointnerf/data_src/nerf/nerf_synthetic/ship, and it includes 3 .json files and 3 folders containing .png images. Meanwhile, the checkpoints folder only includes some .pth files; it doesn't seem to contain the saved point cloud.
Could you please tell me where the "point_path" is? Thank you~
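A FileNotFoundError inside load_blender.py usually points at a path problem rather than a code problem. The sketch below checks the scene folder against the standard NeRF-synthetic layout (transforms_{train,val,test}.json plus matching image folders); those expected names follow the common NeRF convention and are an assumption here, not read from the repository code.

```python
from pathlib import Path

def check_nerf_scene(scene_dir):
    """Check that a NeRF-synthetic scene folder has the expected
    transforms_{train,val,test}.json files and matching image folders.

    Returns the list of missing entries (empty when the layout is ok).
    """
    scene = Path(scene_dir)
    missing = []
    for split in ("train", "val", "test"):
        if not (scene / f"transforms_{split}.json").is_file():
            missing.append(f"transforms_{split}.json")
        if not (scene / split).is_dir():
            missing.append(f"{split}/")
    return missing
```

An empty result means the data side matches the layout described in this comment, so the missing file is more likely a checkpoint path problem.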