
Can't get a right NeRF result #16

Closed
nypyp opened this issue Sep 12, 2023 · 8 comments

Comments


nypyp commented Sep 12, 2023

Hi, I'm confused. When I run orb_slam3_ros with nerf_bridge, the image in the viewer stays still in the center and no other image ever shows up, and the NeRF never turns out well. Is there anything I'm doing wrong? Or is there a rosbag that could verify the configuration is right?

[screenshot: nerf_studio]

@Eric-Ho-Matrix

Looks like we are facing the same problem. My setup has the same issue: Nerfstudio basically gets stuck at the first several images, and I also cannot get a reasonably good model.

@javieryu
Owner

Looks like your images are greyscale? Maybe this has something to do with it?

Can you give some more details on your setup?

Also, due to issues with the Nerfstudio Viewer there is no easy way to update the image locations in the viewer as they come in. So while images and poses are being added to the training set they may not appear in the viewer.

@bharath-k1000

Hi, I am also facing the same issue. The input image stream is not greyscale for me, but I am getting the same result in the viewer. The topic /camera/color/image_raw has RGB image data and is written correctly into the .json file.

I am using an Intel RealSense D435 RGBD camera (no IMU).

[screenshot]

ORB-SLAM3 seems to be working fine, judging by the features being tracked and the point cloud, but the NeRF generation seems to be stuck.

  • ROS 1
  • Ubuntu 20.04

@bharath-k1000

UPDATE:
Issue was resolved by changing the camera parameters in the .json file.
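For anyone debugging the same mismatch: the comment doesn't say which parameters were wrong, but on ROS 1 the RealSense driver publishes its calibration on /camera/color/camera_info, where fx, cx, fy, and cy are entries 0, 2, 4, and 5 of the row-major 3x3 K matrix. A minimal cross-check sketch, assuming a nerf_bridge-style config JSON like the ones in this thread; the K values below are placeholders, not a real D435 calibration, and would be replaced with the output of `rostopic echo -n 1 /camera/color/camera_info`:

```python
import json

# K matrix copied by hand from one CameraInfo message. These numbers
# are placeholders, not a real D435 calibration.
CAMERA_INFO_K = [615.0, 0.0, 320.5,
                 0.0, 615.0, 240.5,
                 0.0, 0.0, 1.0]

def check_intrinsics(config_path, K, tol=1.0):
    """Return (name, json_value, camera_value) for every mismatched entry."""
    with open(config_path) as f:
        cfg = json.load(f)
    # Row-major 3x3 K: fx = K[0], cx = K[2], fy = K[4], cy = K[5]
    expected = {"fx": K[0], "fy": K[4], "cx": K[2], "cy": K[5]}
    return [(name, cfg[name], val)
            for name, val in expected.items()
            if abs(cfg[name] - val) > tol]
```

Any non-empty return value means the JSON disagrees with the driver's calibration for that entry.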

@lvmingzhe

I'm sorry for replying in a closed issue, but I encountered the same problem. The dataset is tum_rgbd_dataset_freiburg1_xyz, and I used thien94's orb_slam3_ros for pose estimation, configuring the corresponding ns_orb3_tum.json:

{
    "fx": 525.0,
    "fy": 525.0,
    "cx": 319.5,
    "cy": 239.5,
    "k1": 0.0,
    "k2": 0.0,
    "k3": 0.0,
    "p1": 0.0,
    "p2": 0.0,
    "H": 480,
    "W": 640,
    "image_topic": "/camera/rgb/image_color",
    "pose_topic": "/orb_slam3/camera_pose"
}

After that, I run the following steps.

Step 1:

roslaunch orb_slam3_ros tum_rgbd.launch

Step 2:

rosbag play ~/data/rgbd_dataset_freiburg1_xyz.bag

Step 3:

python ros_train.py --method_name ros_nerfacto --data /home/hello/code/nerf_bridge/ns_orb3_tum.json --pipeline.datamanager.data_update_freq 1.0

Finally, I got this result, which does not look good.

[Screenshot from 2023-11-01 12-19-09]

Could you please give me some suggestions? Thanks.

@lvmingzhe

I solved this problem by following thien94/orb_slam3_ros#7 (comment):
change RGB.DepthMapFactor to 1.0 in TUM<version>.yaml
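For reference, the setting lives in the ORB-SLAM3 settings file for the TUM sequences. A sketch of the relevant entry, with the key name quoted from the comment above (check the exact spelling in your copy of the yaml, as some versions spell it RGBD.DepthMapFactor):

```yaml
# TUM<version>.yaml — ORB-SLAM3 settings file.
# The TUM rosbags publish depth already scaled to metres, so the factor
# should be 1.0 rather than the 5000.0 used for the dataset's PNG files.
RGB.DepthMapFactor: 1.0
```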

@Augusthyq

UPDATE: Issue was resolved by changing the camera parameters in the .json file.

Sorry, I have encountered the same problem. How did you change your json file?

@javieryu
Owner

@Augusthyq Happy to help if you can open up a new issue, and include your setup details.


6 participants