RealSense Support #14
What configs should I use for RealSense D435i?
Hi, the error could be due to the following reasons:
How many iterations of tracking and mapping are you using? Also, what's the frame rate of the capture, and how much is the sensor moving between each frame? In the current version, SplaTAM's number of tracking iterations depends on the camera motion between frames. An easy way to debug this would be to set the config at `SplaTAM/configs/iphone/splatam.py`, line 64 (at commit `df59ee2`).
You can also set the config at `SplaTAM/configs/iphone/splatam.py`, line 59 (at commit `df59ee2`).
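The permalinked config lines above are not reproduced in this thread, so as a hedged sketch: the key names below are assumed from the `tracking=dict(...)` block quoted further down in this issue, and the real iPhone config may differ.

```python
# Hedged sketch of pinning the tracking iteration count in a SplaTAM-style
# config. Key names are assumed, not taken from the permalinked lines.
tracking_iters = 60  # raise this if the camera moves a lot between frames

config = dict(
    tracking=dict(
        num_iters=tracking_iters,  # per-frame tracking iterations
        use_gt_poses=False,        # optimize the pose instead of trusting input
        forward_prop=True,         # seed each frame's pose from recent motion
    ),
)
```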
Hi, I am sure the depth scaling and intrinsics are correct; the intrinsics in the picture above are not the newest. I use 40 iterations of tracking and 60 iterations of mapping, the same as the Replica config file. The frame rate is 10 FPS; maybe that is too low, so I will run another test at 30 FPS. Thank you! By the way, is there a way to visualize the whole reconstruction process? The terminal output is not very intuitive. Can I visualize the whole process for RealSense data?
I recommend not using the Replica config file as your base config. The iPhone config file is more reasonable in terms of learning rates and other parameters. Unfortunately, we don't currently have a way to look at the reconstruction while SplaTAM is running; that's in the works. We generally use the
Okay!
Hi, I have collected data many times. During collection I move the camera slowly and walk slowly, and I record at 30 fps, but the code still reports that CUDA is out of memory: the 32 GB is used up. The resolution is 1280×720. What should I do, and what is going wrong?
If I provide external camera poses and do not want to use your tracking method, what should I do?
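Since the out-of-memory errors above occur with 1280×720 input, one common workaround is to downscale the RGB-D frames before they reach the pipeline. This is a hypothetical preprocessing sketch, not a SplaTAM option; `downscale_frame` is a made-up helper, and the key point is that the intrinsics must be scaled by the same factor as the images.

```python
import numpy as np

def downscale_frame(rgb, depth, intrinsics, factor=2):
    """Subsample an RGB-D frame (hypothetical preprocessing step).
    The pixel count drops by factor**2, and the camera intrinsics
    (fx, fy, cx, cy) must be divided by the same factor."""
    fx, fy, cx, cy = intrinsics
    return (rgb[::factor, ::factor],
            depth[::factor, ::factor],
            (fx / factor, fy / factor, cx / factor, cy / factor))

# Example with a dummy 1280x720 frame and assumed intrinsics.
rgb = np.zeros((720, 1280, 3), dtype=np.uint8)
depth = np.zeros((720, 1280), dtype=np.float32)
rgb_s, depth_s, intr_s = downscale_frame(rgb, depth, (600.0, 600.0, 640.0, 360.0))
# 1280x720 -> 640x360: a quarter of the pixels per frame
```

Halving each dimension quarters the number of Gaussians needed to cover the image, which is usually the dominant memory cost.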
Also interested in this! Starting to try with Realsense D455 |
Hi @hhcxx2006, it would be great if you could share the data so that we can take a look (if things still aren't working). Thanks for your interest in a RealSense demo; we will potentially consider this for V2.
I'm using the Replica config file to test datasets collected with a RealSense D435. It turns out the mapping quality is not good enough; in particular, edges and corners cannot be reconstructed. (The intrinsics and depth scale are set accurately.) Can I get some suggestions on selecting configuration parameters?
Just set `use_gt_poses=True` in `configs/replica/splatam.py`:

```python
tracking=dict(
    use_gt_poses=False, # Use GT Poses for Tracking
    forward_prop=True, # Forward Propagate Poses
    num_iters=tracking_iters,
    use_sil_for_loss=True,
    sil_thres=0.99,
    use_l1=True,
    ignore_outlier_depth_loss=False,
    loss_weights=dict(
        im=0.5,
        depth=1.0,
    ),
    lrs=dict(
        means3D=0.0,
        rgb_colors=0.0,
        unnorm_rotations=0.0,
        logit_opacities=0.0,
        log_scales=0.0,
        cam_unnorm_rots=0.0004,
        cam_trans=0.002,
    ),
),
```

But in my experience, even if I use gt_poses, the mapping result isn't good enough. I can think of some reasons for this:
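For the earlier question about supplying external camera poses, the quoted config can be flipped like this. A hedged sketch only: whether `num_iters` can simply be zeroed when tracking is bypassed is an assumption, not something confirmed in this thread.

```python
# Hedged sketch: the tracking=dict(...) block above, flipped to consume
# externally supplied poses. With use_gt_poses=True the per-frame pose
# optimization is bypassed, so the camera learning rates
# (cam_unnorm_rots, cam_trans) no longer matter.
tracking = dict(
    use_gt_poses=True,  # take poses from the dataloader instead of tracking
    forward_prop=True,
    num_iters=0,        # assumption: no tracking iterations are needed
)
```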
Hello, have you solved this problem? Even if I set the parameters very low, my computer still runs out of GPU memory.
Hi, I wonder how to use my own RealSense data with this project. I use your RealSense dataloader in `datasets/gradslam_datasets/realsense.py`. There are 1777 frames in total, and each frame's resolution is 1280×720. I initialize the camera pose with `P = torch.tensor([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]).float()`, where the num is equal to the number of frames. After I execute `python3 scripts/splatam.py configs/realsense/splatam.py`, it runs much more slowly than on the Replica dataset, and after a while it reports that CUDA is out of memory. However, I use a Tesla V100, which has 32 GB of memory. Is that not enough? I also want it to run as fast as the Replica dataset. What can I do? Thank you! Here is the configs/realsense/splatam.py file: [splatam.zip](https://github.com/spla-tam/SplaTAM/files/13611133/splatam.zip)
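The pose initialization described in that comment can be sketched as follows. NumPy stands in for the torch code quoted above, purely for illustration; interpreting `diag(1, -1, -1, 1)` as a y/z axis flip between camera conventions is my reading, not something stated in the thread.

```python
import numpy as np

# NumPy stand-in for the torch snippet above: diag(1, -1, -1, 1) flips the
# y and z axes (a common camera-convention change), and the same 4x4 pose
# is replicated once per frame when no real trajectory is available.
P = np.array([[1, 0, 0, 0],
              [0, -1, 0, 0],
              [0, 0, -1, 0],
              [0, 0, 0, 1]], dtype=np.float32)

num_frames = 1777  # total frames mentioned in the comment above
poses = [P.copy() for _ in range(num_frames)]
```

Note that with identical poses for every frame, tracking must recover all camera motion itself; this only serves as an initialization.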