FATL| Cuda Error: invalid pitch argument #35
The stack trace unfortunately contains little information, since it seems that the binary was compiled without debug symbols. If you change the build type to one that includes debug info, the trace should become more useful. Apart from that, what I would check is whether the program is able to correctly load the image files, for example by printing the image dimensions, or using the […].
I've compiled with RelWithDebInfo and Debug, but neither of them gives a stack trace with source code info. Should I pass some argument while running the executable? Sorry for the inconvenience, my C++ skills are very rusty. The program crashes right after it starts running, so I can't test […]. Regarding the calibration, I've started the live input and then got the new matrix from here. Would that approach work?
Oh, that stack trace probably comes from the fatal log output then, and not from gdb? In that case, you can run the program in the gdb debugger by prepending […]. Then, once it crashes, type […]. Another thing to consider with CUDA errors is that they may be reported asynchronously, i.e. at a later API call than the one that actually failed. In a case like this one, where the pitch is wrong, I would suspect that not to be the case, though. The approach for the intrinsics should work as long as you record the undistorted images (the ones that the live input functionality passes to the SLAM system) and not the original images provided by the camera.
Below is the output of […]. I am trying to evaluate the performance of the algorithm on videos I record with an Azure Kinect. So what I did is: the Kinect exports them to an .mkv file, then I extract the depth and RGB images from that. Then I build a dataset folder like the one above and open it in Dataset Playback. Do you think there is an easier way for my case? If so, I can adopt that approach instead.
It usually shouldn't crash the program, and definitely not in this place, but if using the original images I would expect the results to be so bad that I probably wouldn't even bother trying. The images must be processed in the same way the live input does it in order for the calibration model to fit. The easiest way to achieve this might be to insert a few lines of code into the live input code to directly save the pre-processed images during recording, but that requires running the SLAM program while recording. Otherwise, the transformations must be applied to the saved images. Thanks for the new stack trace; it shows that the program crashes when attempting to transfer the depth image to the GPU (in […]).
Running the SLAM code while recording is problematic for me because I was only able to install the dependencies on a desktop machine and I can't move it :) I'd like to apply the transformations while I get the input. I guess I'll need to follow […]. The RGB dimensions and depth dimensions are different; I just resized them through ImageMagick to check whether the error disappears, and now it says […].
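For what it's worth, a sketch of such a resize with ImageMagick (the 1280x720 target and file names are just examples; note that this only makes the dimensions match and does not make the depth geometrically consistent with the color calibration, and that depth maps should use point sampling so invalid-depth pixels are not blended into their neighbors):

```shell
mkdir -p depth depth_resized
# Create a dummy 640x576 "depth" frame purely for demonstration.
convert -size 640x576 xc:gray depth/0001.png
# Point (nearest-neighbor) filtering avoids interpolating depth values;
# the '!' forces the exact size even though the aspect ratio changes.
convert depth/0001.png -filter point -resize '1280x720!' depth_resized/0001.png
identify -format "%wx%h\n" depth_resized/0001.png   # prints 1280x720
```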
Yes, that would be one way to approach it. Sorry for not responding earlier; I did not notice your later edits, since I don't get notification e-mails for them.
I'm exploring the Dataset Playback. It works fine with the ETH cables_1 dataset. I've mimicked the structure of that dataset with the recording I get from the Azure Kinect, but I get the following error. Since it works on the other dataset, I have a feeling that I made a mistake while constructing the dataset rather than having a CUDA issue. At the very bottom you can find more info about the dataset folder I use.
The structure of the dataset folder:
Content of associated.txt:
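For reference, the ETH3D / TUM RGB-D datasets pair frames through an `associated.txt` in which each line references one color and one depth image by timestamp. The sample below is made up for illustration, and the exact column order is best copied from the cables_1 dataset that already works:

```
# one line per frame pair: rgb_timestamp rgb_path depth_timestamp depth_path
1311868164.363181 rgb/1311868164.363181.png 1311868164.338541 depth/1311868164.338541.png
```

A quick sanity check is to diff the structure of this file (and the folder layout) against the working cables_1 dataset line by line.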
I've filled calibration.txt with the values printed here, at lines 236-239.