RuntimeError: sym_strides() called on an undefined Tensor #596
Do you get an error if you use […]? This is a blind guess, but are you accessing the underlying C pointer for any tensors? I couldn't see what is happening inside […].
Error for autograd_mode="dense":
That's A LOT of memory to be allocating. What's the size of your optimization problem? In particular:

Also, I didn't see anything too strange inside […].
@luisenp repro script here:
Try setting smaller w and h.
baspacho_sparse_solver only accepts the device names "cpu" or "cuda", but mine is "cuda:0".
After disabling the device check by commenting out line 775 in objective.py, autograd_mode="dense" works.
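Rather than editing the library, a less invasive workaround is to normalize the device string before passing it in. This is a hypothetical sketch, not Theseus code; the helper name is ours:

```python
# Hypothetical workaround sketch: strip an index suffix such as ":0"
# from a torch-style device string, so that "cuda:0" becomes "cuda"
# before it reaches a solver that only accepts "cpu" or "cuda".
def normalize_device(device_name: str) -> str:
    """Return the base device type, dropping any ':<index>' suffix."""
    return device_name.split(":")[0]

print(normalize_device("cuda:0"))  # -> cuda
print(normalize_device("cpu"))     # -> cpu
```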
Thanks! I played a bit with this, and it seems that I also ran with autograd mode dense, with a smaller image size (48 * 64), and the code also works. However, for your full image size I'm not sure you'll be able to run our optimizers. You have two optimization variables of size 480 * 640 = 307200, so you'll end up having to solve a dense linear system with a matrix of size 614400 * 614400, which is something like 1.5 TB of memory. Maybe you need to revisit how you are formulating this problem?
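The back-of-envelope arithmetic behind that 1.5 TB figure (assuming float32 storage for the dense system matrix):

```python
# Two image-sized optimization variables of shape (480, 640).
n_vars = 2 * 480 * 640
print(n_vars)  # -> 614400

# A dense n x n system matrix in float32 (4 bytes per entry).
bytes_needed = n_vars * n_vars * 4
print(f"{bytes_needed / 1e12:.2f} TB")  # -> 1.51 TB
```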
Oh sorry, I missed your most recent messages. I'll open an issue for the Baspacho bug; thanks for catching this!
Thanks! Do you have any suggestions about the choice of linear_solver and linearization for this problem?
For problems with many cost functions and sparse connectivity, Baspacho is usually the best choice. On the other hand, if you have a single cost function, then there is no advantage to using a sparse solver, because the linear systems will be dense; in that case CholeskyDenseSolver is probably better. When there are only a few cost functions, it's hard to say; it might require some experimentation. That being said, for your current formulation the main issue is the extremely large number of optimization variables. Are you able to reformulate this problem in some way so that the dimension of the optimization variables is lower?
Thanks, I will try sparse depth encoding.
@EXing We don't have a Matrix class, unfortunately, as we prioritized other features. We do welcome community contributions and pull requests, so if you are interested we can give you some guidance on how to get started.
I would like to try, but I don't have enough time for this.
By downsampling depth to (60, 80) outside the cost function and upsampling it back to (480, 640) inside the cost function, the optimization is able to run.
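The down/upsampling trick can be sketched as follows. This is a minimal NumPy illustration using nearest-neighbor resampling; the actual code would more likely use torch.nn.functional.interpolate, but the idea is the same: optimize a small (60, 80) depth map and expand it to (480, 640) only inside the cost function.

```python
import numpy as np

def downsample(depth: np.ndarray, factor: int = 8) -> np.ndarray:
    """Keep every factor-th pixel: (480, 640) -> (60, 80) for factor=8."""
    return depth[::factor, ::factor]

def upsample(depth: np.ndarray, factor: int = 8) -> np.ndarray:
    """Nearest-neighbor expansion: (60, 80) -> (480, 640) for factor=8."""
    return depth.repeat(factor, axis=0).repeat(factor, axis=1)

full = np.zeros((480, 640), dtype=np.float32)
small = downsample(full)           # optimization variable: 4800 entries
print(small.shape)                 # -> (60, 80)
print(upsample(small).shape)       # -> (480, 640)
```

This shrinks the optimization variable from 307200 entries to 4800, which is what makes the linear system tractable.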
I found out the dense solver is faster. But […]
The problem may not necessarily be the use of view, but some combination of view with the way the tensor is created, or with other operations. What I know is that when I replaced the operation […]

That's strange about LUDenseSolver working but CholeskyDenseSolver not. Do you have a repro?
LUDenseSolver only works with LevenbergMarquardt.
If you are using GaussNewton, then that's not too surprising, because we don't use damping when solving the linear system, and you can get the kind of error you reported above in some cases. |
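The role of damping can be seen in a small NumPy illustration (not Theseus code). Gauss-Newton solves J^T J dx = -J^T r directly, so a rank-deficient Jacobian makes J^T J singular and a Cholesky factorization fails; Levenberg-Marquardt adds a damping term lam * I, which makes the system positive definite:

```python
import numpy as np

# Rank-deficient Jacobian: the second column is all zeros, so J^T J
# is singular and has an exact zero pivot during Cholesky.
J = np.array([[1.0, 0.0],
              [2.0, 0.0]])
JtJ = J.T @ J

try:
    np.linalg.cholesky(JtJ)       # Gauss-Newton-style factorization
    gn_ok = True
except np.linalg.LinAlgError:
    gn_ok = False
print(gn_ok)  # -> False

# Levenberg-Marquardt-style damping restores positive definiteness.
lam = 1e-3
lm_factor = np.linalg.cholesky(JtJ + lam * np.eye(2))  # succeeds
```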
Still the same code.
That's not necessary, even when using […].
It's been a while so I'll close this issue. Feel free to reopen if this exact problem still persists. |
❓ Questions and Help
Hi, do you have any idea about this issue?
The code uses two kinds of losses to do pose-graph bundle adjustment: