

Gradient descent not fast enough? Tired of managing memory and juggling template parameters to interface with your favorite nonlinear solver in C++?

OpTorch lets you write your cost functions as PyTorch modules and seamlessly optimize them in Ceres, Google's industrial-strength solver. We use OpTorch for automatic ground-truthing at Pronto, but there may be bugs or poor performance for use cases we haven't considered. We want to make OpTorch the fastest and easiest-to-use nonlinear solver frontend, so issues and PRs are welcome!


pip install optorch

API Docs


Optimizing a single parameter with a single residual:

import torch
import optorch

class SimpleCost(torch.jit.ScriptModule):
    def forward(self, x):
        return 10 - x

x = torch.tensor(0., dtype=torch.float64)

problem = optorch.Problem()
problem.add_residual(SimpleCost(), x)
problem.solve()

print(f'final x: {x.item()}')
$ python
started optorch main
iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  5.000000e+01    0.00e+00    1.00e+01   0.00e+00   0.00e+00  1.00e+04        0    6.16e-02    6.16e-02
   1  4.999000e-07    5.00e+01    1.00e-03   1.00e+01   1.00e+00  3.00e+04        1    3.90e-04    6.20e-02
   2  5.554074e-16    5.00e-07    3.33e-08   1.00e-03   1.00e+00  9.00e+04        1    1.14e-04    6.22e-02
Terminating: Parameter tolerance reached. Relative step_norm: 3.332852e-09 <= 1.000000e-08.
final x: 9.99999996667111
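The first row of the log can be sanity-checked by hand, assuming Ceres's standard objective of 0.5 * r² over the residuals. With the initial x = 0, the residual from SimpleCost is 10:

```python
# Sanity check of iteration 0, assuming Ceres minimizes 0.5 * r^2.
x = 0.0
r = 10 - x              # residual returned by SimpleCost
cost = 0.5 * r * r      # 0.5 * 10^2 = 50, matching the "cost" column
grad = -(10 - x)        # d/dx [0.5 * (10 - x)^2] = -(10 - x)
print(cost, abs(grad))  # 50.0 10.0, matching the |gradient| column
```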

Fitting a circle (abridged from a longer example):

class DistanceFromCircleCost(torch.jit.ScriptModule):
    def __init__(self, xx, yy):
        super().__init__()
        # constant parameters
        self.xx = torch.nn.Parameter(xx)
        self.yy = torch.nn.Parameter(yy)

    def forward(self, x, y, m):
        # radius = m^2
        r = m * m
        # position of sample in circle's coordinate system
        xp = self.xx - x
        yp = self.yy - y
        # nicer, more convex loss compared to r - sqrt(xp^2 + yp^2)
        return r*r - xp*xp - yp*yp

# generate noisy circle points at (20, -300), radius = 45
import math, random
pts = [(20 + 45 * math.cos(t) + random.gauss(0, 1),
        -300 + 45 * math.sin(t) + random.gauss(0, 1))
       for t in [2 * math.pi * i / 100 for i in range(100)]]

problem = optorch.Problem()
# initial estimates
x = torch.tensor(0.)
y = torch.tensor(0.)
# parameterize radius as m^2 so it can't be negative
m = torch.tensor(1.)
for xx, yy in pts:
    cost = DistanceFromCircleCost(torch.tensor([xx]), torch.tensor([yy]))
    problem.add_residual(cost, x, y, m)
problem.max_iterations = 200
problem.solve()
print(f'final: {x.item()} {y.item()} {(m*m).item()}')
final: 21.31574249267578 -301.0490417480469 45.86789997422807
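To see why the r² - (xp² + yp²) residual works, evaluate the same algebra by hand for a point lying exactly on the target circle (plain Python, no OpTorch needed; the numbers are the ground-truth circle from the example above):

```python
import math

def residual(x, y, m, xx, yy):
    # same algebra as DistanceFromCircleCost.forward
    r = m * m                     # radius parameterized as m^2
    xp, yp = xx - x, yy - y
    return r * r - xp * xp - yp * yp

m = math.sqrt(45.0)               # so m*m == 45, the true radius
on_circle = residual(20.0, -300.0, m, 65.0, -300.0)   # point on the circle
off_circle = residual(20.0, -300.0, m, 70.0, -300.0)  # 5 units outside
print(on_circle, off_circle)      # ~0 on the circle, negative outside it
```

Unlike r - sqrt(xp² + yp²), this form has no square root, so it stays smooth at the circle's center.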


Benchmarks were run for pose graph optimization on the MIT Parking Garage dataset, on subgraphs of 250, 500, and 1000 vertices. The Y axis below is translational error.

[Benchmark plots: 250 vertices (307 edges), 500 vertices (615 edges), 1000 vertices (2635 edges)]

As the plots show, OpTorch is quite a bit slower than g2opy but consistently finds lower losses. If you're just doing SLAM, you probably want g2opy; if you need custom loss functions, OpTorch lets you write them in Python.
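Because residuals are ordinary Python, robustification can be folded directly into forward(). Here is the underlying math in plain Python (a Cauchy-style robust residual; this helper is illustrative, not an OpTorch built-in, and the same expression would translate to torch ops inside a ScriptModule):

```python
import math

def cauchy_residual(r):
    # Least squares on this transformed residual equals the Cauchy loss
    # log(1 + r^2) on the raw residual, which grows slowly for outliers.
    return math.copysign(math.sqrt(math.log1p(r * r)), r)

# An inlier is barely changed; a large outlier is heavily down-weighted:
print(cauchy_residual(0.1))    # ~0.0998, close to the raw residual
print(cauchy_residual(100.0))  # ~3.03, instead of 100
```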


The OpTorch codebase is released by Pronto AI under the MIT license.

The binary packages available on PyPI are statically linked to:
