benchmarking infrastructure #61

Merged · 5 commits · May 21, 2020
Conversation

marcrasi (Collaborator) opened the pull request:
This adds a benchmark for Pose2SLAM using NonlinearFactorGraph, Gauss-Newton, and CGLS.

I added a section to the README showing how to run it.

Since this adds an external dependency, I committed a `Package.resolved` that pins us to a specific commit. Running `swift package update` bumps it to newer commits.

@saeta left a comment:
👍

Review comment on this snippet:

```swift
// The linear solver is 100 iterations of CGLS.
suite.benchmark(
  "NonlinearFactorGraph, Intel, 5 Gauss-Newton steps, 100 CGLS steps",
  settings: .iterations(1)
```

Results are going to be quite noisy if you only run a single iteration. Is it not practical to run it for longer?

marcrasi (Collaborator, Author) replied:
This is benchmarking a whole solver, and it's pretty slow right now (~12 seconds).

@shabalind replied on May 18, 2020:
I see, thanks for the clarification.

Review comment on this snippet:

```swift
  settings: .iterations(1)
) {
  var val = intelDataset.initialGuess
  for _ in 0..<5 {
```

@shabalind commented on May 18, 2020:
Usually, it's better to avoid loops in benchmarks. Instead, you can make just the loop body the benchmark and customize the number of iterations with `.iterations` (or, better, leave it off so the framework detects a suitable count automatically).
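As a hedged sketch of that suggestion, using the same google/swift-benchmark API the PR already calls (the suite name and the `doGaussNewtonStep` helper are illustrative, not part of the PR):

```swift
import Benchmark

let suite = BenchmarkSuite(name: "Pose2SLAM")

// Measure a single Gauss-Newton step. With no .iterations setting,
// the framework chooses the iteration count automatically, which
// reduces noise compared to a single hand-specified iteration.
suite.benchmark("one Gauss-Newton step") {
  var val = intelDataset.initialGuess
  doGaussNewtonStep(&val)  // illustrative helper, not the PR's API
}

Benchmark.main([suite])
```

This only applies when a single step is meaningful in isolation; as discussed below, here the loop itself is part of the solver being measured.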

marcrasi (Collaborator, Author) replied:
This loop is part of the solver logic that we are benchmarking, so I think it makes sense to have it in the benchmark. Eventually we're going to have an API for the solver, the benchmark body will look like `solve(graph, initialGuess)`, and this loop will be hidden inside the `solve` function.
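Once such a `solve` API exists, the benchmark body might collapse to a single call. A hypothetical sketch (`solve` and `intelDataset.graph` are assumed names, not an API the PR defines):

```swift
suite.benchmark(
  "NonlinearFactorGraph, Intel, 5 Gauss-Newton steps, 100 CGLS steps",
  settings: .iterations(1)
) {
  // The Gauss-Newton loop now lives inside `solve`, so the
  // benchmark measures the whole solver through one entry point.
  _ = solve(intelDataset.graph, initialGuess: intelDataset.initialGuess)
}
```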

@ProfFan (Collaborator) commented on May 18, 2020:
Also, in the final solver this should be a `while error > tol && i < maxIterations` loop (iterate while the error still exceeds the tolerance and the iteration budget is not exhausted).
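A minimal sketch of that convergence-based stopping rule, with all names illustrative (`graph.error(at:)`, `gaussNewtonStep`, `tol`, and `maxIterations` are assumptions, not the PR's API):

```swift
var val = intelDataset.initialGuess
var iteration = 0
// Keep stepping while the residual error is above tolerance
// and we have not hit the iteration cap.
while graph.error(at: val) > tol && iteration < maxIterations {
  val = gaussNewtonStep(graph, from: val)  // one Gauss-Newton update
  iteration += 1
}
```

This replaces the fixed `for _ in 0..<5` loop with early exit on convergence, which also makes benchmark timings reflect actual work to convergence.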

@ProfFan (Collaborator) left a review:
LGTM and merge at will :)

@marcrasi marcrasi merged commit f923d65 into master May 21, 2020
@marcrasi marcrasi deleted the feature/benchmarking branch May 21, 2020 23:35