This repository has been archived by the owner on Nov 1, 2023. It is now read-only.

Use debuginfo cache #3280

Merged

merged 3 commits into microsoft:main on Jul 17, 2023

Conversation

@tevoinea (Member) commented Jul 7, 2023

Summary of the Pull Request

What is this about?

Closes #3217

@codecov-commenter commented Jul 7, 2023

Codecov Report

Merging #3280 (802f130) into main (a658fc9) will decrease coverage by 0.02%.
The diff coverage is 0.00%.

@@            Coverage Diff             @@
##             main    #3280      +/-   ##
==========================================
- Coverage   31.46%   31.45%   -0.02%     
==========================================
  Files         307      307              
  Lines       37219    37239      +20     
==========================================
  Hits        11712    11712              
- Misses      25507    25527      +20     
Impacted Files                                           Coverage Δ
...c/agent/onefuzz-task/src/tasks/coverage/generic.rs    0.00% <0.00%> (ø)

@mgreisen (Contributor) commented Jul 7, 2023

Do we have tests for these changes?

@ranweiler (Member) commented:
Please document how you observed that the pre-cache step is successfully shortening the runtime of the first coverage iteration.

It would also be interesting to test the error path for when target_exe is missing debuginfo.

Finally, if there isn't already an issue for caching other debuggable modules in the setup container, I'd consider adding one. This would help "factor out" (perf-wise) the module analysis for any linked and loaded libraries that we actually care about.

@tevoinea (Member, Author) commented Jul 17, 2023

Using the Rust example binary from onefuzz-samples.

To measure, I skipped pre-populating the cache by commenting out this line: cache.get_or_insert(&*module)?;. The first run then has no cache entries to use; subsequent runs have a populated cache.
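For reference, here is a minimal, std-only sketch of the kind of get-or-insert debuginfo cache being described. The Module, DebugInfo, DebugInfoCache, and load_debuginfo names are hypothetical stand-ins rather than the actual onefuzz types; only the shape of the get_or_insert(&*module) call mirrors the line quoted above.

```rust
use std::collections::HashMap;

// Local alias so the sketch stays free of external crates.
type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;

// Hypothetical stand-ins for the real debuggable-module types.
struct Module {
    path: String,
}

struct DebugInfo; // parsed symbol / line-table data would live here

#[derive(Default)]
struct DebugInfoCache {
    entries: HashMap<String, DebugInfo>,
}

impl DebugInfoCache {
    // Mirrors the shape of the `cache.get_or_insert(&*module)?` call quoted
    // above: parse debuginfo at most once per module, then reuse it.
    fn get_or_insert(&mut self, module: &Module) -> Result<&DebugInfo> {
        if !self.entries.contains_key(&module.path) {
            let info = load_debuginfo(module)?; // expensive step, paid once
            self.entries.insert(module.path.clone(), info);
        }
        Ok(&self.entries[&module.path])
    }
}

fn load_debuginfo(_module: &Module) -> Result<DebugInfo> {
    // Placeholder for the expensive symbol / line-table parsing.
    Ok(DebugInfo)
}

fn main() -> Result<()> {
    let target_exe = Module { path: "fuzz_target".into() };
    let mut cache = DebugInfoCache::default();

    // Warming the cache before the first input means the first coverage
    // iteration no longer pays the parsing cost.
    cache.get_or_insert(&target_exe)?;
    Ok(())
}
```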

Without the pre-populated cache, record_input elapsed time on the first run: 13 seconds.
Average elapsed time after the first run: 930 ms.

This means that before this PR we were spending ~13 seconds per iteration, since the cache wasn't stored across inputs.

With the pre-populated cache, elapsed time for the first run: 1.2 seconds.
Average elapsed time after the first run: 930 ms.

From 13 seconds down to 1.2 seconds on the first iteration, over 10 times faster.
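For anyone reproducing numbers like these, a hedged sketch of how per-iteration timings can be collected with std::time::Instant; record_input here is a hypothetical stand-in for the coverage-recording step, not the actual onefuzz function.

```rust
use std::time::Instant;

// Hypothetical stand-in for the coverage-recording step whose elapsed
// time is being compared above.
fn record_input(input: &[u8]) {
    // ... run the target on `input` and record coverage ...
    let _ = input;
}

fn main() {
    // A few dummy inputs; in the real task these come from the corpus.
    let inputs: Vec<Vec<u8>> = vec![vec![0u8; 16]; 5];

    for (i, input) in inputs.iter().enumerate() {
        let start = Instant::now();
        record_input(input);
        // Iteration 0 is the one that pays (or, with a warmed cache,
        // skips) the debuginfo parsing cost; later iterations reuse it.
        println!("iteration {i}: {:?}", start.elapsed());
    }
}
```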

@tevoinea enabled auto-merge (squash) July 17, 2023 20:35
@tevoinea merged commit 9a792cf into microsoft:main Jul 17, 2023
24 checks passed
@AdamL-Microsoft mentioned this pull request Jul 26, 2023

Successfully merging this pull request may close these issues:

Eagerly analyze debuggable modules in coverage task (#3217)