Show timing and allocations for tests #1787
Conversation
Could we maybe use something less intrusive, without all the special commands? I think this leads to quite an unidiomatic and surprising test structure. Maybe we could use https://github.com/KristofferC/TimerOutputs.jl or something similar?
It is also problematic in general to write into the package directory: it might be read-only (IIRC there were also discussions about possibly making package directories immutable at some point in the future).
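A minimal sketch of this suggestion, assuming the existing testsets are simply wrapped in TimerOutputs' `@timeit` (the label names are illustrative, not from this PR):

```julia
using Test, TimerOutputs

const to = TimerOutput()

# Wrap an existing testset; the tests themselves need no special commands.
@timeit to "inference" @testset "inference" begin
    @test 1 + 1 == 2
end

# Print the accumulated times and allocations once everything has run.
print_timer(to)
```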
This is now the output for [image omitted].
I'm not sure how I can write to a temporary directory without either hardcoding the temporary filename or passing it via a GitHub Actions env variable from one step to the next. Both seem not so ideal, so I have now removed the functionality altogether. The time and allocations information is now only shown at the end of the tests.

EDIT: Wrong nesting fixed in 8430bf7.
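For reference, the hand-off between steps that was considered could be done from Julia by appending to the `$GITHUB_ENV` file that GitHub Actions provides. This is only a sketch of the rejected approach; `timings.csv` and `TIMINGS_FILE` are made-up names:

```julia
# Write the timing data to a fresh temporary directory...
path = joinpath(mktempdir(), "timings.csv")  # hypothetical file name

# ...then expose the path to later workflow steps via $GITHUB_ENV,
# which GitHub Actions reads after the step finishes.
if haskey(ENV, "GITHUB_ENV")
    open(ENV["GITHUB_ENV"], "a") do io
        println(io, "TIMINGS_FILE=$path")  # hypothetical variable name
    end
end
```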
The wrong ordering of the sub-elements is an upstream bug. I'll try to open a PR there. MWE:

```julia
julia> using TimerOutputs

julia> to = TimerOutput();

julia> @timeit to "group" begin
           @timeit to "sub1" sleep(0.1)
           @timeit to "sub2" sleep(0.1)
       end

julia> print_timer(to; compact=true, sortby=:firstexec)
───────────────────────────────────────────────────
                        Time          Allocations
                   ───────────────  ───────────────
 Total measured:       46.0s            2.29MiB

 Section   ncalls    time    %tot     alloc    %tot
───────────────────────────────────────────────────
 group          1   202ms  100.0%   2.55KiB  100.0%
   sub2         1   101ms   50.0%      320B   12.3%
   sub1         1   101ms   50.0%      320B   12.3%
───────────────────────────────────────────────────
```

EDIT: PR at KristofferC/TimerOutputs.jl#144.
I like this much more 🙂 I think the summary can be useful, and have only minor suggestions.
Co-authored-by: David Widmann <devmotion@users.noreply.github.com>
Codecov Report

```
@@           Coverage Diff           @@
##           master    #1787   +/-   ##
=======================================
  Coverage   81.70%   81.70%
=======================================
  Files          24       24
  Lines        1492     1492
=======================================
  Hits         1219     1219
  Misses        273      273
```

Continue to review the full report at Codecov.
You could also take a look at https://github.com/Ferrite-FEM/Tensors.jl/blob/f2d296bc4f75f803f5dca931b6dfd642637733d1/test/runtests.jl#L8-L16. Then you can just use
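Roughly, the pattern at that link is a small macro that wraps each `@testset` in `@timeit`, so individual test files need no special commands. A sketch under that assumption (I haven't verified the linked lines, and the macro name here is illustrative):

```julia
using Test, TimerOutputs

const to = TimerOutput()

# Run a testset and time it under the same label in the shared TimerOutput.
macro timed_testset(name, body)
    quote
        @timeit to $(esc(name)) @testset $(esc(name)) begin
            $(esc(body))
        end
    end
end

@timed_testset "ad backends" begin
    @test true
end

print_timer(to)
```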
@devmotion which do you prefer? The
Since we only use it in
Looks good, I left some final comments.
Co-authored-by: David Widmann <devmotion@users.noreply.github.com>
Given that we already run a lot of tests on a lot of systems, it makes sense to collect some timing and allocations information. This PR collects that information into an ordered dictionary while the tests run and, once the tests are done, converts it into a DataFrame. This can then be pretty-printed at the end of the tests and in a separate step in the GitHub Action. That separate Actions step makes it easy to see the output without having to scroll through all the logs.
This PR will also make it easier to detect performance regressions when working on upstream packages with integration tests, such as Libtask.jl.
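A rough sketch of that flow; the helper and variable names below are my own, not the PR's actual code:

```julia
using DataFrames, OrderedCollections, Test

const TIMINGS = OrderedDict{String,NamedTuple}()

# Run a testset and record its wall time and allocations under `name`.
function timed(f, name::String)
    stats = @timed f()
    TIMINGS[name] = (time_s = stats.time, mib = stats.bytes / 2^20)
    return stats.value
end

timed("example") do
    @testset "example" begin
        @test sum(1:10) == 55
    end
end

# Convert the ordered dictionary into a DataFrame for pretty printing.
df = DataFrame(name = collect(keys(TIMINGS)),
               time_s = [t.time_s for t in values(TIMINGS)],
               mib = [t.mib for t in values(TIMINGS)])
println(df)
```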
David suggested in another place to use BenchmarkCI.jl. I've tried to apply it here too, but didn't see how that package can be based on the tests; its docs only mention PkgBenchmark.jl, which is based on putting things inside benchmark/ (https://juliaci.github.io/PkgBenchmark.jl/stable/define_benchmarks/). Also, the code that I've added here isn't that much and can now easily be tweaked.