Performance regression (v2.3.0) #963
Could you please provide a reprex? It's hard to know what the problem is if you don't provide any code.
I can reproduce a consistent slowdown (though with less magnitude) running the fs tests.
The last time this happened, it was caused by an increase in garbage collections due to an explicit …
There seems to be a fairly small increase in total GCs; the biggest change seems to be an increase in total memory allocated, which would make the GCs take longer as well.

```r
# testthat 2.3.0
bench::mark(devtools::test(reporter = "check"), iterations = 5)
#> expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc total_time result memory
#> <bch:expr> <bch> <bch:> <dbl> <bch:byt> <dbl> <int> <dbl> <bch:tm> <list> <list>
#> 1 devtools::test(reporter = "check") 5.46s 5.58s 0.180 201MB 2.84 5 79 27.8s <tstt… <df[,…

# testthat 2.2.1
bench::mark(devtools::test(reporter = "check"), iterations = 5)
#> expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc total_time result memory
#> <bch:expr> <bch> <bch:> <dbl> <bch:byt> <dbl> <int> <dbl> <bch:tm> <list> <list>
#> 1 devtools::test(reporter = "check") 3.84s 3.87s 0.257 125MB 3.75 5 73 19.5s <tstt… <df[,…
```
Profiling doesn't help because of the extreme depth of the call stack; @lionel-'s patch to R is likely to help, but it's most likely to be a small increase in overhead in every …
Maybe it depends on the kind of tests, for instance maybe error assertions are slower? Which package are you testing @Eluvias?
You might be right @lionel-; I run tests on a private repo which includes many error assertions.

```r
# testthat 2.2.1
# A tibble: 1 x 13
# expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc total_time result
# <bch:expr> <bch> <bch:> <dbl> <bch:byt> <dbl> <int> <dbl> <bch:tm> <list>
# 1 devtools::test(reporter = "check") 2.31s 2.37s 0.418906 28.3MB 1.25672 5 15 11.9s <tstt~

# testthat 2.3.0
# A tibble: 1 x 13
# expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc total_time result
# <bch:expr> <bch> <bch:> <dbl> <bch:byt> <dbl> <int> <dbl> <bch:tm> <list>
# 1 devtools::test(reporter = "check") 17.9s 18.2s 0.0550637 147MB 0.539624 5 49 1.51m <tstt~
```
Error expectations are the culprit:

```r
bench::mark(
  before = expect_error(stop("foo"), "foo")
)[1:8]
#> # A tibble: 1 x 8
#> expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
#> 1 before 233µs 281µs 3356. 0B 7.31 1378 3

bench::mark(
  after = expect_error(stop("foo"), "foo")
)[1:8]
#> # A tibble: 1 x 8
#> expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
#> 1 after 3.79ms 4.26ms 226. 78.4KB 22.8 99 10
```

Most of the time is spent in …
A surprising amount of time is taken up by … I wonder if … We could also memoise for the duration of the top-level command; this way we allow garbage collection between invocations of devtools commands.

Edit: …
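The memoisation idea above could look something like the following minimal sketch. This is plain base R, not rlang's actual internals; the names `memo_cache`, `memo_lookup`, and `reset_cache` are hypothetical. Results are cached in an environment keyed by name, and the cache is cleared at the start of each top-level command so memory can still be reclaimed between devtools invocations.

```r
# Hypothetical sketch of per-command memoisation (not rlang internals).
# An environment works as a fast hash table keyed by symbol name.
memo_cache <- new.env(parent = emptyenv())

memo_lookup <- function(name, compute) {
  # Return the cached value if we computed it earlier in this command.
  if (exists(name, envir = memo_cache, inherits = FALSE)) {
    return(get(name, envir = memo_cache, inherits = FALSE))
  }
  value <- compute(name)
  assign(name, value, envir = memo_cache)
  value
}

# Called at the start of each top-level command so cached objects
# become collectable again between devtools invocations.
reset_cache <- function() {
  rm(list = ls(memo_cache, all.names = TRUE), envir = memo_cache)
}
```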
Could you use …
Good idea! Down to 1.8 ms:

```r
bench::mark(exists = expect_error(stop("foo"), "foo"))[1:8]
#> # A tibble: 1 x 8
#> expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
#> 1 exists 1.62ms 1.82ms 525. 0B 13.0 242 6
```

Just need to be a bit careful with the base namespace:

```r
ns_exports_has <- function(ns, name) {
  if (is_reference(ns, base_ns_env)) {
    exports <- base_pkg_env
  } else {
    exports <- ns$.__NAMESPACE__.$exports
  }
  !is_null(exports) && exists(name, envir = exports, inherits = FALSE)
}
```
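For illustration (plain base R, not the rlang code above), the speed-up comes from asking the environment directly for a single symbol with `exists(..., inherits = FALSE)` instead of materialising the full character vector of names and searching it. The helpers `scan_has` and `direct_has` below are hypothetical names for the two approaches:

```r
# Throwaway environment standing in for a namespace's exports table.
e <- new.env(parent = emptyenv())
for (i in 1:1000) assign(paste0("sym", i), i, envir = e)

# Slow: ls() allocates a fresh character vector of all names per call.
scan_has <- function(env, name) name %in% ls(env, all.names = TRUE)

# Fast: a single hash lookup with no allocation; inherits = FALSE stops
# the search from walking up parent environments.
direct_has <- function(env, name) exists(name, envir = env, inherits = FALSE)
```

On a large exports table, `direct_has()` does constant work per call, which is also why `mem_alloc` drops to 0B in the benchmark above.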
@Eluvias can you rerun your benchmark with the latest rlang master please?
```r
# testthat * 2.3.0.9000 2019-11-18 [1] Github (r-lib/testthat@8425c3f)
#> # A tibble: 1 x 8
#> expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
#> 1 devtools::test(reporter = "check") 21.1s 21.1s 0.0474156 152MB 0.616403 1 13

# testthat v2.2.1
#> # A tibble: 1 x 8
#> expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
#> 1 devtools::test(reporter = "check") 2.52s 2.52s 0.396269 17.1MB 1.58508 1 4

# dev. version
== Results =====================================================================
Duration: 22.9 s
OK: 388
Failed: 0
Warnings: 0
Skipped: 0

# v2.2.1
== Results =====================================================================
Duration: 2.1 s
OK: 388
Failed: 0
Warnings: 0
Skipped: 0
```
@Eluvias Is this with the latest master of rlang? E.g. …
Apologies, I will re-run w/ the latest …
```r
# w/ rlang dev. version
#> # A tibble: 1 x 8
#> expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
#> 1 devtools::test(reporter = "check") 4.32s 4.35s 0.229066 23.7MB 1.96997 5 43

== Results =====================================================================
Duration: 4.4 s
OK: 388
Failed: 0
Warnings: 0
Skipped: 0
```
phew it seems like the fix is good enough :)
Yes indeed, very good; I can now switch to testthat v2.3.0. Many thanks @lionel-.
We'll send the next rlang to CRAN within a couple of days.
With the new version it takes noticeably more time to run the tests. I changed the reporter, but there is still no improvement.
Can you confirm the performance regression?
Thanks.