
Replace usage of deprecated rlang::with_handlers() #1103


Merged
2 commits merged into main on Apr 16, 2023

Conversation

IndrajeetPatil
Collaborator

Use the suggested replacement instead.

Closes #1102
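
For context, this is roughly the shape of the change (illustrative only; compute() and handle_error() are placeholders, not the actual call sites in styler):

# before: deprecated since rlang 1.0.0
rlang::with_handlers(
  compute(),
  error = function(cnd) handle_error(cnd)
)

# after: the suggested replacement
rlang::try_fetch(
  compute(),
  error = function(cnd) handle_error(cnd)
)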

@codecov-commenter

This comment was marked as off-topic.

@github-actions
Contributor

github-actions bot commented Apr 1, 2023

This is how benchmark results would change (along with a 95% confidence interval in relative change) if 57bbf4e is merged into main:

  • ❗🐌cache_applying: 62.4ms -> 197ms [+208.13%, +223.19%]
  • ❗🐌cache_recording: 839ms -> 935ms [+10.72%, +12.13%]
  • 🚀without_cache: 2.08s -> 1.99s [-5.88%, -3.34%]

Further explanation regarding interpretation and methodology can be found in the documentation.

@github-actions
Contributor

github-actions bot commented Apr 1, 2023

This is how benchmark results would change (along with a 95% confidence interval in relative change) if 57bbf4e is merged into main:

  • ❗🐌cache_applying: 79.6ms -> 255ms [+217.06%, +223.07%]
  • ❗🐌cache_recording: 1.03s -> 1.16s [+10.61%, +13.14%]
  • 🚀without_cache: 2.49s -> 2.38s [-4.9%, -3.8%]

Further explanation regarding interpretation and methodology can be found in the documentation.

@IndrajeetPatil
Collaborator Author

Hmm, the slowdown is real, but I'm not sure how to avoid it since it's coming from upstream.

@lorenzwalthert
Collaborator

Yes. That’s significant. Let’s report that upstream and see if they want to do something about it.

@IndrajeetPatil
Collaborator Author

> Yes. That’s significant. Let’s report that upstream and see if they want to do something about it.

Do you want to create an issue upstream?

@github-actions
Contributor

github-actions bot commented Apr 4, 2023

This is how benchmark results would change (along with a 95% confidence interval in relative change) if 92b3d08 is merged into main:

  • ❗🐌cache_applying: 84.9ms -> 299ms [+244.86%, +259.53%]
  • ❗🐌cache_recording: 1.22s -> 1.39s [+11.53%, +16.19%]
  • 🚀without_cache: 2.89s -> 2.83s [-3.56%, -0.79%]

Further explanation regarding interpretation and methodology can be found in the documentation.

@IndrajeetPatil
Collaborator Author

Okay, in light of the benchmark you posted in the other PR, my updated guess is that this slowdown is coming not from rlang but from an update to some other dependency.

@lorenzwalthert
Collaborator

lorenzwalthert commented Apr 4, 2023

The benchmark in the other PR suggested that moving the checkpoint forward in time did not affect execution speed. However, the {rlang} > 1.0.0 requirement you add in this PR might lead to a different dependency resolution than in #1104. Hence, my suggestion is to require {rlang} > 1.0.0 in a separate PR and see if that alone has an effect. The benchmark I posted in r-lib/rlang actually indicates that try_fetch() is much faster than with_handlers(): r-lib/rlang#1612.

But maybe that doesn't hold for our use case: the benchmark reprex I posted in r-lib/rlang#1612 was very general, so we may have to refine it.
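
A minimal sketch of what a more targeted comparison could look like (an assumption on my part about the shape of such a reprex, not the one from r-lib/rlang#1612):

library(rlang)

# Compare the bare overhead of the two condition-handling entry points;
# the handled expression is trivial on purpose.
bench::mark(
  with_handlers = with_handlers(1 + 1, error = function(cnd) NULL),
  try_fetch     = try_fetch(1 + 1, error = function(cnd) NULL),
  iterations = 1000,
  check = FALSE
)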

@lionel-
Member

lionel- commented Apr 4, 2023

Is there some backtrace capture in the error handlers? try_fetch() preserves more stack than tryCatch() or with_handlers(). This makes the backtraces more informative, but it can be slower when the backtrace is captured with rlang::trace_back() or rlang::abort().

If you call abort() from a handler, try_fetch() is definitely an improvement though. In addition to deeper backtraces, this allows recover() to reach deeper (at the error site instead of the handling site).
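
A small sketch of that pattern (my own illustration, not styler code):

library(rlang)

low_level <- function() abort("low-level failure")

high_level <- function() {
  # try_fetch() runs the handler close to the error site, so rethrowing with
  # abort(parent = cnd) keeps the frames of low_level() in the backtrace and
  # lets recover() step into them.
  try_fetch(
    low_level(),
    error = function(cnd) abort("high-level failure", parent = cnd)
  )
}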

@lorenzwalthert
Collaborator

lorenzwalthert commented Apr 4, 2023

cc: @IndrajeetPatil.

Thanks for the info. We'll have to investigate how we call with_handlers(), after first making sure it's that function call (and not some other dependency) that causes the problem.

@github-actions
Contributor

github-actions bot commented Apr 5, 2023

This is how benchmark results would change (along with a 95% confidence interval in relative change) if db9bbed is merged into main:

  • ❗🐌cache_applying: 61.6ms -> 185ms [+193.24%, +206.19%]
  • ❗🐌cache_recording: 820ms -> 907ms [+10.06%, +11.26%]
  • 🚀without_cache: 1.98s -> 1.91s [-4.14%, -3.31%]

Further explanation regarding interpretation and methodology can be found in the documentation.

@lorenzwalthert
Collaborator

lorenzwalthert commented Apr 9, 2023

After some digging, I think this is a {touchstone} issue.

  • When running the benchmarking script (code below) locally (after touchstone::activate()), the with_handlers() branch takes 2x as long as the new try_fetch() branch:
benchmark_run(
  expr_before_benchmark = {
    library(styler)
    cache_activate(gert::git_branch())
  },
  cache_applying = style_pkg("touchstone/sources/here", filetype = c("R", "rmd")),
  n = 30
)
  • When peeling off the {touchstone} layers and just running system.time(style_pkg("touchstone/sources/here", filetype = c("R", "rmd"))), the new branch is 2x faster. Not sure what's going on here, since the benchmark is really just the expression of interest, and setting iterations > 1 in {touchstone} did not make things better. Here's the relevant source code from {touchstone}:
    callr::r(
      function(expr_before_benchmark, dots, branch, block, iteration, asset_dirs) {
        withr::local_namespace("touchstone")
        withr::local_options(asset_dirs)
        eval(expr_before_benchmark)
        benchmark <- bench::mark(eval(dots[[1]]), memory = FALSE, iterations = 1)
        benchmark_write(benchmark, names(dots), branch = branch, block = block, iteration = iteration)
        benchmark
      },
      args = append(args, lst(iteration)),
      libpath = c(libpath_touchstone(branch), .libPaths())
    )

cc: @assignUser.

@lorenzwalthert
Collaborator

lorenzwalthert commented Apr 16, 2023

I tried to run the exact same expression in the console that is run by {touchstone}.

For current main:

bench::mark(
  style_pkg("touchstone/sources/here", filetype = c("R", "rmd")), 
  iterations = 1, memory = FALSE
)
#> Warning message:
#>   Some expressions had a GC in every iteration; so filtering is disabled. 

By default, filter_gc is TRUE, which, according to the docs, means iterations that contained at least one garbage collection are removed before summarizing; if an expression had a garbage collection in every iteration, filtering is disabled, with a warning.
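
One thing we could try to make the interactive and {touchstone} numbers comparable (just an idea, I haven't run this): turn the filtering off explicitly so both branches are summarized the same way.

bench::mark(
  style_pkg("touchstone/sources/here", filetype = c("R", "rmd")),
  iterations = 1, memory = FALSE, filter_gc = FALSE
)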

On the other hand, rlang-with-handlers never seemed to trigger the above warning.

The new branch rlang-with-handlers is about 2x as fast as main interactively (100ms vs 200ms), but non-interactively it's the other way round (100ms vs 60ms). I really think this is just an artifact related to GC, so let's not bother too much, since the new version should be clearly faster and the diff is very simple. Maybe we'll need some adjustments on {touchstone}'s side to make sure these things are accounted for somehow...
