Debugger experiments #11441
Conversation
To be able to implement multiple debuggers in an organized way.
Is there a way to move forward with this? Or should we leave it until after the next release?
It's almost ready; I just wanted to make the profiler a bit more useful.
@kubouch great news!
Hi @kubouch, I just noticed something that I'm not totally sure is right or wrong. In crates/nu-engine/src/eval.rs:

```rust
if failed {
    // External command failed.
    // Don't return `Err(ShellError)`, so nushell won't show an extra error message.
    return Ok(output);
}
```

but I notice that in these cases,
I might have missed that; it's not intentional.
Description
This PR adds a new evaluator path with callbacks to a mutable trait object implementing a `Debugger` trait. The trait object can do anything, e.g., profiling, code coverage, step debugging. Currently, entering/leaving a block and a pipeline element are marked with callbacks, but more callbacks can be added as necessary. Not all callbacks need to be used by all debuggers; unused ones are simply empty calls. A simple profiler is implemented as a proof of concept.
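To make the callback idea concrete, here is a minimal sketch of what such a trait could look like. The names (`Debugger`, `enter_block`, `Profiler`, `eval_block`) and signatures are illustrative stand-ins, not the exact API from this PR:

```rust
// Hypothetical sketch of a callback-based debugger trait; names and
// signatures are illustrative, not the PR's actual API.
use std::time::Instant;

trait Debugger {
    // Callbacks default to empty bodies, so a debugger only
    // overrides the events it actually cares about.
    fn enter_block(&mut self) {}
    fn leave_block(&mut self) {}
    fn enter_element(&mut self) {}
    fn leave_element(&mut self) {}
}

// Minimal profiler: counts blocks and accumulates time spent in them.
#[derive(Default)]
struct Profiler {
    start: Option<Instant>,
    total_ns: u128,
    blocks: u64,
}

impl Debugger for Profiler {
    fn enter_block(&mut self) {
        self.blocks += 1;
        self.start = Some(Instant::now());
    }
    fn leave_block(&mut self) {
        if let Some(t) = self.start.take() {
            self.total_ns += t.elapsed().as_nanos();
        }
    }
}

// The evaluator would hold a mutable reference to the debugger and
// fire the callbacks around each block/element it evaluates.
fn eval_block(debugger: &mut dyn Debugger) {
    debugger.enter_block();
    // ... evaluate pipeline elements here ...
    debugger.leave_block();
}

fn main() {
    let mut profiler = Profiler::default();
    eval_block(&mut profiler);
    eval_block(&mut profiler);
    assert_eq!(profiler.blocks, 2);
    println!("blocks profiled: {}", profiler.blocks);
}
```

A code-coverage or step debugger would implement the same trait with different callback bodies, which is what makes the trait-object approach extensible.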
The debugging support is implemented by making `eval_xxx()` functions generic over whether we're debugging or not. This has zero computational overhead, but makes the binary slightly larger (see benchmarks below). `eval_xxx()` variants called from commands (like `eval_block_with_early_return()` in `each`) are chosen with dynamic dispatch, for two reasons: to avoid growing the binary size by duplicating the code of many commands, and because a generic parameter there would make `Command` trait objects object-unsafe.

In the future, I hope it will be possible to allow plugin callbacks so that users can implement their profiler plugins instead of having to recompile Nushell. DAP would also be interesting to explore.
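The static-vs-dynamic split above can be sketched as follows. This is a simplified model under assumed names (`DebugContext`, `WithDebug`, `WithoutDebug`, `get_eval_block`); the real functions take `&EngineState` etc., reduced here to an `i64` to stay self-contained:

```rust
// Hypothetical model of the two dispatch strategies. A marker trait
// carries a compile-time flag; eval_xxx() is monomorphized per marker.
trait DebugContext {
    const ACTIVE: bool;
}
struct WithDebug;
struct WithoutDebug;
impl DebugContext for WithDebug {
    const ACTIVE: bool = true;
}
impl DebugContext for WithoutDebug {
    const ACTIVE: bool = false;
}

// Monomorphized per D: with WithoutDebug the branch below is a
// compile-time constant `false` and optimizes away entirely,
// which is the "zero computational overhead" property.
fn eval_block<D: DebugContext>(x: i64) -> i64 {
    if D::ACTIVE {
        // debugger callbacks would fire here
    }
    x + 1
}

// Runtime selection via a function pointer, as commands would do,
// so command code isn't duplicated for every instantiation.
fn get_eval_block(debugging: bool) -> fn(i64) -> i64 {
    if debugging {
        eval_block::<WithDebug>
    } else {
        eval_block::<WithoutDebug>
    }
}

fn main() {
    let eval = get_eval_block(false);
    assert_eq!(eval(41), 42);
    assert_eq!(get_eval_block(true)(41), 42);
    println!("both variants agree");
}
```

The two monomorphized copies of each `eval_xxx()` are what account for the modest binary-size increase measured below.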
Try `help debug profile`.

Screenshots
Basic output:
To profile with more granularity, increase the profiler depth (you'll see that repeated `is-windows` calls take a large chunk of the total time, making it a good candidate for optimizing):

Benchmarks
Binary size
Binary size increase vs. main: +40360 bytes. (Both built with `--release --features=extra,dataframe`.)

Time
`cargo run --release -- bench_debug.nu` is consistently 1-2 ms slower than `cargo run --release -- bench_nodebug.nu` due to the collection overhead plus gathering the report. This is expected; when gathering more data, the overhead is correspondingly higher.

Between `cargo run --release -- bench_nodebug.nu` and `nu bench_nodebug.nu`, I didn't measure any difference. Both benchmarks report times between 97 and 103 ms randomly, without one being consistently higher than the other. This suggests that, at least in this particular case, there is no runtime overhead when not running any debugger.

API changes
This PR adds a generic parameter to all `eval_xxx` functions that forces you to specify whether you use the debugger. You can resolve it in two ways:

- Instead of calling `eval_block(&engine_state, ...)`, call `let eval_block = get_eval_block(&engine_state); eval_block(&engine_state, ...)`.
- Call `eval_block::<WithoutDebug>(&engine_state, ...)` directly (this is the case of hooks, for example).

I tried to add more explanation in the docstring of `debugger_trait.rs`
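A caller-side sketch of those two options, under the same simplifying assumptions as before (the real functions take `&EngineState` and more; here `eval_block` is reduced to an `i64` so the snippet compiles standalone):

```rust
// Hypothetical, heavily simplified caller-side view.
struct WithoutDebug;

// Stand-in for a now-generic eval function; the type parameter
// selects the debug behavior at compile time.
fn eval_block<D>(input: i64) -> i64 {
    input + 1
}

// Option 1: a helper hands back the right concrete instantiation,
// so the caller never names the debug type itself.
fn get_eval_block() -> fn(i64) -> i64 {
    eval_block::<WithoutDebug>
}

fn main() {
    // Option 1: fetch the function, then call it.
    let eval_block_fn = get_eval_block();
    assert_eq!(eval_block_fn(1), 2);

    // Option 2: state explicitly that no debugger is used,
    // as hooks do.
    assert_eq!(eval_block::<WithoutDebug>(1), 2);
    println!("ok");
}
```

Option 1 keeps call sites unchanged apart from the extra lookup; option 2 is for code paths that know statically that no debugger can be attached.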
.

TODO

- `each`
- `TODO: DEBUG` comments
- `debug profile`, explaining all columns

User-Facing Changes
Hopefully none.
Tests + Formatting
After Submitting