Ability to benchmark a top-level binary #2
Thanks for your interest in iai-callgrind. I'm not sure I understand correctly. Could you give me an example with a little more context, and how you would expect the command line of

AFAIK, callgrind does not support recursive process spawns. You would get the event counts for the process spawn itself, but not the event counts for what happens inside the subprocess. There's currently only a hacky way to run a binary directly with

would run the binary. Is this basically what you're looking for? Just with the possibility to pass arguments to the binary?
I would like an API within the iai-callgrind Rust bench API to set up a fixture, run a binary with arbitrary arguments, and then clean up the fixture. Take my project

I would want
Thanks! Your description helped me a lot. I think all of your points should be possible and would be a great addition to the framework. However, this change would be fairly big, and the final implementation may take some weeks.
I'm pretty close to finishing the implementation, so if you like, it would be a great time to hear your feedback and opinions. You can have a look at the implementation on the https://github.com/Joining7943/iai-callgrind/tree/2-ability-to-benchmark-a-top-level-binary branch. I've updated the README there, and most of the new functionality is described in the https://github.com/Joining7943/iai-callgrind/tree/2-ability-to-benchmark-a-top-level-binary#binary-benchmarks section. If you want to try it out with your crate, you can check out the
By "only needed for development", are you saying that this is a short-term hack while this is still being worked on? If not, requiring people to install an extra binary would be a no-go. Haven't given it a try yet, but some quick thoughts:
Gave it a quick go. You can track my side at crate-ci/typos#783. The first problem I ran into is that my

```shell
cd typos
cargo install --git 'https://github.com/Joining7943/iai-callgrind.git' --branch '2-ability-to-benchmark-a-top-level-binary' --root /tmp iai-callgrind-runner
IAI_CALLGRIND_RUNNER=/tmp/bin/iai-callgrind-runner cargo bench --package typos-cli --bench cli
```

to install the binary into

However, there's room for improvement. I thought about installing the runner into a directory (maybe $HOME/.cache/iai-callgrind-runner or somewhere in the target directory), just in case it cannot be found in the PATH. Also, managing different runner versions that way shouldn't be a big problem.
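The fallback lookup described above (an explicit environment variable, then the PATH, then a cache directory) could be sketched roughly like this. The function name and exact search order are illustrative assumptions, not iai-callgrind's actual implementation:

```rust
use std::env;
use std::path::PathBuf;

/// Hypothetical lookup order for the `iai-callgrind-runner` binary:
/// 1. an explicit `IAI_CALLGRIND_RUNNER` environment variable,
/// 2. each directory on `PATH`,
/// 3. a per-user cache directory as a last resort.
fn find_runner() -> Option<PathBuf> {
    // 1. An explicit override always wins.
    if let Ok(p) = env::var("IAI_CALLGRIND_RUNNER") {
        return Some(PathBuf::from(p));
    }
    // 2. Search every directory on PATH for the runner.
    if let Some(path) = env::var_os("PATH") {
        for dir in env::split_paths(&path) {
            let candidate = dir.join("iai-callgrind-runner");
            if candidate.is_file() {
                return Some(candidate);
            }
        }
    }
    // 3. Fall back to a cache location such as $HOME/.cache/iai-callgrind-runner,
    //    which would double as the install target if the runner is missing.
    env::var_os("HOME").map(|home| {
        PathBuf::from(home)
            .join(".cache")
            .join("iai-callgrind-runner")
    })
}

fn main() {
    match find_runner() {
        Some(p) => println!("runner candidate: {}", p.display()),
        None => println!("no runner found"),
    }
}
```

Version management could then hang off step 3, e.g. by suffixing the cached binary with the crate version.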
In general, Criterion benchmarks and walltime benchmarks are very different from
I like to have the possibility of sandboxing because it ensures better and more deterministic results. However, I can add a switch like

Assuming a fixtures directory in

```rust
iai_callgrind::main!(
    fixtures = "benches/fixtures", follow_symlinks = true;
    run = cmd = "typos", args = ["fixtures/words.csv"]
);
```

Note that your benchmark as it currently is would work in sandbox mode because the
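The sandboxing idea discussed above (give the benchmarked binary a clean, deterministic working tree built from the fixtures directory) could be sketched like this. The function name, directory layout, and temp-dir naming are all illustrative assumptions, not how iai-callgrind actually does it:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Sketch of a sandbox setup: copy the fixtures directory into a fresh
/// temporary directory so the benchmarked binary only ever sees a clean
/// working tree, independent of the state of the source checkout.
fn setup_sandbox(fixtures: &Path) -> std::io::Result<PathBuf> {
    let sandbox = std::env::temp_dir().join(format!("iai-sandbox-{}", std::process::id()));
    let dest = sandbox.join("fixtures");
    fs::create_dir_all(&dest)?;
    // Copy each regular file from the fixtures directory into the sandbox.
    for entry in fs::read_dir(fixtures)? {
        let entry = entry?;
        if entry.file_type()?.is_file() {
            fs::copy(entry.path(), dest.join(entry.file_name()))?;
        }
    }
    Ok(sandbox)
}

fn main() -> std::io::Result<()> {
    let fixtures = Path::new("benches/fixtures");
    if fixtures.is_dir() {
        // The runner would then chdir into the sandbox, execute the
        // benchmarked command there, and remove the directory afterwards.
        let sandbox = setup_sandbox(fixtures)?;
        println!("sandbox at {}", sandbox.display());
    }
    Ok(())
}
```

Because every run starts from the same copied tree, results are not perturbed by leftover files from earlier runs.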
You can simply specify the name of the binary, in your case

The

I noticed that
Hi @epage. I integrated some changes. Here's what I added since the last time I wrote:
With all that in place, you should now be able to run benchmarks for the

Your cli benchmark could be written in many ways. For example:

```rust
pub static CORPUS: &str = include_str!("../../typos-dict/assets/words.csv");

fn setup() {
    std::fs::write("words.csv", CORPUS).unwrap();
}

iai_callgrind::main!(
    setup = setup;
    sandbox = false;
    run = cmd = "typos",
        opts = Options::default().exit_with(ExitWith::Code(2)),
        args = ["words.csv"]
);
```

Note that the above example would also work if

```rust
iai_callgrind::main!(
    fixtures = "benches/fixtures", follow_symlinks = true;
    run = cmd = "typos",
        opts = Options::default().exit_with(ExitWith::Code(2)),
        args = ["fixtures/words.csv"]
);
```
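The `setup` function above writes `words.csv` but nothing removes it again, which is exactly the fixture-cleanup concern raised earlier in the thread. Outside of any framework support, one way to pair setup with guaranteed teardown in plain Rust is an RAII guard; this is a hypothetical sketch, not part of the iai-callgrind API:

```rust
use std::fs;
use std::path::PathBuf;

/// Hypothetical RAII fixture guard: `setup` materializes the file and
/// `Drop` removes it again, even if the code in between panics.
struct Fixture {
    path: PathBuf,
}

impl Fixture {
    fn setup(path: &str, contents: &str) -> std::io::Result<Fixture> {
        fs::write(path, contents)?;
        Ok(Fixture { path: PathBuf::from(path) })
    }
}

impl Drop for Fixture {
    fn drop(&mut self) {
        // Teardown: best-effort removal of the fixture file.
        let _ = fs::remove_file(&self.path);
    }
}

fn main() -> std::io::Result<()> {
    let _fixture = Fixture::setup("words.csv", "helllo,hello\n")?;
    // ... run the benchmarked command against words.csv here ...
    Ok(())
} // `_fixture` is dropped here and words.csv is deleted again
```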
For me, the biggest issue is that I'm a bit averse to doing everything within a macro, as that is more obtuse to work with: everything depends on how well the documentation is written versus what rustdoc can extract automatically, and when that fails I have to decide how much I'm willing to dig into macros. I also feel this design would encourage bad practices by using a more JUnit-like style of global and local setup and teardown, making it likely that people will put too much in the global one.
Macros might be a little obtuse to work with, but a builder-like API, on the other hand, feels excessive. I really understand your concerns; autocompletion, formatting, and rustdocs don't work that well within macros. However, I took care that the compiler error messages are as helpful as they can be and guide you a little through the macro.

That's what I've done in the README: the macro is completely documented there. I'm going to update the outdated library documentation and add the missing docs for some library functions, so from a documentation point of view there should be nothing missing. I think the current implementation delivers all the functionality needed to comfortably benchmark binaries with reliable results. I'm going to look into a builder-like API as an alternative to the macro API, but I'll postpone that until after merging this branch.

All I can do is provide the tools to create different kinds of setups for different kinds of binaries.
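For comparison, a builder-like alternative to the macro, as floated above, might hypothetically look like the following. Every type and method name here is invented for illustration; none of them exist in iai-callgrind:

```rust
/// Invented types sketching what a builder-style configuration could look
/// like as an alternative to the `iai_callgrind::main!` macro.
#[derive(Debug, Default, PartialEq)]
struct BinaryBenchmark {
    cmd: String,
    args: Vec<String>,
    sandbox: bool,
    expected_exit_code: Option<i32>,
}

impl BinaryBenchmark {
    fn new(cmd: &str) -> Self {
        // Sandboxing on by default, matching the macro's current behavior.
        BinaryBenchmark { cmd: cmd.to_string(), sandbox: true, ..Default::default() }
    }
    fn arg(mut self, arg: &str) -> Self {
        self.args.push(arg.to_string());
        self
    }
    fn sandbox(mut self, enabled: bool) -> Self {
        self.sandbox = enabled;
        self
    }
    fn exit_with(mut self, code: i32) -> Self {
        self.expected_exit_code = Some(code);
        self
    }
}

fn main() {
    // Rough equivalent of the macro invocation from the earlier comment.
    let bench = BinaryBenchmark::new("typos")
        .arg("words.csv")
        .sandbox(false)
        .exit_with(2);
    println!("{bench:?}");
}
```

Builders like this give rustdoc and autocompletion something concrete to work with, at the cost of more API surface than a single macro.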
Thanks for working to improve on iai!
I'm looking to do deterministic end-to-end benchmarking. It'd be great if I could have my target binary and flags passed directly to callgrind. First off, I'm unsure if callgrind is recursive for process spawns. Even if it is, I'd still have the overhead of the "bench" function.