
Coverage for executables #252

Closed
JP-Ellis opened this issue Sep 18, 2019 · 14 comments
@JP-Ellis

When the crate is a binary crate, cargo tarpaulin will run all the tests within the crate, but is it possible to test the executable itself (with arguments)? Or similarly, can it handle a library with associated binaries (e.g. in src/bin/main.rs)?

Effectively, this would be in analogy to cargo run -- <args>, perhaps as cargo tarpaulin --run -- <args> or maybe a different command cargo tarpaulin-run -- <args>?

@JP-Ellis
Author

One comment:

It might be possible to have tests under crate/tests which execute the binary. For example:

use std::env;
use std::path;
use std::process;

/// Get the path to the binary under test.
fn bin() -> path::PathBuf {
    // Test executables live in target/<profile>/deps, so the
    // crate's binary sits one directory up.
    let root = env::current_exe()
        .unwrap()
        .parent()
        .unwrap()
        .parent()
        .unwrap()
        .to_path_buf();
    if cfg!(target_os = "windows") {
        root.join("my-binary.exe")
    } else {
        root.join("my-binary")
    }
}

#[test]
fn test_my_binary() {
    let mut cmd = process::Command::new(bin());
    cmd.arg("--help"); // placeholder: pass whatever arguments the binary expects

    match cmd.status() {
        Err(e) => {
            println!("Error: {}.", e);
            panic!();
        }
        Ok(status) => {
            println!("Exited with code {:?}.", status.code());
            assert!(status.success()); // or !status.success() if failure is expected
        }
    }
}

but cargo tarpaulin doesn't follow the spawned program (unlike kcov).

@xd009642
Owner

xd009642 commented Nov 3, 2019

I now have support for running tarpaulin on benchmarks and examples in the develop branch. Running on arbitrary binaries is still unsupported, but it's something I'm actively working towards.

@dbrgn

dbrgn commented Jan 21, 2020

I'm working on an application server project where we have a few unit tests but a lot of integration tests. The tests are written in Python and simply run against the Rust server on localhost.

It would be great if we could track coverage of the server binary (tarpaulin run) when running integration tests, and ideally merge the results together with the unit tests.

@pickfire

pickfire commented Jun 6, 2020

How does one run tests for project with multiple binaries?

@xd009642
Owner

There's a WIP PR for this if people want to test it or comment on the UI I proposed: #604

@xd009642
Owner

Also, as per my last comment on this repo, the benchmark and example handling has changed. I realised the semantics of cargo test --examples are to look for test-annotated functions in examples; it doesn't run the example binaries directly. The same applies to benchmarks, so both were changed to match cargo test behaviour.

@xd009642
Owner

xd009642 commented Nov 3, 2020

You can now do this via --command build, which will call cargo build, take the binary you built, and run it with the given args. It's still very much an "alpha" feature, but I've tried it on a few projects and it works. Also, while implementing it I fixed another issue, so I pushed out a release.

So feel free to try it in 0.16.0, and if there are any issues, comment on this more recent issue #507 😄
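Something like this might work in a tarpaulin.toml as well (untested sketch: the config name, key names, and the binary's arguments here are placeholders, not verified against the released config format):

```toml
# Illustrative only: "integration" and the args values are made up
[integration]
command = "build"
args = ["--port", "8080"]
```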

@dbrgn

dbrgn commented Nov 19, 2020

Thanks @xd009642! Excited to test this!

My integration test suite is written in Python. It simply makes calls against the main binary (a WebSocket server) while it is running.

If I understand things correctly, I would run tarpaulin with --command build and then run the test suite against the started server, right? The problem is that the server does not stop by itself, and if I send SIGINT with Ctrl+C then both the binary and tarpaulin are stopped. I guess I'd have to modify the server so that it can be stopped from the outside (e.g. with code only built when the "coverage-testing" feature is enabled)?

When running tarpaulin both with --command test and --command build, is there a way to merge the resulting coverage information?

@xd009642
Owner

xd009642 commented Nov 19, 2020

@dbrgn Yeah, I guess you'd have to modify it to handle a close message. There is a --forward-signals option for tarpaulin that forwards some signals to the test executable, but I don't think it could help here 🤔. And you can merge the two with a config file; I think this should work in a tarpaulin.toml:

[py_tests]
command = "build"

[rust_tests]
command = "test"

@xd009642
Owner

@dbrgn oh, another thought: you can use the more verbose logging to get the test PID and send the SIGINT to the test itself, bypassing tarpaulin. Then with --forward-signals it might work; if not, tarpaulin will possibly just complain about an unexpected signal and not give you any results 🤔

I need to figure out some smarter semantics for the signalling stuff... I've opened #622 to track any work on it

@dbrgn

dbrgn commented Nov 19, 2020

I now just shut down the server with std::process::exit(0) when I receive a certain message from a client (but of course only behind an optional feature flag). That seems to work! 🎉 (Depending on the size of the integration test suite, the timeout needs to be raised as well.) And finally I can confirm that while unit tests in my project only cover 30%, the integration tests cover 70% 🙂
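For anyone wanting to do the same, the shutdown hook boils down to something like this (a minimal sketch: the function, the feature name "coverage-testing", and the magic message are all specific to my setup, not anything tarpaulin provides):

```rust
use std::process;

/// Handle a client message. Behind the (project-specific) "coverage-testing"
/// feature, a magic message makes the server exit cleanly with status 0,
/// which lets tarpaulin finish and write out its coverage report.
fn handle_message(msg: &str) -> String {
    if cfg!(feature = "coverage-testing") && msg == "shutdown-for-coverage" {
        // A normal exit, so tarpaulin treats the run as successful.
        process::exit(0);
    }
    format!("echo: {}", msg)
}
```

The Python test suite then sends `shutdown-for-coverage` as its final message; in a release build without the feature flag, the message is handled like any other.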

@dbrgn

dbrgn commented Nov 19, 2020

@xd009642 after some more thought, maybe tarpaulin could allow running a custom shell command (hooks?) before the binary is started, after it has started, and after it has stopped? Maybe even with the PID being passed to the hooks. That way, the integration tests could automatically be started once the binary is up, and if the test suite has access to the PID it could send a signal after it's done.
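To make the idea concrete, the hook mechanism could be as simple as spawning a shell command with the PID exposed via an environment variable. A sketch (everything here is invented for illustration; nothing like this exists in tarpaulin today):

```rust
use std::io;
use std::process::{Command, ExitStatus};

/// Run a user-supplied hook command via the shell, exposing the traced
/// binary's PID through a hypothetical TARPAULIN_PID environment variable.
fn run_hook(hook_cmd: &str, child_pid: u32) -> io::Result<ExitStatus> {
    Command::new("sh")
        .arg("-c")
        .arg(hook_cmd)
        .env("TARPAULIN_PID", child_pid.to_string())
        .status()
}
```

Tarpaulin would call this with the command strings from the config at each lifecycle point (pre-start, post-start, post-stop), so a post-start hook could launch the test suite and a post-stop hook could collect artifacts.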

@xd009642
Owner

Maybe! I'd have to think about it. Feel free to open an issue for the feature where we can discuss it further 😄

@dbrgn

dbrgn commented Nov 19, 2020

Done! #623
