
Custom test frameworks #816

alexcrichton opened this issue Feb 6, 2015 · 13 comments


@alexcrichton alexcrichton commented Feb 6, 2015

The compiler currently supports the --test compiler flag which will create a binary linked to the distributed test crate. The crate, however, is quite simplistic. There are a good number of possible extensions to the testing infrastructure which we'd love to have but the internal test crate may not be the best place for them.

It would be great if the compiler had an interface such that a custom test framework was supported. This could perhaps be a plugin, or it could perhaps be some other custom method. The resulting strategy should likely also have some Cargo integration to make using custom test frameworks as seamless as possible.

Some related topics (feel free to add more!)


@alexcrichton alexcrichton commented Feb 6, 2015

Another point to consider here is benchmarking infrastructure. Right now our benchmarking requires writing `extern crate test`, but we don't necessarily want to stabilize that crate just yet. A custom framework interface would give us a nice release valve for distributing the benchmarking infrastructure separately.


@ekiwi ekiwi commented May 8, 2015

I would like to add an additional use-case:

A deeply embedded target like a microcontroller has the following constraints:

  • only libcore available
  • only stack memory, no heap allocation
  • restricted code size (common flash sizes are 128 kB)
  • no support for stack unwinding, thus no panic
  • because of size constraints, stack unwinding might never be feasible
  • heap allocation is not too hard to make work; avoiding it just makes this
    more universal

Thus a custom test harness is needed. It should have the following properties:

  • does not allocate dynamic memory
  • does not use panic
  • returns compact error descriptions (maybe error numbers)
  • makes it possible to split up tests into several binaries
    because one monolithic binary for all tests might not fit into program memory
  • there might be a component running on the host pc, that loads the
    tests onto the microcontroller and collects test results e.g. via USB

The goals/observations here are:

  • the current test syntax should be kept the same (I think it's nice to
    have a common way of writing tests in the Rust community)
  • the test harness should be able to be made up of at least two different
    binaries: one that runs on the host PC and another that runs on the target
  • there needs to be a mechanism to compile only a certain subset of tests,
    so that we can make sure they still fit on the target

Thus, for this use case to work, it does not suffice to be able to use a custom libtest: the mechanism for creating test runners, which (from what I could tell) is hard-coded into rustc, needs to gain some more flexibility.


@reem reem commented May 8, 2015

cc me


@ruuda ruuda commented May 8, 2015

One thing that I would love to see is support for test samples. I have a directory with a lot of files that a certain function should be able to process. At the moment, I test this in a single test by enumerating all files and calling the function, but this has several disadvantages:

  • If the test fails, it is hard to see for which sample it failed; there is no individual ok/fail per test sample. It is possible to print this to stdout, but that is suppressed by default.
  • It is not possible to see intermediate progress. It can be done with stdout but it is suppressed by default.
  • It is not possible (within a single test) to test multiple samples in parallel.

A framework that can run the same test for multiple inputs, and report progress and ok/fail per input, would be especially useful for regression tests, similar to the compile-fail and compile-pass tests for Rust’s make check. Crates that could benefit from this include parsers, compilers, decoders (images, audio, video, …), deserialisers, etc.

I am not sure what the api would look like. Perhaps something like

```rust
#[test(files = "testsamples/*.dat")]
fn parse_pass(path: &Path) { … }
```

@ekiwi ekiwi commented May 9, 2015

So I was a little bit tired yesterday; I have rephrased my use case above to make it easier to read. If anyone is interested, I could devote some more time to working out how exactly running tests on deeply embedded targets could work and what implications this would have for rustc.


@grissiom grissiom commented May 11, 2015

Hi @ekiwi, I'm interested in a test framework for deeply embedded targets. But I'm afraid we are facing two questions here:

1. Get Rust running on the board (with or without an RTOS as backend).
2. Implement the test framework.

For the split tests, I think the test framework should define multiple "test targets" such as "test1", "test2", "test_boot", "test_led", and so on, and build a different binary for each target.
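For what it's worth, Cargo's manifest can already express a binary-per-target split via multiple `[[test]]` sections, and `harness = false` opts each one out of the default libtest runner. A sketch, reusing the target names from the comment above:

```toml
# Cargo.toml sketch: one test binary per "test target".
[[test]]
name = "test_boot"
path = "tests/test_boot.rs"
harness = false   # provide our own main() / runner instead of libtest

[[test]]
name = "test_led"
path = "tests/test_led.rs"
harness = false
```

Each binary can then be built and flashed separately, which addresses the "monolithic binary does not fit into flash" constraint.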


@ekiwi ekiwi commented May 11, 2015

@grissiom cool! I've opened a repository for further discussion, so that we do not have to spam this issue any further.

> 1. Get Rust running on the board (with or without an RTOS as backend).

While there is still some work to be done to get programs built with only cargo and rustc running on a microcontroller, it is already possible if you don't mind linking to C code.
See e.g. Antoin's Rust Cortex M4 Demo.

I will try to write some more about the different efforts to run Rust on microcontrollers in the repository mentioned above.

> 2. Implement the test framework.

That's what I'm most interested in at the moment. As mentioned above, Rust code already runs well enough on microcontrollers to start that work.

> For the split tests, I think the test framework should define multiple "test targets" such as "test1", "test2", "test_boot", "test_led" and so on and make different binaries for each target.

That might be a possible solution. However, I really want to stick to the standard Rust test syntax. If there is additional functionality that needs to be added, we should discuss it with the Rust community.

So @grissiom and anyone else who is interested: feel free to open issues/pull requests on utest-rs!


@lilith lilith commented Sep 10, 2016

I would like to add a feature to the standard test runner, namely:

  1. Inject #![feature(alloc_system)] and extern crate alloc_system;
  2. Invoke the test executable through valgrind (on linux/mac), or enable heap checking (windows).

At the moment, it would seem that I need to make a custom test framework (or fork Cargo) to achieve these goals?
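Point 2 can arguably be approximated today without a full custom framework: Cargo supports a per-target `runner` in `.cargo/config`, which wraps the test executable in an arbitrary command. A sketch (the target triple is just an example, and this only covers the valgrind half; injecting `alloc_system` would still require changes in the crate itself):

```toml
# .cargo/config sketch: run test binaries under valgrind on this target.
[target.x86_64-unknown-linux-gnu]
runner = "valgrind"
```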


@jsgf jsgf commented Nov 3, 2016

To integrate into our test system, I need to be able to:

  1. generate a list of all the tests/benchmarks without actually running them (machine parseable)
  2. generate the test results in a machine-parseable form (including debug output/stack traces from failures)
  3. run only one test at a time (which should be possible with the info from 1)

This seems like a fairly small extension on the current test framework; I'm not sure we need much more than that (all the heavy lifting is elsewhere).


@kevincox kevincox commented Nov 4, 2016

My use case is that I have a directory of test files. Right now I am looping over them in one "test" and reporting errors. This works but has a number of downsides.

  • Passing tests don't have their output suppressed, so I have to manually suppress/print debugging information when I detect a failure.
  • I have to keep track of whether a test has failed so that I can panic at the end.
  • If I want granular test running, I would have to implement that manually.

Basically I'm recreating a bunch of work that the default test runner already does.

Right now I see two workarounds:

  • Use a build script to generate the test functions.
  • Write a macro and list the cases manually.

Both of which have pretty large downsides.

For my use case something as simple as a #[test_collector] annotation would work. That function could then return a `Vec<Box<FnOnce() -> ()>>` or something similar, which would allow the test runner to run these the same way it runs top-level tests.

This is a different approach than the one @jsgf suggested, as it extends the test runner rather than replacing it. Still, this seems like a huge enhancement, and it doesn't preclude using a custom runner either. Also, you can already build a custom runner on nightly (you can implement your own `test`-like attribute) because Rust tests are "just a binary". So while making that easier would be a good thing, I think it is a separate issue.


@devurandom devurandom commented May 21, 2017

Would something like cargo-test-junit fit into the idea of "test frameworks" you are designing?


@jonhoo jonhoo commented Dec 8, 2017

I wrote up a summary of all the discussions I could find about a custom testing framework.

@petrochenkov petrochenkov added the T-libs label Jan 30, 2018

@Centril Centril commented Feb 24, 2018

As this is currently being discussed in RFC #2318 and in other places, I'm closing this issue in favor of those discussions.
