Testing strategy for rustc_save_analysis #34481
Comments
cc @nrc. What do you think of the testing approach? Who can I ask for feedback on the integration with the testing system?
Looks reasonable to me. Agree it should be its own set of tests.
brson added the A-testsuite label on Jun 27, 2016
@brson What should I do to let the build system run the tests? Do we need to mess with make files?
Agree on a separate set of tests. Beyond that, I think there are a few fundamental questions to address, starting with: what output should be tested? We could test the API, which is probably the lowest level to test; we could test the internal dumpers, which gives probably the maximal amount of info with the minimal amount of processing; or we could test the output of -Zsave-analysis, which is probably the easiest thing to test (since it needs the least automation) but involves the maximal amount of processing (which I guess might also be an advantage).

We then need to think about how tests can be generated. My worry here is that we'll put a lot of effort into the infrastructure, but that without the legwork to actually write individual tests, it will go to waste.

I think my preference is to test the -Zsave-analysis output. This should be quite easy and ergonomic: a test file would be the program to be compiled plus a JSON struct which must be present in the output (or at least, a super-struct of it must be present in the output). I think that makes tests easiest to hand-write and auto-generate. It might also be nice to have some way to specify spans by inline comments, rather than having to spell them out inside the JSON, but that could be step 2.
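To make that concrete, here is a hedged sketch of what such a test file could look like. The embedded expected-json directive and the field names (defs, kind, name) are assumptions for illustration; the real save-analysis JSON schema is richer and may use different keys.

```rust
// Hypothetical test file: a small program followed by a JSON fragment
// that must appear (as a sub-structure) in the -Zsave-analysis output.
// A harness would compile this file, parse the emitted JSON, and check
// that it contains a super-struct of the expected fragment.

fn main() {
    let x = 42;
    println!("{}", x);
}

// expected-json: {
//     "defs": [
//         { "kind": "Function", "name": "main" },
//         { "kind": "Local",    "name": "x" }
//     ]
// }
```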
cc @rust-lang/tools
We've had a pretty big amount of success so far when the process of adding a test is kept very lightweight. So along those lines the testing strategy sounds great to me; this'll probably just be a small addition.

I agree with @nrc, though, that we wouldn't want the test cases to be too onerous to write, but @nikomatsakis has drummed up a script for the ui tests where you can generate the expected output, so perhaps that could be done here as well? That is, have a script that generates the output once; it can be hand-verified, and then it prevents regressions.
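A minimal sketch of that generate-once / check-later idea, assuming a hypothetical harness helper and an opt-in BLESS environment variable (none of this is the actual compiletest or ui-test implementation):

```rust
use std::{env, fs, path::Path, process};

/// Hypothetical regression check: compare freshly generated save-analysis
/// JSON against a checked-in expected file, or regenerate ("bless") it.
fn check_or_bless(generated: &str, expected_path: &Path) {
    if env::var_os("BLESS").is_some() {
        // Overwrite the expected output; the diff should be hand-verified
        // before it is committed.
        fs::write(expected_path, generated).expect("failed to write expected output");
        return;
    }
    let expected = fs::read_to_string(expected_path).expect("missing expected output file");
    if generated.trim() != expected.trim() {
        eprintln!("save-analysis output changed; rerun with BLESS=1 if the change is intended");
        process::exit(1);
    }
}
```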
I have added a WIP implementation in #34522.
alexcrichton referenced this issue on Jun 28, 2016: WIP: infrastructure for testing save-analysis #34522 (closed)
Mark-Simulacrum added the C-enhancement label on Jul 25, 2017
nrc added the T-dev-tools label on Jan 28, 2018
Jotting down my recent thoughts on testing the save-analysis output (we still have nothing beyond the smoke tests in run-make).

I think an ideal way of doing this is to have a test directory with one program per file, then use comments to identify spans and the info we expect at a given span. However, I'm not sure how to specify more advanced info, such as relating a def to a ref, or that we expect some info to be the children of other info, etc.

An alternative would be to match source files with expected output files (by filename, I expect), but I think we'd need to normalise for whitespace and other JSON structure, and to allow wildcards and possibly variables (e.g., saying that the id field of a def is the same as that of a ref without specifying the value). We'd probably want the expected output to be a minimum expected set, rather than a complete set, too.
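One hypothetical shape for those more advanced comment annotations is sketched below; the @def / @ref directives and their attribute syntax are invented for illustration, not an existing harness feature.

```rust
// Hypothetical annotated test: `@def` binds a name to the id of the
// definition on the annotated line, and `@ref` asserts that another span
// resolves to that same id, without spelling the id out.

fn foo() {}      // @def foo_fn kind=Function name=foo

fn main() {
    foo();       // @ref foo_fn
}
```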
aochagavia commented on Jun 26, 2016
Context
There is currently only one test for save_analysis. This test ensures that the compiler doesn't crash when dumping crate information as a text file. The lack of further tests allows bugs to be (re)introduced (e.g. #33213) and makes it difficult to make modifications to the API with confidence. It is important to increase the reliability of save_analysis, since the future RLS will be implemented on top of it.

Requirements
Ideally, we should come up with a clear testing architecture to deal with this problem. One possible approach is a struct that implements the Dump trait and collects all available information about a program in Vecs (see below).
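As a rough illustration of that idea, here is a minimal sketch. The Dump trait and the data structs below are local stand-ins; the real trait in the compiler has a much larger method set and different data types.

```rust
/// Stand-in data types; the real save-analysis data structs carry much more
/// information (ids, spans, qualified names, docs, ...).
#[derive(Debug, Clone, PartialEq)]
pub struct FunctionData {
    pub name: String,
}

#[derive(Debug, Clone, PartialEq)]
pub struct VariableData {
    pub name: String,
}

/// Stand-in for the save-analysis Dump trait: one callback per item kind.
pub trait Dump {
    fn function(&mut self, _data: FunctionData) {}
    fn variable(&mut self, _data: VariableData) {}
}

/// A dumper used only by tests: it records everything it is given, so that
/// assertions can later be written against the collected Vecs.
#[derive(Default)]
pub struct TestDump {
    pub functions: Vec<FunctionData>,
    pub variables: Vec<VariableData>,
}

impl Dump for TestDump {
    fn function(&mut self, data: FunctionData) {
        self.functions.push(data);
    }
    fn variable(&mut self, data: VariableData) {
        self.variables.push(data);
    }
}
```

A test would then run the analysis over a small program with a TestDump and assert on the contents of the collected functions and variables.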
Integrating with rustc's testing system

Currently, the only test is in src/test/run-make/save-analysis. This makes sense for checking whether ICEs are triggered, but is probably unsuitable for the fine-grained testing approach described above.

We could follow the approach of rustdoc and the pretty printer. For instance, we could add a new directory (src/test/save-analysis) and put the tests there. We should probably begin with single-file tests.

A possible way to encode the constraints of the tests is through comments, as shown below:
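This sketch is only an illustration: the exact annotation format (the caret comments and the line.column span references) is a hypothetical syntax, not something the compiler's test suite implements.

```rust
fn foo() {}
// ^^^ def: Function, name: foo

fn main() {
    foo();
    // 5.5-5.7: ref to Function `foo`, defined at 1.4-1.6
}
```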
Notation:
^^^: the span of the item.
1.3-1.5: a span beginning on the third column of the first line and ending on the fifth column of the first line.