Testing strategy for rustc_save_analysis #34481

Closed
aochagavia opened this issue Jun 26, 2016 · 9 comments
Labels
  • A-save-analysis (Area: saving results of analyses such as inference and borrowck results to a file)
  • A-testsuite (Area: the testsuite used to check the correctness of rustc)
  • C-enhancement (Category: an issue proposing an enhancement or a PR with one)
  • T-dev-tools (relevant to the dev-tools subteam, which will review and decide on the PR/issue)

Comments

@aochagavia
Contributor

Context

There is currently only one test for save_analysis. It only checks that the compiler doesn't crash when dumping crate information to a text file. The lack of further tests allows bugs to be (re)introduced (e.g. #33213) and makes it hard to modify the API with confidence. Increasing the reliability of save_analysis is important, since the future RLS will be built on top of it.

Requirements

Ideally, we should come up with a clear testing architecture to deal with this problem. It would need to:

  • Run the compiler on a given program and extract the analysis information: a possible way to do this would be to write a struct that implements the Dump trait and collects all available information about a program in Vecs (see below).
struct Analysis {
    pub struct_defs: Vec<StructData>,
    pub trait_defs: Vec<TraitData>,
    pub globs: Vec<GlobData>,
    // ... one Vec per kind of data produced by the Dump trait
}
  • Compare the obtained analysis information to previously defined criteria. We could start by testing names, spans and correct cross-referencing.
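
As a rough sketch of the collector idea from the first bullet (the trait and data types below are simplified stand-ins rather than the real rustc_save_analysis definitions, and the callback names are assumptions):

// Simplified stand-ins for the save-analysis data types; the real
// StructData/TraitData/GlobData carry ids, spans, docs, etc.
#[derive(Debug, Clone)]
pub struct StructData { pub name: String }
#[derive(Debug, Clone)]
pub struct TraitData { pub name: String }
#[derive(Debug, Clone)]
pub struct GlobData { pub names: Vec<String> }

// Stand-in for the Dump trait: one callback per kind of item.
pub trait Dump {
    fn struct_data(&mut self, data: StructData);
    fn trait_data(&mut self, data: TraitData);
    fn glob(&mut self, data: GlobData);
}

#[derive(Default)]
pub struct Analysis {
    pub struct_defs: Vec<StructData>,
    pub trait_defs: Vec<TraitData>,
    pub globs: Vec<GlobData>,
}

// A collector that stores everything it is handed, so that tests can
// make assertions against the Vecs instead of parsing a text dump.
#[derive(Default)]
pub struct CollectingDump {
    pub analysis: Analysis,
}

impl Dump for CollectingDump {
    fn struct_data(&mut self, data: StructData) {
        self.analysis.struct_defs.push(data);
    }
    fn trait_data(&mut self, data: TraitData) {
        self.analysis.trait_defs.push(data);
    }
    fn glob(&mut self, data: GlobData) {
        self.analysis.globs.push(data);
    }
}

A test would then run the compiler with such a collector and assert directly on the contents of the Vecs.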

Integrating with rustc's testing system

Currently, the only test is in src/test/run-make/save-analysis. This makes sense for checking whether ICEs are triggered, but is probably unsuitable for the fine-grained testing approach described above.

We could follow the approach of rustdoc and the pretty printer. For instance, we could add a new directory (src/test/save-analysis) and put the tests there. We should probably begin with single-file tests.

A possible way to encode the constraints of the tests is through comments, as shown below:

fn foo() {
// ^^^, function definition, name: foo
}

fn main() {
// ^^^^, function definition, name: main
    foo();
 // ^^^, function call, name: foo, references: 1.3-1.5
}

Notation:

  • ^^^: the span of the item.
  • 1.3-1.5: a span beginning on the third column of the first line and ending on the fifth column of the first line.
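
A minimal sketch of how a harness could parse this notation, assuming the comment always sits directly below the line it annotates (the Expectation type and the column convention are illustrative only):

/// One parsed annotation: a column range on the source line directly
/// above the comment, plus the free-form expectation text.
#[derive(Debug, PartialEq)]
pub struct Expectation {
    pub line: usize,      // 1-based line number of the annotated line
    pub col_start: usize, // 1-based column of the first caret
    pub col_end: usize,   // 1-based column of the last caret
    pub text: String,     // e.g. "function definition, name: foo"
}

/// Scan a test file for `// ^^^, ...` comments and turn them into
/// Expectations. Columns are 1-based here; the exact convention
/// (0- or 1-based) would have to be pinned down for real tests.
pub fn parse_expectations(source: &str) -> Vec<Expectation> {
    let mut out = Vec::new();
    for (idx, line) in source.lines().enumerate() {
        let comment_pos = match line.find("//") {
            Some(p) => p,
            None => continue,
        };
        let rest = &line[comment_pos + 2..];
        let trimmed = rest.trim_start();
        if !trimmed.starts_with('^') {
            continue;
        }
        if idx == 0 {
            continue; // nothing above the comment to annotate
        }
        // 0-based column of the first caret within the whole line.
        let caret_offset = comment_pos + 2 + (rest.len() - trimmed.len());
        let caret_len = trimmed.chars().take_while(|&c| c == '^').count();
        let text = trimmed[caret_len..].trim_start_matches(',').trim().to_string();
        out.push(Expectation {
            // The comment is on 0-based line `idx`, so the annotated
            // line above it has 1-based number `idx`.
            line: idx,
            col_start: caret_offset + 1,
            col_end: caret_offset + caret_len,
            text,
        });
    }
    out
}

The harness could then compare these Expectations against the names and spans in the save-analysis output.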
@aochagavia
Contributor Author

cc @nrc. What do you think of the testing approach? Who can I ask for feedback on the integration with the testing system?

@brson
Contributor

brson commented Jun 27, 2016

Looks reasonable to me. Agree it should be its own set of tests.

@brson brson added the A-testsuite Area: The testsuite used to check the correctness of rustc label Jun 27, 2016
@aochagavia
Contributor Author

@brson What should I do to let the build system run the tests? Do we need to mess with make files?

@nrc
Copy link
Member

nrc commented Jun 27, 2016

Agree on a separate set of tests. Beyond that I think there are a few fundamental questions to address, starting with: what output should be tested?

  • We can test the API, which is probably the lowest level to test.
  • We could test the internal dumpers, which is probably the maximal amount of info to test with the minimal amount of processing.
  • Or we could test the output of -Zsave-analysis, which is probably the easiest thing to test (since it needs the least automation) but involves the maximal amount of processing (which I guess might also be an advantage).

We then need to think about how tests can be generated. My worry here is that we'll put a lot of effort into the infrastructure, but that without the leg-work to actually write individual tests, it will go to waste.

I think my preference is to test the -Zsave-analysis output. This should be quite easy and ergonomic - a test file would be the program to be compiled plus a JSON struct which must be present in the output (or at least there must be a super-struct of it in the output). I think that makes tests easiest to hand-write and auto-generate. It might also be nice to have some way to specify spans by inline comments, rather than having to spell them out inside the JSON, but that could be step 2.
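
A sketch of that "must be present in the output" check, assuming the JSON is loaded with serde_json (the helper is made up for illustration; nothing like it exists in-tree):

use serde_json::Value;

/// True if `expected` is structurally contained in `actual`: every field
/// or element listed in `expected` must be present (recursively) in
/// `actual`, while `actual` may carry arbitrary extra fields.
pub fn is_subset(expected: &Value, actual: &Value) -> bool {
    match (expected, actual) {
        (Value::Object(exp), Value::Object(act)) => exp
            .iter()
            .all(|(k, v)| act.get(k).map_or(false, |a| is_subset(v, a))),
        // Each expected element must match some actual element
        // (order-independent, minimum-set semantics).
        (Value::Array(exp), Value::Array(act)) => exp
            .iter()
            .all(|v| act.iter().any(|a| is_subset(v, a))),
        // Leaves (strings, numbers, bools, null) must match exactly.
        _ => expected == actual,
    }
}

A test would then consist of the program plus an expected JSON fragment, and the harness would assert is_subset(&expected, &actual) against the parsed -Zsave-analysis output.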

@nrc
Member

nrc commented Jun 27, 2016

cc @rust-lang/tools

@alexcrichton
Member

So far we've had quite a lot of success when the process of adding a test looks like:

  • Drop a file in a directory
  • Add some comments here and there about the assertions

So along those lines the testing strategy sounds great to me. This'll probably just be a small addition to the compiletest test runner and a new src/test/save-analysis directory with a bunch of test cases. There are already a number of specialized modes (e.g. run-make, codegen, incremental, ...), so adding another shouldn't be too hard (and there are lots of examples to follow).

I agree with @nrc though that we wouldn't want the test cases to be too onerous to write, but @nikomatsakis has drummed up a script for the ui tests that generates the expected output, so perhaps that could be done here as well? That is, have a script that generates the output once, which can then be hand-verified and committed to prevent regressions?
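
For illustration, a record-then-compare harness along those lines might look like this (the save-analysis output path, the rustc flags, and the exact-text comparison are assumptions; a real version would be a compiletest mode and would likely use subset matching rather than an exact comparison):

use std::fs;
use std::path::Path;
use std::process::Command;

/// Record-then-compare sketch: with `bless` set, the current output is
/// recorded as the expectation; otherwise it is compared against it.
pub fn check_or_bless(test_src: &Path, expected: &Path, bless: bool) -> Result<(), String> {
    // Compile the test program with save-analysis enabled.
    let status = Command::new("rustc")
        .args(["-Zsave-analysis", "--crate-type=lib"])
        .arg(test_src)
        .status()
        .map_err(|e| e.to_string())?;
    if !status.success() {
        return Err(format!("rustc failed on {}", test_src.display()));
    }

    // Assumed output location; the real path depends on the compiler.
    let actual = fs::read_to_string("save-analysis/test.json").map_err(|e| e.to_string())?;

    if bless {
        // First run or intentional update: record the output so it can be
        // hand-verified once and then committed as the expectation.
        fs::write(expected, &actual).map_err(|e| e.to_string())?;
        return Ok(());
    }

    let want = fs::read_to_string(expected).map_err(|e| e.to_string())?;
    if want.trim() != actual.trim() {
        return Err(format!("output differs from {}", expected.display()));
    }
    Ok(())
}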

@aochagavia
Contributor Author

I have added a WIP implementation on #34522

@Mark-Simulacrum Mark-Simulacrum added the C-enhancement Category: An issue proposing an enhancement or a PR with one. label Jul 25, 2017
@nrc nrc added the T-dev-tools Relevant to the dev-tools subteam, which will review and decide on the PR/issue. label Jan 28, 2018
@nrc
Member

nrc commented Jan 29, 2018

Jotting down my recent thoughts on testing the save-analysis output (we still have nothing beyond the smoke tests in run-make):

Requirements:

  • test arbitrary code
  • test that info appears in the save-analysis output
  • don't require exact text match of save-analysis output
  • test that info appears at a given location (can assume non-macro to start with)
  • associate a def and ref (possibly in different files) without specifying an id

I think an ideal way of doing this is to have a test directory with one program per file, then use comments to identify spans and the info we expect at a given span. However, I'm not sure how to specify more advanced info such as relating a def to a ref or that we expect some info to be the children of other info, etc.

An alternative would be to match source files with expected output files (by filename, I expect), but I think we'd need to normalise for whitespace and other JSON structure, and to allow wildcards and possibly variables (e.g., saying that the id field of a def is the same as that of a ref without specifying the value). We'd probably want the expected output to be a minimum expected set rather than a complete set, too.
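
A sketch of that kind of matching, building on serde_json: "_" as a wildcard and "$name" placeholders that bind on first use, so a def and a ref can be required to share an id without spelling it out. The placeholder syntax and the helper itself are assumptions, not existing infrastructure:

use std::collections::HashMap;
use serde_json::Value;

/// Does `expected` match `actual`? "_" matches any value, and "$name"
/// placeholders bind to whatever they first match and must then match
/// the same value everywhere else (e.g. a shared def/ref id). Objects
/// and arrays use minimum-set semantics: extra data in `actual` is fine.
pub fn value_matches(
    expected: &Value,
    actual: &Value,
    bindings: &mut HashMap<String, Value>,
) -> bool {
    match (expected, actual) {
        (Value::String(s), _) if s == "_" => true,
        (Value::String(s), _) if s.starts_with('$') => match bindings.get(s) {
            Some(bound) => bound == actual,
            None => {
                bindings.insert(s.clone(), actual.clone());
                true
            }
        },
        (Value::Object(exp), Value::Object(act)) => exp
            .iter()
            .all(|(k, v)| act.get(k).map_or(false, |a| value_matches(v, a, bindings))),
        (Value::Array(exp), Value::Array(act)) => exp.iter().all(|v| {
            // Note: a failed candidate can leave stray bindings behind;
            // a real implementation would need to backtrack properly.
            act.iter().any(|a| value_matches(v, a, bindings))
        }),
        _ => expected == actual,
    }
}

With minimum-set semantics, the expected file only needs to list the defs and refs the test actually cares about.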

@jyn514
Member

jyn514 commented Feb 19, 2023

Save-analysis has been removed from the compiler.

@jyn514 jyn514 closed this as not planned Feb 19, 2023
@jyn514 jyn514 added the A-save-analysis Area: saving results of analyses such as inference and borrowck results to a file. label Feb 19, 2023