
test.py error tests should support context messages from the analyzer #44905

Closed

stereotype441 opened this issue Feb 9, 2021 · 11 comments

Assignee: stereotype441
Labels: area-analyzer, P2

@stereotype441 (Member) commented Feb 9, 2021

(Parent issue #44897)

As of a42244f, the analyzer now generates context messages in some circumstances explaining why type promotion failed. These context messages are not currently exposed in the analyzer's machine-readable output format, and consequently the test.py error test infrastructure can't be used to test them. We would like to expose them so that we can test them using language tests.

We discussed this yesterday and we believe the best way to go about supporting them is to first add a JSON output format to the command-line analyzer containing these context messages (as well as the other information supported by the analyzer's current machine-readable output format), and then to switch test.py over to consuming this new JSON format.

Related: #44902, which will probably use the same JSON format.
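
To make that concrete, here is a rough sketch of how context messages might nest under each diagnostic in such a JSON format (the field names are illustrative only, not a committed schema):

    {
      "version": 1,
      "diagnostics": [
        {
          "code": "unchecked_use_of_nullable_value",
          "severity": "ERROR",
          "type": "COMPILE_TIME_ERROR",
          "location": {"file": "test.dart", "line": 7, "column": 10, "length": 6},
          "problemMessage": "'isEven' can't be unconditionally accessed because the receiver can be 'null'.",
          "contextMessages": [
            {
              "location": {"file": "test.dart", "line": 2, "column": 8, "length": 2},
              "message": "'_i' refers to a property so it couldn't be promoted."
            }
          ]
        }
      ]
    }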

stereotype441 added the area-analyzer label Feb 9, 2021
@stereotype441 (Member, Author) commented:

I've initially classified this issue as area-analyzer since the JSON work needs to happen first. I'm not sure who is the best assignee for this bug; maybe @bwilkerson?

scheglov added the P2 label Feb 9, 2021
@devoncarew (Member) commented Feb 9, 2021

Paul, I'm trying to get a sense of how to prioritize this work. Is this about being able to test these specific context messages, or about testing them through test.py specifically? I feel like we must already have a way to test context messages (via the analyzer's package:test tests or something similar). @bwilkerson would likely know for sure.

@bwilkerson (Member) commented:

We can definitely write analyzer-specific tests that test for context messages. Not being able to test the analyzer via test.py won't stop us from testing the functionality, but it will make someone do extra work to write those tests (compared to the alternative, which is to copy a line in the test files test.py runs). It might well be less work overall to support a JSON format from the command-line tool and update test.py to use that format, especially if we want to eventually have that functionality anyway.
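
For reference, the "copy a line" workflow looks roughly like this: static error tests under tests/language mark their expected diagnostics with inline comments, as in the sketch below (the error code and messages shown are approximate):

    void f(int? i) {
      i.isEven;
    //  ^^^^^^
    // [analyzer] COMPILE_TIME_ERROR.UNCHECKED_USE_OF_NULLABLE_VALUE
    // [cfe] Property 'isEven' cannot be accessed on 'int?' because it is potentially null.
    }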

@stereotype441 (Member, Author) commented:

> We can definitely write analyzer-specific tests that test for context messages. Not being able to test the analyzer via test.py won't stop us from testing the functionality, but it will make someone do extra work to write those tests (compared to the alternative, which is to copy a line in the test files test.py runs). It might well be less work overall to support a JSON format from the command-line tool and update test.py to use that format, especially if we want to eventually have that functionality anyway.

@devoncarew @bwilkerson agreed. The "why not promoted" logic is going to have to address a significant number of specific cases: roughly, all the reasons why a promotion might be prevented or cancelled, times all the different errors that could arise from a non-promoted variable. I wouldn't be surprised if that works out to be something like 100 test cases in total. Implementing support for each of those cases is going to require work in _fe_analyzer_shared, analyzer, and front_end. It will be a lot easier to be confident that the behavior is consistent between the analyzer and front_end if I can test each case once in tests/language, rather than having to duplicate each test case in the analyzer's and CFE's individual testing frameworks (which are structured so differently that it would be a nightmare to keep them in sync).
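
To give a concrete example of one such case: a nullable field can't be promoted by a null check because it's a property, and any use of the unpromoted value then produces both an error and a context message (diagnostic wording approximate):

    class C {
      int? _i;
    }

    void f(C c) {
      if (c._i != null) {
        c._i.isEven;
        // error: 'isEven' can't be unconditionally accessed because the
        // receiver can be 'null'.
        // context: '_i' refers to a property so it couldn't be promoted.
      }
    }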

Given that the purpose of the feature is to help the user understand a subtle part of the language that often confuses people, I think it's especially important for the CFE and analyzer behavior to be consistent with each other and to avoid mistakes. So in terms of priority, I think we should regard the ability to test it in a uniform way using test.py as a "must have" part of the feature.

@devoncarew (Member) commented:

OK, thanks, that info helps. It sounds like this would improve the test coverage of the feature and reduce the cost of implementing it.

I'm asking because dart analyze is not currently run by test.py; dartanalyzer is. Plumbing dart analyze into test.py might be as expensive as getting it to understand dartanalyzer's batch mode, or swizzling test.py over to some other method of efficient batch testing.

We'd probably want to explore that soon in order to figure out the overall cost (or decide to backport the JSON format to the dartanalyzer command-line tool, which I'd generally like to avoid). cc @munificent

Potentially, we could just extend the batch mode format to include context messages. I suspect (but don't know for sure) that it's used only by test.py.

@devoncarew (Member) commented:

Here's the current format of batch mode: SEVERITY|TYPE|ERROR_CODE|FILE_PATH|LINE|COLUMN|LENGTH|ERROR_MESSAGE.
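
A single diagnostic in that format might look like this (values hypothetical):

    ERROR|COMPILE_TIME_ERROR|UNCHECKED_USE_OF_NULLABLE_VALUE|/path/to/test.dart|7|10|6|'isEven' can't be unconditionally accessed because the receiver can be 'null'.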

@bwilkerson (Member) commented:

Yep. That's the "machine" format, and it isn't very easily extensible. Changes to it might break CI systems, though I don't know that for sure.

@devoncarew (Member) commented:

As a first pass, we might see how happy test.py is with just appending the context messages onto the end of that format.

@bwilkerson (Member) commented:

There are a couple of implications of appending the context messages:

  • We would no longer be able to choose whether or not to test for context messages on any given test, which is a capability I'd prefer to have.
  • All of the clients using the 'machine' format would see these changed messages, and I know that some of those clients have, at least in the past, scraped this output to post-process the diagnostics. Hopefully they're not using the messages, but if they are this might break them.

But if we do end up appending them, we'll need to make sure that the order of the context messages is stable.
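
Purely for illustration (this is not a settled format), appending might mean tacking each context message's location and text onto the end of the existing record:

    ERROR|COMPILE_TIME_ERROR|UNCHECKED_USE_OF_NULLABLE_VALUE|/path/to/test.dart|7|10|6|'isEven' can't be unconditionally accessed because the receiver can be 'null'.|/path/to/test.dart|2|8|'_i' refers to a property so it couldn't be promoted.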

@bwilkerson (Member) commented:

I created a separate issue to discuss the actual format of the output: #45068.

stereotype441 self-assigned this Mar 31, 2021
dart-bot pushed a commit that referenced this issue Apr 1, 2021
These tests just cover the CFE functionality. Once #44905 is addressed, I'll add test expectations for the analyzer.

Note: these tests uncovered two unrelated bugs: #45551 and #45552.

Bug: #44897
Change-Id: I465b157afc2cc15bcc5ce083aaa74614313a38a6
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/193581
Commit-Queue: Paul Berry <paulberry@google.com>
Reviewed-by: Leaf Petersen <leafp@google.com>
Reviewed-by: Bob Nystrom <rnystrom@google.com>
dart-bot pushed a commit that referenced this issue Apr 5, 2021
This allows the test runner to support context messages, for example the context error messages associated with the new "why not promoted" feature.

Bug: #44905
Change-Id: Ie342bde21c43641eafc5d117f328e9fde23c49bc
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/193740
Reviewed-by: Bob Nystrom <rnystrom@google.com>
Commit-Queue: Paul Berry <paulberry@google.com>
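
With that change in place, context message expectations in a language test look roughly like the sketch below (the numbered-marker syntax and the messages are approximate; see the test runner documentation for the exact format):

    class C {
      int? _i;
      //   ^^
      // [context 1] '_i' refers to a property so it couldn't be promoted.
    }

    void f(C c) {
      if (c._i != null) {
        c._i.isEven;
        //   ^^^^^^
        // [analyzer 1] COMPILE_TIME_ERROR.UNCHECKED_USE_OF_NULLABLE_VALUE
        // [cfe 1] Property 'isEven' cannot be accessed on 'int?' because it is potentially null.
      }
    }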
@stereotype441 (Member, Author) commented:

Fixed by c82c2f9.
