
Improve in-build Starlark Testing #6237

Closed · c-parsons opened this issue Sep 25, 2018 · 3 comments

c-parsons (Contributor) commented:

This is a tracking issue for work proposed in "Improving In-Build Starlark Testing".

@c-parsons c-parsons self-assigned this Sep 25, 2018
bazel-io pushed a commit that referenced this issue Oct 1, 2018
This new object is tied to a new semantic flag, --experimental_analysis_testing_improvements

Progress toward #6237.

RELNOTES: None.
PiperOrigin-RevId: 215265415
bazel-io pushed a commit that referenced this issue Oct 3, 2018
The new object is tied to --experimental_analysis_testing_improvements
This object will eventually be responsible for signaling to Bazel that a test success/failure action should be created on behalf of the current target. This change, however, only exposes this object as a new provider.

Progress toward #6237.

RELNOTES: None.
PiperOrigin-RevId: 215601605
bazel-io pushed a commit that referenced this issue Oct 4, 2018
…d of failing a build

This new functionality is tied to --experimental_allow_analysis_failures. This feature is designed to facilitate in-build (analysis-phase) testing rules.

Progress toward #6237.

RELNOTES: None.
PiperOrigin-RevId: 215820356
bazel-io pushed a commit that referenced this issue Oct 17, 2018
This will control whether a given rule should be treated as an "analysis test" rule. The parameter and its functionality will only be available via --experimental_analysis_testing_improvements until it is complete.

In this change, analysis_test = True enforces the restriction that the rule implementation function for that rule may not register actions.

See https://docs.google.com/document/d/17P2sgC6VPmcA7CcqC2p4cfU2Kaj6dwKiQfIXhtXTzl4/ for
details.

Progress toward #6237

RELNOTES: None.
PiperOrigin-RevId: 217562194
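The analysis_test = True restriction described in this commit can be sketched as a minimal Starlark rule. This is an illustrative sketch, not code from the issue: the rule and attribute names are hypothetical, and AnalysisTestResultInfo is the Bazel provider an analysis test returns to signal pass/fail.

```starlark
# Hypothetical minimal analysis test. With analysis_test = True, the
# implementation function may not register actions (no ctx.actions.*);
# instead it returns AnalysisTestResultInfo, and Bazel creates the test
# success/failure action on the rule's behalf.
def _my_analysis_test_impl(ctx):
    # Inspect the configured target under test here (its providers,
    # attributes, etc.) without executing anything.
    return [AnalysisTestResultInfo(
        success = True,
        message = "",
    )]

# Instances of this rule must be named with a "_test" suffix, as with
# any test rule.
my_analysis_test = rule(
    implementation = _my_analysis_test_impl,
    analysis_test = True,
    test = True,
    attrs = {
        "target_under_test": attr.label(),
    },
)
```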
bazel-io pushed a commit that referenced this issue Oct 18, 2018
This change enforces that for_analysis_test transitions occur only on attributes of rules with analysis_test=True. This restriction is separate from the whitelist restriction of non-analysis-test transitions.

Progress on #5574 and #6237

RELNOTES: None.
PiperOrigin-RevId: 217782561
bazel-io pushed a commit that referenced this issue Oct 23, 2018
The generated script will result in test pass/failure based on the info object returned by the implementation function.

Progress toward #6237

RELNOTES: None.
PiperOrigin-RevId: 218424616
bazel-io pushed a commit that referenced this issue Oct 25, 2018
…h //command_line_option.

Enforcing label-like syntax on these options will make it easier to migrate these functions to use actual labels once that is implemented.

Progress toward #5574 and #6237

RELNOTES: None.
PiperOrigin-RevId: 218734648
bazel-io pushed a commit that referenced this issue Oct 29, 2018
… analysis-test transitions.

Creating a transition with for_analysis_testing=True is still guarded by --experimental_analysis_testing_improvements, but after this change it no longer *also* requires --experimental_starlark_config_transitions.

Progress toward #6237

RELNOTES: None.
PiperOrigin-RevId: 219142313
bazel-io pushed a commit that referenced this issue Nov 5, 2018
* inputs affect the settings parameter passed to the transition implementation function; only declared inputs are included in that map.
* outputs constrain the settings which may be returned by the transition implementation function; the returned setting keys must exactly match the declared outputs.

Progress toward #5574 and #6237.

RELNOTES: None.
PiperOrigin-RevId: 220111780
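The inputs/outputs contract in this commit can be illustrated with a small transition sketch. The names here are hypothetical, and the //command_line_option: prefix (from the earlier commit in this thread) is the syntax for referring to native options.

```starlark
# Hypothetical transition sketch. `inputs` determines which settings
# appear in the `settings` dict passed to the implementation function;
# `outputs` must exactly match the keys of the returned dict.
def _force_k8_impl(settings, attr):
    # `settings` contains only the declared inputs, here the current cpu.
    _current_cpu = settings["//command_line_option:cpu"]
    return {"//command_line_option:cpu": "k8"}

force_k8 = transition(
    implementation = _force_k8_impl,
    inputs = ["//command_line_option:cpu"],
    outputs = ["//command_line_option:cpu"],
)
```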
bazel-io pushed a commit that referenced this issue Nov 5, 2018
…h attributes which use for_analysis_testing transitions.

Any cases exceeding this limit will result in a rule error being thrown on the analysis_test rule.

The limit is configurable via the new flag --analysis_testing_deps_limit, with a default of 500. The feature itself is still guarded behind --experimental_analysis_testing_improvements, so the new flag has no effect outside experimental mode.

Progress toward #5574 and #6237.

RELNOTES: None.
PiperOrigin-RevId: 220144957
bazel-io pushed a commit that referenced this issue Nov 9, 2018
…parameter of transition().

This implements recent design changes to analysis-test transitions. See the design document linked in #6237.

Progress toward #6237

RELNOTES: None.
PiperOrigin-RevId: 220845699
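Following these design changes, analysis-test transitions ended up with a dedicated constructor in Bazel's current API, analysis_test_transition, which takes only a settings dict. A hedged sketch of how such a transition attaches to an analysis-test rule's attribute (rule, attribute, and implementation names are hypothetical):

```starlark
# Hypothetical use of analysis_test_transition: the transition is
# declared from a settings dict and attached to an attribute via `cfg`,
# so the target under test is analyzed with the overridden options.
my_transition = analysis_test_transition(
    settings = {
        "//command_line_option:compilation_mode": "opt",
    },
)

my_transition_test = rule(
    implementation = _my_transition_test_impl,  # hypothetical impl
    analysis_test = True,
    attrs = {
        "target_under_test": attr.label(cfg = my_transition),
    },
)
```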
bazel-io pushed a commit that referenced this issue Nov 19, 2018
* Removes --experimental_analysis_testing_improvements flag, moving functionality previously guarded under this flag to be usable without the flag.
* Renames --experimental_allow_analysis_failures to --allow_analysis_failures

This change is a "silent launch" of these features. Documentation will follow soon: after release, bazel-skylib will be updated to incorporate a number of these improvements, and announcements with thorough documentation will be made then.

Progress toward #6237

RELNOTES: None.
PiperOrigin-RevId: 222106622
c-parsons (author) commented:

This is mostly done, but a couple of items still need fixing, so I will leave this open to track:

  1. The analysis_test_deps_limit limit should apply only to analysis test rules which use an analysis test transition. It currently applies to all analysis_test rules.
  2. In lenient-analysis-failures mode, targets which fail (and thus propagate AnalysisFailureInfo) should cause all dependers to also fast-fail with AnalysisFailureInfo. This was the proposal in the design, but it is not currently the case.

bazel-io pushed a commit that referenced this issue Feb 20, 2019
Under --allow_analysis_failures, if a rule has any dependencies that propagate AnalysisFailureInfo, the rule itself is almost certain to fail; it therefore now automatically re-propagates an AnalysisFailureInfo containing the specific failures of its dependencies, instead of an AnalysisFailureInfo caused by some more obscure exception.

Progress toward #6237. This implements a planned portion of the design that was previously forgotten.

This is technically a breaking change, but it should be very low-risk. (Previously, it was possible to depend on a failing target, not use any of its data, and therefore not fail. After this change, that is no longer possible.)

RELNOTES: None.
PiperOrigin-RevId: 234864320
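Under this propagation model, a failure test can inspect the accumulated causes. A hedged sketch (rule and attribute names hypothetical), using AnalysisFailureInfo's causes depset, where each cause carries the failing label and message:

```starlark
# Hypothetical failure test: the dependency is expected to fail
# analysis. With analysis failures allowed, the dep propagates
# AnalysisFailureInfo (re-propagated through intermediate deps after
# this change) rather than aborting the build.
def _expect_failure_test_impl(ctx):
    dep = ctx.attr.target_under_test
    if AnalysisFailureInfo not in dep:
        return [AnalysisTestResultInfo(
            success = False,
            message = "target unexpectedly analyzed successfully",
        )]
    causes = dep[AnalysisFailureInfo].causes.to_list()
    # Each cause has `label` and `message` describing one failure.
    found = any(["expected error" in c.message for c in causes])
    return [AnalysisTestResultInfo(
        success = found,
        message = "" if found else "did not find the expected error",
    )]
```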
bazel-io pushed a commit that referenced this issue Feb 25, 2019
Previously, a transitive-dependency count limit was applied to all analysis_test rules for simplicity, since no use case was identified for a "large" analysis test. However, relaxing this restriction to apply only to rules that use transitions facilitates a build_test-like Starlark rule which verifies that an existing real target analyzes correctly without executing any actions.

Progress toward #6237

RELNOTES: None.
PiperOrigin-RevId: 235580276
katre (Member) commented May 13, 2020:

@c-parsons Is there any remaining work here?

c-parsons (author) commented:

I believe we can close this, and track any issues with this framework in different bugs :)
