Refactor filtering of valid targets via the engine #9614
Conversation
# Delete this line to force CI to run Clippy and the Rust tests. [ci skip-rust-tests]
# Delete this line to force CI to run the JVM tests. [ci skip-jvm-tests]
bulleted_list_sep = "\n * "

if len(targets_to_valid_configs) > 1:
    binary_target_addresses = sorted(tgt.address.spec for tgt in targets_to_valid_configs)
    raise ValueError(
        f"The `run` goal only works on one binary target but was given multiple targets that "
        f"can produce a binary:"
        f"{bulleted_list_sep}{bulleted_list_sep.join(binary_target_addresses)}\n\n"
        f"Please select one of these targets to run."
    )

target, valid_configs = list(targets_to_valid_configs.items())[0]
if len(valid_configs) > 1:
All factored out to now be `expect_single_config=True` :)
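For illustration, here is a minimal, self-contained sketch in plain Python (not the actual Pants API) of the pattern this comment refers to: the single-target and single-configuration checks from the diff above move into one shared helper, switched on with an `expect_single_config`-style flag. The helper name, types, and messages below are assumptions for the example.

```python
# Illustrative only: find_valid_configs and the dict-of-strings types are invented
# for this sketch; the point is that the len() checks above live in one shared place.
from typing import Dict, List


def find_valid_configs(
    targets_to_valid_configs: Dict[str, List[str]],
    *,
    goal_description: str,
    expect_single_config: bool = False,
) -> Dict[str, List[str]]:
    if expect_single_config:
        if len(targets_to_valid_configs) > 1:
            bulleted = "\n * ".join(sorted(targets_to_valid_configs))
            raise ValueError(
                f"{goal_description} only works on one target but was given multiple "
                f"targets:\n * {bulleted}\n\nPlease select one of these targets."
            )
        if len(targets_to_valid_configs) == 1:
            target, configs = next(iter(targets_to_valid_configs.items()))
            if len(configs) > 1:
                raise ValueError(
                    f"{goal_description} expected one configuration for {target}, "
                    f"but found {len(configs)}."
                )
    return targets_to_valid_configs
```

Each goal rule can then request the single valid target/configuration pair instead of repeating these checks itself.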
        exit_code, stdout = self.run_test_rule(
            config=SuccessfulConfiguration, targets=[self.make_target_with_origin()], debug=True,
        )
        assert exit_code == 0

    def test_multiple_debug_targets_fail(self) -> None:
This logic no longer lives in `test.py`. It's now in `engine/target.py`. It would be meaningless to include the test here because this is a `run_rule`-style test, so we would need to mock out the behavior of `await Get[TargetsToValidConfigurations]` and wouldn't actually test anything meaningful to `test.py`.
            {tgt_with_origin: tuple(configs) for tgt_with_origin, configs in mapping.items()}
        )

    @memoized_property
I think it makes sense to memoize this? The idea is to avoid having to re-iterate over the collection multiple times.
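A rough, self-contained sketch of the idea, using `functools.cached_property` as a stand-in for `memoized_property` and invented class/field names: the derived tuple is computed once on first access and reused, rather than re-iterating the mapping on every call.

```python
# Stand-in sketch: cached_property plays the role of memoized_property here; the
# class and field names are illustrative, not taken from this diff.
from functools import cached_property
from typing import Dict, Tuple


class TargetsToConfigs:
    def __init__(self, mapping: Dict[str, Tuple[str, ...]]) -> None:
        self.mapping = dict(mapping)

    @cached_property
    def all_configs(self) -> Tuple[str, ...]:
        # Flattening iterates every entry; caching means repeated accesses of
        # .all_configs do not walk the collection again.
        return tuple(c for configs in self.mapping.values() for c in configs)
```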
    def __init__(
        self,
        targets_with_origins: TargetsWithOrigins,
        *,
        valid_target_types: Iterable[Type[Target]],
        goal_name: str,
        goal_description: str,
This change was necessary so that we can say `test --debug` in an error message, instead of `the 'test' goal`.
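A minimal sketch of the motivation, using a hypothetical helper that builds the error text (not the actual signature from this diff): passing a free-form description lets the message name exactly what the user invoked.

```python
# Hypothetical helper, illustrating why a free-form goal_description is useful:
# the caller can pass "`test --debug`" and the error echoes the exact invocation.
from typing import Iterable


def no_valid_targets_message(goal_description: str, addresses: Iterable[str]) -> str:
    bulleted = "\n * ".join(sorted(addresses))
    return (
        f"No valid targets were found for {goal_description}. The given targets were:"
        f"\n * {bulleted}"
    )


# no_valid_targets_message("`test --debug`", ["src/python/app:tests"])
# mentions `test --debug`, rather than the generic "the 'test' goal".
```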
This builds on top of #9605 to better support the pattern of a `@goal_rule` filtering out irrelevant targets and converting them into its subclass' configurations.

Note that this pattern does not apply to every `@goal_rule`. Many `@goal_rule`s are simpler and have no downstream rules, so they can avoid configurations entirely, like `list_targets.py`. Others are more complex, like `lint.py` and `fmt.py`, which need to batch Configurations rather than working on a per-target basis.
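For readers unfamiliar with the pattern, here is a small, self-contained illustration in plain Python (not the Pants engine API; all class and field names are invented for the example): each goal declares a configuration class listing the fields it requires, and a shared helper filters out targets that cannot satisfy it, so the goal rule itself no longer does the filtering.

```python
# Illustrative only: Target, Configuration, and the helper below are invented for
# this example and are not the Pants engine types used in this PR.
from dataclasses import dataclass
from typing import Dict, List, Tuple, Type


@dataclass(frozen=True)
class Target:
    address: str
    fields: Tuple[str, ...]


class Configuration:
    required_fields: Tuple[str, ...] = ()

    @classmethod
    def is_valid(cls, target: Target) -> bool:
        return all(field in target.fields for field in cls.required_fields)


class BinaryConfiguration(Configuration):
    required_fields = ("sources", "entry_point")


def targets_to_valid_configs(
    targets: List[Target], config_types: List[Type[Configuration]]
) -> Dict[Target, List[Type[Configuration]]]:
    # Keep only targets that at least one goal-specific configuration accepts.
    mapping: Dict[Target, List[Type[Configuration]]] = {}
    for target in targets:
        valid = [ct for ct in config_types if ct.is_valid(target)]
        if valid:
            mapping[target] = valid
    return mapping
```

A goal like `run` would then consume the mapping for its own `BinaryConfiguration`, while a goal with no downstream rules (like listing targets) simply never uses configurations at all.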