
Refactor filtering of valid targets via the engine #9614

Merged: 2 commits, Apr 23, 2020

Conversation

Eric-Arellano
Contributor

This builds on top of #9605 to better support the pattern of a @goal_rule filtering out irrelevant targets and converting them into its subclass' configurations.

Note that this pattern does not apply to every @goal_rule. Many @goal_rules are simpler and have no downstream rules, so they can avoid configurations entirely, like list_targets.py. Others are more complex: lint.py and fmt.py, for example, need to batch Configurations rather than working on a per-target basis.
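The filtering pattern described above can be sketched in plain Python. All names below (Target, Configuration, BinaryConfiguration, targets_to_valid_configs) are simplified illustrative stand-ins, not Pants' real engine API:

```python
from dataclasses import dataclass
from typing import Dict, Iterable, List, Tuple, Type


# Simplified stand-ins for Pants' engine types; names are illustrative only.
@dataclass(frozen=True)
class Target:
    address: str
    field_names: Tuple[str, ...]


class Configuration:
    required_fields: Tuple[str, ...] = ()

    def __init__(self, address: str) -> None:
        self.address = address

    @classmethod
    def is_valid(cls, target: Target) -> bool:
        # A target is valid for this configuration if it has every required field.
        return all(f in target.field_names for f in cls.required_fields)


class BinaryConfiguration(Configuration):
    required_fields = ("sources", "entry_point")


def targets_to_valid_configs(
    targets: Iterable[Target],
    config_types: Iterable[Type[Configuration]],
) -> Dict[Target, List[Configuration]]:
    """Filter out targets that no requested configuration type can consume."""
    result: Dict[Target, List[Configuration]] = {}
    for tgt in targets:
        valid = [ct(tgt.address) for ct in config_types if ct.is_valid(tgt)]
        if valid:
            result[tgt] = valid
    return result
```

The point of centralizing this in the engine is that each @goal_rule can request the filtered mapping rather than re-implementing the filtering and error handling itself.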

# Delete this line to force CI to run Clippy and the Rust tests.
[ci skip-rust-tests]
# Delete this line to force CI to run the JVM tests.
[ci skip-jvm-tests]
Comment on lines -66 to -78
bulleted_list_sep = "\n * "

if len(targets_to_valid_configs) > 1:
    binary_target_addresses = sorted(tgt.address.spec for tgt in targets_to_valid_configs)
    raise ValueError(
        f"The `run` goal only works on one binary target but was given multiple targets that "
        f"can produce a binary:"
        f"{bulleted_list_sep}{bulleted_list_sep.join(binary_target_addresses)}\n\n"
        f"Please select one of these targets to run."
    )

target, valid_configs = list(targets_to_valid_configs.items())[0]
if len(valid_configs) > 1:
Contributor Author
All factored out to now be expect_single_config=True :)
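A minimal sketch of what a centralized expect_single_config-style check might look like. The function name expect_single_entry and the message wording here are hypothetical; the real logic lives in engine/target.py:

```python
from typing import Dict, List


def expect_single_entry(mapping: Dict[str, List[str]], *, goal_name: str) -> str:
    # Hypothetical sketch: raise if more than one target (or more than one
    # valid configuration for the single target) was matched, mirroring the
    # per-goal checks this PR factors out of run.py.
    bulleted_list_sep = "\n * "
    if len(mapping) > 1:
        addresses = sorted(mapping)
        raise ValueError(
            f"The `{goal_name}` goal only works on one target, but was given "
            f"multiple:{bulleted_list_sep}{bulleted_list_sep.join(addresses)}"
        )
    address, configs = next(iter(mapping.items()))
    if len(configs) > 1:
        raise ValueError(
            f"Expected one valid configuration for {address}, got {len(configs)}."
        )
    return configs[0]
```

With a shared check like this, goals such as `run` opt in with a flag instead of hand-rolling the same ValueError.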

exit_code, stdout = self.run_test_rule(
    config=SuccessfulConfiguration, targets=[self.make_target_with_origin()], debug=True,
)
assert exit_code == 0

def test_multiple_debug_targets_fail(self) -> None:
Contributor Author
This logic no longer lives in test.py; it's now in engine/target.py. It would be meaningless to include the test here because this is a run_rule-style test: we would need to mock out the behavior of await Get[TargetsToValidConfigurations], so it wouldn't actually test anything meaningful about test.py.

{tgt_with_origin: tuple(configs) for tgt_with_origin, configs in mapping.items()}
)

@memoized_property
Contributor Author
I think it makes sense to memoize this? The idea is to avoid having to re-iterate over the collection multiple times.
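For illustration, the same effect can be had with functools.cached_property in plain Python. Pants' memoized_property is its own utility and TargetsToConfigs below is an illustrative stand-in, but the caching behavior sketched here is analogous:

```python
from functools import cached_property
from typing import Dict, Tuple


class TargetsToConfigs:
    # Illustrative stand-in for the real TargetsToValidConfigurations class.
    def __init__(self, mapping: Dict[str, Tuple[str, ...]]) -> None:
        self._mapping = dict(mapping)

    @cached_property
    def configurations(self) -> Tuple[str, ...]:
        # The flattening runs once; later accesses return the cached tuple
        # instead of re-iterating over the collection.
        return tuple(cfg for cfgs in self._mapping.values() for cfg in cfgs)
```

The tradeoff is the usual one for memoization on a frozen-ish value object: a little extra memory per instance in exchange for not recomputing the flattened view on every access.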

def __init__(
    self,
    targets_with_origins: TargetsWithOrigins,
    *,
    valid_target_types: Iterable[Type[Target]],
    goal_name: str,
    goal_description: str,
Contributor Author
This change was necessary so that we can say 'test --debug' in an error message, instead of just the 'test' goal.
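A hypothetical sketch of why taking a free-form goal_description, in addition to goal_name, helps error messages. The function name and wording below are illustrative, not the actual Pants code:

```python
def no_applicable_targets_message(goal_description: str) -> str:
    # With a full description like "`test --debug`", the error can name the
    # exact invocation the user ran instead of only the bare goal name.
    return f"No applicable targets found for {goal_description}."
```

For example, a caller could pass goal_description="`test --debug`" so the message points at the specific flag combination rather than a generic goal.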

@Eric-Arellano Eric-Arellano merged commit 58ef25f into pantsbuild:master Apr 23, 2020
@Eric-Arellano Eric-Arellano deleted the dry-configs-via-engine branch April 23, 2020 00:09