Conversation
src/main/java/com/google/devtools/build/lib/bazel/bzlmod/ModuleFileGlobals.java (resolved review thread, outdated)
src/test/java/com/google/devtools/build/lib/bazel/bzlmod/BazelModuleResolutionFunctionTest.java (resolved review thread)
The implementation looks good to me, but I wonder whether we could further minimize the impact of nodeps (assuming that there will be many, see boost). As it is, Bazel would still need to download and parse the module files of all nodep modules some version of which is contained in the dep graph. If we didn't have compatibility_level, we wouldn't have to fetch nodeps whose version compares less than or equal to a module version that's already in the dep graph, knowing that they couldn't possibly become relevant. Maybe we can relax the handling of nodeps somewhat to still get this behavior? For example, we could specify that A with a nodep on B 1.0 (compatibility level 1) and B with a regular dep on B 2.0 (compatibility level 2) would not result in a build failure, but rather select B 2.0. I am bringing this up now since changing it later would be a breaking change.
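For concreteness, a sketch of that scenario in hypothetical MODULE.bazel files (none of these come from the PR; the module carrying the regular dep on B 2.0 is called c purely for illustration):

```
# a/MODULE.bazel -- nodep edge on b@1.0, which sits at compatibility level 1
module(name = "a", version = "1.0")
bazel_dep(name = "b", version = "1.0", repo_name = None)  # nodep edge

# c/MODULE.bazel -- regular dep on b@2.0, which sits at compatibility level 2
module(name = "c", version = "1.0")
bazel_dep(name = "b", version = "2.0")

# b/1.0/MODULE.bazel
module(name = "b", version = "1.0", compatibility_level = 1)

# b/2.0/MODULE.bazel
module(name = "b", version = "2.0", compatibility_level = 2)

# Under the relaxation proposed above, resolution would select b@2.0
# instead of failing the build.
```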
@Test
public void nodep_crossesCompatLevelBoundary() throws Exception {
Could you add a test for a fulfilled no-dep that crosses a compat level boundary (and thus causes the build to fail)?
Maybe I'm misunderstanding you, but a nodep edge crossing compat level boundaries is ignored (like this test asserts) unless there's a yes-dep edge crossing that compat level boundary too. But that would be a moot test because there would be an error with or without the nodep edge.
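A sketch of that behavior in hypothetical MODULE.bazel files (illustrative only, not the PR's actual test fixtures):

```
# bbb/MODULE.bazel -- the regular dep that actually brings ddd into the graph
module(name = "bbb", version = "1.0")
bazel_dep(name = "ddd", version = "1.0")

# ccc/MODULE.bazel -- nodep edge pointing at a ddd version at a different compat level
module(name = "ccc", version = "1.0")
bazel_dep(name = "ddd", version = "2.0", repo_name = None)  # nodep edge

# ddd/1.0/MODULE.bazel: module(name = "ddd", version = "1.0", compatibility_level = 1)
# ddd/2.0/MODULE.bazel: module(name = "ddd", version = "2.0", compatibility_level = 2)

# The nodep edge is ignored because fulfilling it would cross a compatibility
# level boundary, so ddd@1.0 is selected. The build only fails if a regular
# bazel_dep also points at ddd@2.0 -- and then it fails with or without the
# nodep edge.
```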
Sorry for the confusion, it took me some time to figure out how to specify the behavior I would like to see. It's not directly related to compatibility level crossing, but its implementation may constrain how we handle that.
I would like to see the following test pass (when pasted into DiscoveryTest.java). Crucially, ddd@1.0 never has its module file downloaded and also doesn't show up in the dep graph. This would make nodeps even less costly to add.
If we can't get this behavior that's also fine, I would just like to make sure we have tried at a point where we won't need a backwards incompatible change.
@Test
public void testNodep_ineffectiveUpdate() throws Exception {
scratch.file(
workspaceRoot.getRelative("MODULE.bazel").getPathString(),
"""
module(name='aaa',version='0.1')
bazel_dep(name='bbb',version='1.0')
bazel_dep(name='ccc',version='1.0')
""");
FakeRegistry registry =
registryFactory
.newFakeRegistry("/foo")
.addModule(
createModuleKey("bbb", "1.0"),
"module(name='bbb', version='1.0');bazel_dep(name='ddd', version='2.0')")
.addModule(
createModuleKey("ccc", "1.0"),
"module(name='ccc', version='1.0');bazel_dep(name='ddd',version='1.0',repo_name=None)")
.addModule(createModuleKey("ddd", "1.0"), "module(name='ddd', version='1.0')")
.addModule(createModuleKey("ddd", "2.0"), "module(name='ddd', version='2.0')");
ModuleFileFunction.REGISTRIES.set(differencer, ImmutableSet.of(registry.getUrl()));
EvaluationResult<DiscoveryValue> result =
evaluator.evaluate(ImmutableList.of(DiscoveryValue.KEY), evaluationContext);
if (result.hasError()) {
fail(result.getError().toString());
}
DiscoveryValue discoveryValue = result.get(DiscoveryValue.KEY);
assertThat(discoveryValue.depGraph().entrySet())
.containsExactly(
InterimModuleBuilder.create("aaa", "0.1")
.setKey(ModuleKey.ROOT)
.addDep("bbb", createModuleKey("bbb", "1.0"))
.addDep("ccc", createModuleKey("ccc", "1.0"))
.buildEntry(),
InterimModuleBuilder.create("bbb", "1.0")
.addDep("ddd", createModuleKey("ddd", "2.0"))
.setRegistry(registry)
.buildEntry(),
InterimModuleBuilder.create("ccc", "1.0")
.setRegistry(registry)
.addNodepDep(createModuleKey("ddd", "1.0"))
.buildEntry(),
InterimModuleBuilder.create("ddd", "2.0").setRegistry(registry).buildEntry());
// Crucially, ddd@1.0's module file is never fetched during discovery.
assertThat(discoveryValue.registryFileHashes().keySet())
.containsExactly(
registry.getUrl() + "/modules/bbb/1.0/MODULE.bazel",
registry.getUrl() + "/modules/ccc/1.0/MODULE.bazel",
registry.getUrl() + "/modules/ddd/2.0/MODULE.bazel");
}
I have reservations about such a test case because
- it's technically unsafe to assert that a lower version cannot possibly affect the dependency graph, since a lower version can have a higher version of a dep. (See my other comment.)
- yes-dep edges have the same behavior -- if the ccc@1.0 -> ddd@1.0 edge in your test case were a normal bazel_dep, we would also not skip that edge.
- Isn't that only unsafe in the sense that changing this behavior later would be a breaking change? I don't think that taking the versions of deps of "ineffective" nodeps into account is required for the resolution result to be considered reasonable.
- That's correct and I certainly don't want to change that. But for both a "nodep" dep and a "lower version constraint" (whatever we brand this new feature as), it's potentially wasteful.
I don't consider making nodeps behave like deps in this sense a big problem, but it could come back to hurt us later. Nodeps will mostly be used in N:1 dep situations with somewhat large (protobuf) or really large (boost) N. In many cases, users at a slightly larger scale will use many of these N modules and likely have them updated to latest anyway, which would mean that nodeps as is would roughly double the number of module files visited while not being relevant for the resolution outcome most of the time. Gradual performance degradation is what worries me most here as it is 1) hard to attribute to nodeps and 2) hard to fix retroactively as per your point 1. I may be arguing for premature optimization here, but it's unfortunately not just an internal algorithm :-)
That's already the case IIUC -- the A->B1.0 edge is not examined during selection, so we never notice the existence of B1.0. (It's the same as the A-regular->B1.0 and B-nodep->B2.0 case)
I see. In that case, how about making it so that we don't even fetch the module file of B1.0 during discovery? In any case, could you add assertions on the registry files fetched during discovery to the unit tests?
I'm more inclined to leave adding that behavior to a later PR, or even skip it. The following two claims seem a bit contradictory to me:
If the older versions cannot possibly become relevant, why would dropping them be a breaking change? :) It seems to me that the older versions can technically become relevant, because newer versions could downgrade dependencies. It's a different question whether that's desirable, but then again, it seems that this is not a nodep-specific question (normal bazel_dep edges can also point to lower versions of deps).
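For example, a hedged sketch with hypothetical modules b and c illustrating how a skipped lower version could still matter:

```
# b/2.0/MODULE.bazel -- the newer b pins an older c
module(name = "b", version = "2.0")
bazel_dep(name = "c", version = "1.0")

# b/1.0/MODULE.bazel -- the older b depends on a newer c
module(name = "b", version = "1.0")
bazel_dep(name = "c", version = "2.0")

# If discovery skipped b@1.0 because b@2.0 is already in the dep graph, the
# c@2.0 requirement would never be seen, so the "irrelevant" lower version of b
# could still influence which version of c gets selected.
```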
Force-pushed from 605ff15 to cf1b6bf.
Edit: I suggested a test case that shows what I would like to see; getting it to pass later may require a "breaking change".
src/test/java/com/google/devtools/build/lib/bazel/bzlmod/DiscoveryTest.java (resolved review thread, outdated)
PiperOrigin-RevId: 726734397
Change-Id: I1ca7a7a1228da5e4c2e4b5d6983a2ec97b31a4b8
This PR implements the "nodep" edges from https://docs.google.com/document/d/1JsfbH9kdMe3dyOY-IR8SUakS541A7OM8pQcKpxTRMRs/edit?tab=t.0, using the syntax of bazel_dep(..., repo_name=None). The behavior is that these edges are "unfulfilled" unless the module they refer to already exists in the dep graph by some other means.
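For illustration, a root MODULE.bazel using the new syntax might look like this (module names and versions are hypothetical, not taken from the PR):

```
module(name = "myproject", version = "1.0")

# Regular dep: always pulls bbb into the dep graph.
bazel_dep(name = "bbb", version = "1.0")

# Nodep edge: does not itself pull ccc into the dep graph; it only takes effect
# if ccc already ends up in the graph by some other means.
bazel_dep(name = "ccc", version = "2.0", repo_name = None)
```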
Most of the changes are in the Discovery class -- I reorganized the code in there to hopefully help with readability, given the new multi-round discovery logic.

Also changed ModuleFileValue.Key to no longer take the applicable override next to the module key -- ModuleFileFunction now gets the root module from Skyframe itself and looks up the correct override. The old setup was always weird (what does it mean to request foo@1.0 with an incorrect override??).

Beyond that, I changed ModuleFileValue implementations to become records. Just for the heck of it.

Work towards #25214