
Partially revert dep changes in #5651 (#5711)

Merged · 1 commit merged into rust-lang:master from fix-stack-overflow on Jul 13, 2018

Conversation

@alexcrichton (Member) commented Jul 11, 2018

Some logic around the dependencies of build script targets was tweaked slightly in a way that causes cargo to stack overflow by accidentally adding a dependency loop. This commit implements one of the strategies discussed in #5711 to fix this situation.

The problem here is that when calculating the deps of a build script we need the
build scripts of other packages, but the exact profile is somewhat difficult
to guess at the moment we're generating our build script unit. To solve this,
the dependencies on other build scripts' executions are added in a separate
pass after all other units have been assembled. At this point we should know for
sure that all build script executions are in the dependency graph, and we just
need to add a few more edges.

Closes #5708
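
For reference, a minimal sketch of the strategy described above (every type and function name here is an illustrative stand-in, not Cargo's actual internals): once the full unit graph exists, a single extra pass adds edges between build-script executions. No new units are created in this pass, so it cannot recurse.

    use std::collections::HashMap;

    // Illustrative stand-ins for Cargo's internal types (assumed, not real).
    #[derive(Clone, PartialEq, Eq, Hash)]
    struct Unit {
        pkg: &'static str,
        run_custom_build: bool, // is this a build-script execution?
    }

    // Post-processing pass: draw an edge from each build-script execution
    // to the build-script executions of the packages it depends on.
    fn connect_run_custom_build_deps(
        deps: &mut HashMap<Unit, Vec<Unit>>,
        pkg_deps: &HashMap<&'static str, Vec<&'static str>>,
    ) {
        // Index the RunCustomBuild units already present in the graph.
        let scripts: HashMap<&'static str, Unit> = deps
            .keys()
            .filter(|u| u.run_custom_build)
            .map(|u| (u.pkg, u.clone()))
            .collect();

        for (pkg, script) in &scripts {
            // Add one edge per dependency that also has a build script,
            // skipping self-edges (the accidental loop was the bug here).
            let extra: Vec<Unit> = pkg_deps[pkg]
                .iter()
                .filter_map(|d| scripts.get(d))
                .filter(|s| s.pkg != *pkg)
                .cloned()
                .collect();
            deps.get_mut(script).unwrap().extend(extra);
        }
    }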

@rust-highfive

r? @matklad

(rust_highfive has picked a reviewer for you, use r? to override)

@alexcrichton (Member Author)

r? @ehuss

The bug here was that a RunCustomBuild unit ended up depending on the same RunCustomBuild unit, causing a cycle which led to issues. I think you might know why this was originally changed, though! (I'm not entirely sure.)

@ehuss (Contributor) commented Jul 11, 2018

There was an obscure issue where this would poison the deps map with entries whose panic setting was incorrect when running cargo test --release. This is because it sets ProfileFor::Any, which is wrong for tests. I probably should have written a test for that. Generally, though, setting panic incorrectly on a dependency doesn't cause any problems.
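
To illustrate the constraint (a hypothetical mirror of the rule being described; the TestDependency variant and all signatures here are assumed for illustration, not quoted from Cargo's source): a dependency reached through a test target must have panic cleared, because the test harness can't be built with panic=abort, and ProfileFor::Any skips that adjustment.

    // Hypothetical sketch of the rule, not Cargo's actual code.
    #[derive(Clone, Copy, PartialEq)]
    enum ProfileFor {
        Any,            // no constraint: inherit `panic` as configured
        TestDependency, // reached from a test: `panic` must be cleared
    }

    #[derive(Clone, Copy, PartialEq, Debug)]
    struct Profile {
        panic_abort: bool,
    }

    fn dep_profile(configured: Profile, profile_for: ProfileFor) -> Profile {
        match profile_for {
            // Test harnesses can't link code built with panic=abort.
            ProfileFor::TestDependency => Profile { panic_abort: false },
            ProfileFor::Any => configured,
        }
    }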

I need to head out for a bit, but I should be back in a couple hours. When I get back I'll try to write a test to explain what I mean.

@alexcrichton (Member Author)

Ah ok, no worries! If you're unsuccessful in writing a test, I'll try to dig deeper as well.

@ehuss (Contributor) commented Jul 11, 2018

If you look at the test panic_abort_with_build_scripts and build it with test --release -v, you can see that a-lib gets built twice (once with panic and once without), when it should only be built once (without panic). The main problem is that ProfileFor should not be Any in that case.

I've always thought this method of creating a "tmp" unit to find dependencies was a little strange. If it just so happens to select a "test" target, then you end up with the same problem of infinite recursion. It's just lucky that the "lib" target is usually first.
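
To spell the scenario out (reconstructed from this comment; the real manifests are in the panic_abort_with_build_scripts test):

    foo: profile.release panic = "abort", uses build scripts,
         depends on a-lib

    cargo test --release -v
      expected: a-lib compiled once, without panic (dependencies of a
                test must not use panic=abort)
      actual:   a-lib compiled twice, once with panic and once without,
                because the tmp unit's ProfileFor::Any skipped the
                test-dependency adjustment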

I've been trying to think of some different solutions. Here are a few different ideas:

  • I'm wondering about using a different strategy of just calling resolve.deps() directly inside compute_deps_custom_build. It might need to duplicate some of the filtering logic (like checking optional deps), so that might not be great (a rough sketch of this idea follows the list).
  • Use a clone of deps when looking up the tmp unit to avoid the ProfileFor causing problems. I don't think it is realistic to try to set ProfileFor correctly, since there is a lot of logic to figure that out. If we stick with the strategy of using a "tmp" unit, it should probably be more picky about which target it uses.
  • The following alternate solution passes all the tests, but I don't think it is the right thing to do. In particular, the deps of some random target may differ from the lib target's (a doc unit, for example). Offhand I don't know how it might fail, though.
--- a/src/cargo/core/compiler/context/unit_dependencies.rs
+++ b/src/cargo/core/compiler/context/unit_dependencies.rs
@@ -205,11 +205,11 @@ fn compute_deps_custom_build<'a, 'cfg>(
         .expect("can't find package deps")
         .1;
     Ok(deps.iter()
-        .filter_map(|unit| {
-            if !unit.target.linkable() || unit.pkg.manifest().links().is_none() {
+        .filter_map(|dep_unit| {
+            if dep_unit.pkg == unit.pkg || !dep_unit.target.linkable() || dep_unit.pkg.manifest().links().is_none() {
                 return None;
             }
-            dep_build_script(unit, bcx)
+            dep_build_script(dep_unit, bcx)
         })
         .chain(Some((
             new_unit(
  • Maybe defer computing custom build deps until all other deps are computed? That way we could reliably get a good entry from the deps map.
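
A rough sketch of the first idea above (every name here is an illustrative stand-in; Cargo's actual resolve API differs):

    // Stand-in sketch: query the resolve graph directly for the packages a
    // build script needs, duplicating the optional-dep filtering inline.
    fn custom_build_dep_pkgs<'a>(
        pkg: &str,
        resolve_deps: impl Fn(&str) -> Vec<&'a str>, // stand-in for resolve.deps()
        dep_enabled: impl Fn(&str) -> bool,          // duplicated optional-dep filter
        has_links: impl Fn(&str) -> bool,            // only `links` packages matter
    ) -> Vec<&'a str> {
        resolve_deps(pkg)
            .into_iter()
            .filter(|d| dep_enabled(d) && has_links(d))
            .collect()
    }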

I'm going to keep thinking about this.

@ehuss (Contributor) commented Jul 11, 2018

I'm thinking it would be best to go with this change now (and on beta!), and then look at a different approach for fixing the panic issue later. I'm leaning towards using resolve.deps() directly, but that would need some care and might have other issues. Does that sound reasonable @alexcrichton?

@ehuss (Contributor) commented Jul 12, 2018

I thought of yet another possible solution: ehuss@df8118d
The idea is to call compute_deps() directly, since we're only interested in the immediate dependencies, and that won't perturb the deps map for this temporary lookup. That might result in a slight amount of extra work, but I don't think it should be much.

@alexcrichton (Member Author)

Hey, sorry, I was a bit swamped today and wasn't able to get around to this. Thanks for diagnosing this with a failing test; I think that probably means we shouldn't merge this as-is!

One thing I'm a little uncomfortable about with the current construction is that it relies on the current state of the deps map, which we're also filling in at the same time. Just as it's bad to rely on the lib target coming first (like it did before), I think it's probably bad to rely on filling things in in the right order too. Bummer that we keep hitting bugs no matter how this is constructed!

The idea of a postprocessing pass to draw more dependency edges may work, though. One reason that's not usually done is that drawing an edge may in turn generate more work. Here, though, for build scripts we're basically just drawing edges from an existing build script execution to other build script executions. That, I think, should definitely be possible to do after the fact.

I would also prefer to avoid using compute_deps to duplicate work here. We ran into a lot of issues with that historically, where it can quickly generate O(n^2) (or maybe O(2^n), I forget) work because Cargo keeps recursing fully down the tree each time. That may not be true in this particular case, but it's something to watch out for!
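
A toy illustration of that blow-up (assumed shape, not Cargo's code): recomputing a unit's dependency count from scratch at every call revisits shared subtrees repeatedly, while a shared memo map, like the deps map, visits each node once.

    use std::collections::HashMap;

    // Naive: re-walks the whole subtree on every call; on a deep chain of
    // units with shared dependencies this repeats exponentially much work.
    // (Assumes every node appears as a key in `g`.)
    fn dep_count_naive(g: &HashMap<u32, Vec<u32>>, n: u32) -> usize {
        g[&n].iter().map(|&d| 1 + dep_count_naive(g, d)).sum()
    }

    // Memoized (what filling a shared deps map gives you): each unit is
    // computed once, so total work is linear in the graph size.
    fn dep_count_memo(
        g: &HashMap<u32, Vec<u32>>,
        n: u32,
        memo: &mut HashMap<u32, usize>,
    ) -> usize {
        if let Some(&v) = memo.get(&n) {
            return v;
        }
        let v = g[&n].iter().map(|&d| 1 + dep_count_memo(g, d, memo)).sum();
        memo.insert(n, v);
        v
    }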

I'll look into making this a postprocessing step tomorrow, although if you've got some spare time and want to take a crack at it, feel free!

@alexcrichton (Member Author)

Ok! I've implemented the postprocessing strategy, re-r? @ehuss

@bors: delegate=ehuss

@bors (Collaborator) commented Jul 13, 2018

✌️ @ehuss can now approve this pull request

@ehuss (Contributor) commented Jul 13, 2018

This is excellent, much cleaner!

Would you mind adding a test to ensure that panic is not set in test --release for this scenario? I've been using the following diff to test; maybe add a new test that does something similar?

diff --git a/tests/testsuite/build_script.rs b/tests/testsuite/build_script.rs
index 17541233..039a8c52 100644
--- a/tests/testsuite/build_script.rs
+++ b/tests/testsuite/build_script.rs
@@ -3080,9 +3080,12 @@ fn panic_abort_with_build_scripts() {
         execs().with_status(0),
     );

+    p.root().join("target").rm_rf();
+
     assert_that(
-        p.cargo("test --release"),
+        p.cargo("test --release -v"),
         execs().with_status(0)
+        .with_stderr_does_not_contain("[..]panic[..]")
     );
 }

@alexcrichton (Member Author)

Certainly, done!

@alexcrichton (Member Author)

(rebased)

@ehuss (Contributor) commented Jul 13, 2018

@bors: r+

Thanks for doing this! Do you want to backport the same change to beta, or use the original revert change?

@bors (Collaborator) commented Jul 13, 2018

📌 Commit f9d4927 has been approved by ehuss

@bors (Collaborator) commented Jul 13, 2018

⌛ Testing commit f9d4927 with merge 5cddc4b...

bors added a commit that referenced this pull request Jul 13, 2018
Partially revert dep changes in #5651

@alexcrichton (Member Author)

I think we probably want to backport this patch to beta, as otherwise we'd just have the same bug you already fixed on beta, and this method hopefully solves both!

@bors (Collaborator) commented Jul 13, 2018

☀️ Test successful - status-appveyor, status-travis
Approved by: ehuss
Pushing 5cddc4b to master...

@bors merged commit f9d4927 into rust-lang:master Jul 13, 2018
alexcrichton added a commit to alexcrichton/cargo that referenced this pull request Jul 13, 2018
bors added a commit that referenced this pull request Jul 13, 2018
[beta] Partially revert dep changes in #5651

This is a beta backport of #5711
@alexcrichton deleted the fix-stack-overflow branch July 18, 2018
@ehuss modified the milestones: 1.29.0, 1.28.0 on Feb 6, 2022
bors added a commit that referenced this pull request Feb 21, 2023
Cleanup tests

When working on #11738 I noticed a few tests that needed to be cleaned up. It was suggested (in a #11738 comment) to split the test changes into their own PR.

- `bad_config::bad1`
  - This was trying to match `config` which would get matched in the wrong place due to the test name having `config` in it
  - Fixed by making the match `/config`
- `build_script::panic_abort_with_build_scripts`
  - This test was failing as it was making sure `panic` was not in the output. After this change, it could match the test name and fail.
  • To fix this I updated it to look at `panic=abort`, which appears to be what it was originally looking for (#5711). @ehuss please let me know if I misread what the test was testing.
- `cargo_command::cargo_subcommand_args`
  • This test had `#[test]` above `#[cargo_test]`, which caused it to throw an error; removing it fixed the issue.

During this time I also ran into issues with `cargo_remove` tests being named `fn case()`, which is different from how tests are normally named. I talked with @epage and it was decided that `fn case()` should probably be used for all `snapbox` tests, since the unique test name is already the folder name. I went ahead and renamed all tests within `cargo_add/` and `init/` to match this style.

This PR should be reviewed commit by commit.
Successfully merging this pull request may close these issues.

stack overflow when compiling jemallocator