
check subcommand should force rebuild #6664

Open · wants to merge 7 commits into base: master
Conversation

@JoshMcguigan commented Feb 14, 2019

This builds on the work from #5944, and pushes forward rust-lang/rust-clippy#2604. It ensures cargo check always does a rebuild.

The one thing I'm not sure of is whether this should be optional behind a command line flag. My initial thought is that most users would expect by default that cargo check always worked this way (rebuilding and showing warnings every time it is run), but users running cargo check on a file system watcher might have some issues.

@rust-highfive commented Feb 14, 2019

Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @alexcrichton (or someone else) soon.

If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes.

Please see the contribution instructions for more information.

@JoshMcguigan (Author) commented Feb 14, 2019

It looks like three tests failed, but only on nightly. I'm going to look into this, but any ideas of what could be the cause here?

profile_targets::profile_selection_check_all_targets
profile_targets::profile_selection_check_all_targets_release
profile_targets::profile_selection_check_all_targets_test

-- edit --

I see now that all three of those tests have a code block which returns early if not running nightly. So I suspect these tests would have failed for all versions of Rust if not for that. I'll work on fixing them up. As a side note, they say they should be removed once 1.27 is stable. Should we just remove these early returns now?

if !is_nightly() {
    // This can be removed once 1.27 is stable, see below.
    return;
}

@JoshMcguigan (Author) commented Feb 14, 2019

The latest commit should fix these tests, assuming we want to remove the early return for !is_nightly(), and also assuming we want the default behavior of cargo check to force a rebuild. Some of this would need to be reverted if the new cargo check behavior goes behind a command line option.

@dwijnand (Member) commented Feb 14, 2019

most users would expect by default that cargo check always worked this way (rebuilding and showing warnings every time it is run)

I don't think most users expect cargo check to rebuild; quite the opposite, I think they expect it to be fast and avoid rebuilding. They might want to see warnings every time (I'm with them) but, again, I don't think they expect it, given how long cargo check has existed by now.

If we agree cargo check should redisplay warnings, then we should find a way to cache and redisplay the warnings every time. If not, this should be behind a flag. (--rebuild?)

@BatmanAoD commented Feb 14, 2019

@dwijnand I was pretty surprised to discover that cargo check doesn't re-display errors. But since it doesn't "build" in the first place, I wouldn't expect it to "rebuild" either way!

As long as this PR doesn't change the behavior of cargo build (i.e. in cargo build; cargo check; cargo build, the second build would still be a no-op), then I don't understand why this change would be surprising to users.

@dwijnand (Member) commented Feb 14, 2019

As long as this PR doesn't change the behavior of cargo build (i.e. in cargo build; cargo check; cargo build, the second build would still be a no-op), then I don't understand why this change would be surprising to users.

It's surprising to users not because of any interaction with cargo build, but because now every time they ran cargo check their entire crate would be recompiled, making cargo check worse than cargo build, given a big enough crate.

@JoshMcguigan (Author) commented Feb 15, 2019

If we agree cargo check should redisplay warnings then we should find a way to cache and redisplay the warnings every time.

Yes, I should have said users expect to see the error messages each time, so I think caching these warnings would be great. Any tips on implementing this caching? I'm new to the cargo codebase, so I'm not sure how caching is typically handled.

@dwijnand (Member) commented Feb 15, 2019

I don't know exactly, but the cargo process calls rustc, and does so via a rustc wrapper, so finding that will get you halfway to where it executes it.

Then you need to go to the place where it uses the fact that the build artefacts are "fresh" to avoid compiling, and display the warnings you cached there.

If I can find more specific code locations I'll comment again.
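
As an illustration of the idea (a minimal sketch with invented names such as `DiagnosticCache`, `record`, and `replay`; this is not cargo's actual API or storage layout), such a cache could record each real rustc invocation's diagnostics per unit and replay them when the freshness check skips the unit:

```rust
use std::collections::HashMap;

// Hypothetical sketch of a per-unit diagnostic cache. Real cargo would key
// entries on the unit's fingerprint and persist them under target/, but the
// idea is the same: record diagnostics when rustc runs, replay when it doesn't.
#[derive(Default)]
struct DiagnosticCache {
    // unit name -> diagnostics captured from the last real rustc invocation
    saved: HashMap<String, Vec<String>>,
}

impl DiagnosticCache {
    // Called after actually running rustc for `unit`: remember its output.
    fn record(&mut self, unit: &str, diagnostics: Vec<String>) {
        self.saved.insert(unit.to_string(), diagnostics);
    }

    // Called when `unit` is fresh and rustc is skipped: redisplay old output.
    fn replay(&self, unit: &str) -> &[String] {
        self.saved.get(unit).map(|d| d.as_slice()).unwrap_or(&[])
    }
}
```

With something along these lines, cargo check; cargo check could keep its freshness behaviour while still printing the first run's warnings on the second run.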

@sanmai-NL commented Feb 15, 2019

@dwijnand: cargo check shouldn't compile, it should only perform the first few steps in the build pipeline, as documented at https://rust-lang-nursery.github.io/edition-guide/rust-2018/cargo-and-crates-io/cargo-check-for-faster-checking.html. Maybe those first steps take a significant amount of time, but insofar as that's unacceptable, that'd be an issue in itself.

@sanmai-NL commented Feb 15, 2019

@BatmanAoD:

cargo build; cargo check; cargo build

should behave differently: cargo check shouldn't reuse cargo build results at all. That's why it's called 'check': the main use case for the check subcommand is to check e.g. syntax again, irrespective of whether some build succeeded earlier and whether the source code/build config has changed in the meantime.

@dwijnand (Member) commented Feb 15, 2019

@dwijnand: cargo check shouldn't compile, it should only perform the first few steps in the build pipeline, as documented at rust-lang-nursery.github.io/edition-guide/rust-2018/cargo-and-crates-io/cargo-check-for-faster-checking.html. Maybe those first steps take a significant amount of time, but insofar as that's unacceptable, that'd be an issue in itself.

Right, I said "compile" as shorthand, but you're right that it stops earlier in the pipeline, running (what's commonly referred to as) the frontend of the compiler.

@BatmanAoD:

cargo build; cargo check; cargo build

should behave differently: cargo check shouldn't reuse cargo build results at all. That's why it's called 'check': the main use case for the check subcommand is to check e.g. syntax again, irrespective of whether some build succeeded earlier and whether the source code/build config has changed in the meantime.

cargo check actually doesn't reuse cargo build's results, which is tracked in #3501. I disagree that 'check' shouldn't make use of previous results, but I agree it should redisplay warnings, either by default or behind a flag. (--show-warnings? --show-lints?)

@sanmai-NL commented Feb 15, 2019

@dwijnand: I agree it should redisplay warnings then, but by default, since I claim that'd be the expected behavior. Since running the first steps in the compilation pipeline is supposed to be low-cost, the desired default behavior would be easily implemented by not reusing previous build results.

@dwijnand (Member) commented Feb 15, 2019

Since running the first steps in the compilation pipeline is supposed to be low-cost, the desired default behavior would be easily implemented by not reusing previous build results.

I think it's low-cost compared to a full build, but it would be slow compared to the status quo, in which cargo uses freshness checks.
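
For context, the freshness check being discussed is, at its simplest, an input-vs-output staleness comparison. Below is a toy mtime-based version (illustrative only; cargo's real fingerprinting also tracks compiler flags, features, and dependency fingerprints, so this is a sketch, not cargo's implementation):

```rust
use std::fs;
use std::path::Path;

// Toy freshness check: the artifact is "fresh" if it exists and is at least as
// new as every source input. Cargo's real check tracks much more state.
fn is_fresh(artifact: &Path, inputs: &[&Path]) -> std::io::Result<bool> {
    let out_mtime = match fs::metadata(artifact) {
        Ok(meta) => meta.modified()?,
        Err(_) => return Ok(false), // no artifact yet: definitely stale
    };
    for input in inputs {
        if fs::metadata(input)?.modified()? > out_mtime {
            return Ok(false); // an input changed after the artifact was built
        }
    }
    Ok(true)
}
```

When every unit passes a check like this, cargo can skip invoking rustc entirely, which is exactly why no warnings get reprinted today.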

@sanmai-NL commented Feb 15, 2019

But it's called 'check'; when you run that explicitly, it's reasonable that some cost is incurred. On the contrary, it's not reasonable that e.g. cargo check doesn't redisplay its messages because of caching.

@dwijnand (Member) commented Feb 15, 2019

But it's called 'check'; when you run that explicitly, it's reasonable that some cost is incurred. On the contrary, it's not reasonable that e.g. cargo check doesn't redisplay its messages because of caching.

While I can agree with the latter, I'd consider it a performance regression if cargo check took too long in a big codebase where no file was touched.

I'm optimistic we can find a way to cache and uncache warnings without changing the freshness behaviour.

@BatmanAoD commented Feb 16, 2019

@JoshMcguigan :

Any tips on implementing this cacheing? I'm new to the cargo codebase so I'm not sure how cacheing is typically handled.

It looks like Eric Huss has some pointers; see the links in his comment here: https://internals.rust-lang.org/t/clippy-behavior-affected-by-incremental-compilation/9436/2?u=batmanaod

@sanmai-NL : Yes, I was asking specifically about the last part of the sequence, the second cargo build. My point was that cargo check should never invalidate the cargo build cache, regardless of whether or not the build cache has any effect on the check behavior. (I don't know if this PR actually would cause the build cache to be invalidated, but it's something I'd want to check to be sure.)

@dwijnand :

cargo check actually doesn't reuse cargo build's results

...hold on, that's surprising to me. If cargo check doesn't reuse the build results, why doesn't it re-analyze files that have already been compiled?

@dwijnand (Member) commented Feb 16, 2019

It uses its own results, not build's results.

@JoshMcguigan (Author) commented Feb 16, 2019

After further review of the way clippy integrates with cargo (https://github.com/rust-lang/rust-clippy/blob/master/src/main.rs#L103-L111), it looks like clippy does need a way to ensure rustc will actually be invoked. This means caching the results of a rustc invocation within cargo and re-displaying them does not solve the clippy use case (clippy team, please let me know if I am misunderstanding here).

I propose this PR add a --force-rebuild CLI flag to cargo check to solve the clippy use case, and a later PR can implement caching/re-displaying warning messages when the user runs something like cargo check; cargo check.

JoshMcguigan added some commits Feb 16, 2019

@dwijnand (Member) commented Feb 16, 2019

LGTM. The Cargo.toml isn't needed in the test (one is generated automatically). Also, maybe we should add --force-rebuild to cargo build. But I'm happy to land this as is.

I'd like to hear what @alexcrichton or @ehuss think, though, before landing this.

@JoshMcguigan (Author) commented Feb 16, 2019

The Cargo.toml isn't needed in the test (one is generated automatically)

Thanks, I'll remove this.

Also, maybe we should add --force-rebuild to cargo build

My primary concern is the name of the arg. It only rebuilds the crate, not its dependencies. That may not be intuitive if applied to cargo build, but I'm not sure what the use case would be for cargo build --force-rebuild, so I can't say for sure. Along the same lines, perhaps cargo check --force-recheck makes more sense than cargo check --force-rebuild.

Not to muddy the waters here; I just want to make sure this is considered.

@Nemo157 (Contributor) commented Feb 16, 2019

I just tested this in a complex workspace of mine, and there seem to be some bugs in how the force_rebuild flag is actually handled by the build config. Probably not a blocker for getting it exposed in the CLI, but I'll try to remember to open a bug ticket in a few days.

There are 11 crates in this workspace, 2 of which are excluded, so there should be 9 being forced to rebuild; but the only ones that I have seen consistently included are embrio-async-dehygiene and local. These two are the only crates that aren't dependencies of other crates in the workspace, so it seems like the logic that excludes dependencies from being forced can also exclude packages that were explicitly selected:

> ../check-force-rebuild/target/debug/cargo check --all --exclude embrio-nrf51 --exclude pca10031 --force-rebuild
    Checking embrio-native v0.1.0 (/Users/nemo157/sources/embrio-rs/embrio-native)
    Checking embrio-async-dehygiene v0.1.0 (/Users/nemo157/sources/embrio-rs/embrio-async/dehygiene)
    Checking embrio v0.1.0 (/Users/nemo157/sources/embrio-rs/embrio)
    Checking hello v0.0.0 (/Users/nemo157/sources/embrio-rs/examples/apps/hello)
    Checking local v0.0.0 (/Users/nemo157/sources/embrio-rs/examples/local)
    Finished dev [unoptimized + debuginfo] target(s) in 1.17s

> ../check-force-rebuild/target/debug/cargo check --all --exclude embrio-nrf51 --exclude pca10031 --force-rebuild
    Checking embrio-native v0.1.0 (/Users/nemo157/sources/embrio-rs/embrio-native)
    Checking embrio-async-dehygiene v0.1.0 (/Users/nemo157/sources/embrio-rs/embrio-async/dehygiene)
    Checking local v0.0.0 (/Users/nemo157/sources/embrio-rs/examples/local)
    Finished dev [unoptimized + debuginfo] target(s) in 0.99s

@Nemo157 (Contributor) commented Feb 18, 2019

I took a glance through the code, and it looks to me like the issue is that if you hit

compile(cx, jobs, plan, unit, exec, false)?;

with a dependency that is also one of the top-level targets, then because of

if !cx.compiled.insert(*unit) {
    return Ok(());
}

it will not notice that that target is meant to be rebuilt. (I definitely did not do enough investigation to confirm this, but if the order in which the top-level targets have their compile jobs generated is non-deterministic, then this explains the behaviour I was seeing very well.) If I'm correct, then this looks like an issue that's been present since force_rebuild was added for cargo fix, so IMO it would be best fixed as a separate change.
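
The failure mode described here can be reproduced in miniature. The sketch below is hypothetical (`schedule` and its signature are invented stand-ins for cargo's real job-queue walk, not its actual code): the seen-set records only that a unit was visited, not whether any visit asked for a forced rebuild, so a unit reached first as an unforced dependency never gets its forced top-level visit:

```rust
use std::collections::HashSet;

// Hypothetical miniature of the scheduling walk: `deps` is a list of
// (parent, dependency) edges, and `jobs` collects (unit, forced) pairs in the
// order they are scheduled. The bug: `compiled` deduplicates visits, so a
// later forced visit of an already-seen unit is silently dropped.
fn schedule(
    unit: &str,
    force: bool,
    deps: &[(&str, &str)],
    compiled: &mut HashSet<String>,
    jobs: &mut Vec<(String, bool)>,
) {
    if !compiled.insert(unit.to_string()) {
        return; // already scheduled; this visit's `force` flag is lost
    }
    for (parent, dep) in deps {
        if *parent == unit {
            schedule(dep, false, deps, compiled, jobs); // deps are never forced
        }
    }
    jobs.push((unit.to_string(), force));
}
```

Scheduling top-level targets a then b, where a depends on b, leaves b scheduled unforced even though it was explicitly selected; which target loses its forced flag depends on visit order, matching the non-deterministic behaviour observed above.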


I think your test case is behaving correctly; from the docs on workspace.members:

# Optional key, inferred from path dependencies if not present.
# Additional non-path dependencies that should be included must be given here.

So only b is part of the workspace, and it correctly ignores checking c (and it appears --exclude with a package that is not in the workspace doesn't give an error/warning (#6678) for d).

@JoshMcguigan (Author) commented Feb 19, 2019

@Nemo157 Thanks for the detailed follow up.

The test case below fails, and I believe it covers your scenario. It seems like the --exclude was not a factor in the bug.

Feel free to tag me when you file the bug report, as I'd like to attempt solving this one. Or, if you'd like, I can write up the bug report. That is assuming @dwijnand doesn't think this should be fixed before merging this PR.

// Confirm that force rebuild works correctly with dependencies which are also targets
#[test]
fn check_force_rebuild_with_workspace_dependencies() {
    let p = project()
        .file(
            "Cargo.toml",
            r#"
            [workspace]
            members = ["a", "b", "c", "d"]
        "#,
        )
        .file(
            "a/Cargo.toml",
            r#"
            [package]
            name = "a"
            version = "0.1.0"

            [dependencies]
            b = { path = "../b" }
            c = { path = "../c" }
            d = { path = "../d" }
            e = { path = "../e" }
            f = { path = "../f" }
        "#,
        )
        .file("a/src/lib.rs", "")
        .file("b/Cargo.toml", &basic_manifest("b", "0.0.1"))
        .file("b/src/lib.rs", "")
        .file("c/Cargo.toml", &basic_manifest("c", "0.0.1"))
        .file("c/src/lib.rs", "")
        .file("d/Cargo.toml", &basic_manifest("d", "0.0.1"))
        .file("d/src/lib.rs", "")
        .file("e/Cargo.toml", &basic_manifest("e", "0.0.1"))
        .file("e/src/lib.rs", "")
        .file("f/Cargo.toml", &basic_manifest("f", "0.0.1"))
        .file("f/src/lib.rs", "")
        .build();

    p.cargo("check --all")
        .with_stderr_contains("[CHECKING] a [..]")
        .with_stderr_contains("[CHECKING] b [..]")
        .with_stderr_contains("[CHECKING] c [..]")
        .with_stderr_contains("[CHECKING] d [..]")
        .with_stderr_contains("[CHECKING] e [..]")
        .with_stderr_contains("[CHECKING] f [..]")
        .run();

    p.cargo("check --all --force-rebuild")
        .with_stderr_contains("[CHECKING] a [..]")
        .with_stderr_contains("[CHECKING] b [..]")
        .with_stderr_contains("[CHECKING] c [..]")
        .with_stderr_contains("[CHECKING] d [..]")
        .with_stderr_contains("[CHECKING] e [..]")
        .with_stderr_contains("[CHECKING] f [..]")
        .run();
}

@alexcrichton (Member) commented Feb 19, 2019

Thanks for the PR! If this ends up adding a new flag I think the only thing I'd add is that we'll want to be sure to put it behind a feature flag (like -Z unstable-options) since it's a new feature.

@Manishearth (Member) commented Feb 19, 2019

@dwijnand (Member) commented Feb 19, 2019

Yeah, it wouldn't be usable by stable Clippy until it's a stabilised feature in Cargo.

@Manishearth (Member) commented Feb 19, 2019

@dwijnand (Member) commented Feb 19, 2019

Again? When was this possible before?

@Manishearth (Member) commented Feb 19, 2019

@dwijnand (Member) commented Feb 19, 2019

The other alternative is to be like cargo fix and uplift cargo clippy into the cargo repo, where it can programmatically set force_rebuild without having to wait for --force-rebuild to be stable.

@Manishearth (Member) commented Feb 19, 2019

@JoshMcguigan (Author) commented Feb 20, 2019

Thanks for the PR! If this ends up adding a new flag I think the only thing I'd add is that we'll want to be sure to put it behind a feature flag (like -Z unstable-options) since it's a new feature.

I just pushed a commit which takes care of this. I used the unstable build-plan flag as a guide, but I noticed that running cargo run -- build --help shows the build-plan option even without passing -Z to show unstable options. This behavior carries over into my implementation as well; I'm not sure whether that is intended. However, if you actually try to use the force-rebuild flag without -Z unstable-options you'll get an error (this was confirmed with a unit test).
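
The gating described here can be sketched as a post-parse validation step (hypothetical function name and error text, not cargo's actual code): the flag always parses, so it appears in --help, but using it without -Z unstable-options is rejected.

```rust
// Hypothetical sketch of gating a new CLI flag behind -Z unstable-options:
// parsing always succeeds (so the flag appears in --help output), and this
// check runs afterwards to reject unstable usage on the stable path.
fn validate_force_rebuild(force_rebuild: bool, unstable_options: bool) -> Result<(), String> {
    if force_rebuild && !unstable_options {
        return Err(
            "the `--force-rebuild` flag is unstable, pass `-Z unstable-options` to enable it"
                .to_string(),
        );
    }
    Ok(())
}
```

Validating after parsing, rather than hiding the flag from the parser, is also why the flag shows up in --help regardless of -Z.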

@JoshMcguigan (Author) commented Feb 20, 2019

The test failure is in the meta_test_deep_trees_from_strategy test, so I think it will pass if retried.

@JoshMcguigan (Author) commented Feb 20, 2019

@Nemo157 For some reason I am not able to duplicate your issue under cargo fix on the master branch. It somehow seems to affect only cargo check, as demonstrated in the failing test above (or I misconfigured the test for cargo fix).

Given that, perhaps we want to reconsider whether the fix should merge with this PR? For now I've pushed the fix in a commit on a different branch (9b3138c), but if everyone agrees I can merge that commit into this branch/PR.

@Nemo157 (Contributor) commented Feb 20, 2019

cargo fix always working seems to be a side effect of it implying --all-targets, maybe because it includes the cfg(test) mode build of each package, which is not a dependency of the other targets. Using check --force-rebuild --all --all-targets in my local workspace correctly builds all local crates.

@alexcrichton (Member) commented Feb 20, 2019

I think it's fine for clippy to use "unstable" features of Cargo, although perhaps not literally unstable features as they are today. In any case clippy, as an official tool now, should be fine to have support in Cargo which isn't intended for stable consumption elsewhere. (Note that this interface, should it want to exist, would probably need to be designed rather than tacked on to something else existing.)

I also think that moving cargo clippy into Cargo itself is fine so long as cargo clippy doesn't get enough development to put a strain on this repository.

@Manishearth (Member) commented Feb 20, 2019

@JoshMcguigan (Author) commented Feb 21, 2019

Let me know if there needs to be any additional work here before merging. The test failure is in the meta_test_deep_trees_from_strategy test, so I think it will pass if retried.

@benesch commented Mar 26, 2019

CI seems happy with this—is there anything preventing it from being merged?

@ehuss added the T-cargo label Mar 28, 2019

@ehuss (Contributor) commented Mar 28, 2019

I don't think adding a flag of this nature is ultimately the direction that we want to go. Caching shouldn't take very long to add once it is ready upstream. cargo-clippy is in the process of moving into this repo, where we will have some more flexibility on the clippy side of things.

I would like to propose the following: merge this PR without any command-line flags, and immediately stable. The intent would be that a caching implementation would remove this change in the near future. I'm not convinced that "running rustc again" is significantly painful: on cargo's source, for me with a warm cache, it takes only a couple of seconds. On servo, it takes about 7s. Clippy is a bit slower, but still reasonable.

@rfcbot fcp merge

If this FCP does not pass, I propose closing this PR instead and waiting for the aforementioned items. Adding an unstable flag here would likely have the same timeframe as the caching implementation.

@rfcbot (Collaborator) commented Mar 28, 2019

Team member @ehuss has proposed to merge this. The next step is review by the rest of the tagged team members:

Concerns:

Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!

See this document for info about what commands tagged team members can give me.

@withoutboats (Contributor) commented Mar 28, 2019

I'm not sure about merging this. I have the experience (and maybe many others do) of habitually running commands in a sort of impulsive and subconscious way as I do my work: git status, for example. If cargo check is one of these for someone (because normally it finishes instantaneously, and only doesn't when they've actually done something they want to check), this change would be really disruptive, as they are now taken out of the flow by this command.

@joshtriplett (Member) commented Mar 28, 2019

@rfcbot reviewed

Note that while I do want to see the effect of this, I'd like to see the caching as well (both in the normal and clippy-enabled cases, and in an incremental way). I regularly do clippy/edit/clippy cycles and would like those to go as quickly as possible.

@ehuss (Contributor) commented Mar 29, 2019

Sounds like a reasonable concern.
@rfcbot concern habitual-check-performance

@joshtriplett (Member) commented Mar 29, 2019

@ehuss @withoutboats I do think this is a great argument for caching. There's no good reason to rebuild just to display results. (And ideally, "fix one bug and recheck to get the rest" should go as quickly as possible, not take the same amount of time as the first check.)
