Sometimes tests fail in GitHub Actions due to timeouts, e.g. https://github.com/sourcefrog/cargo-mutants/runs/7932363158?check_suite_focus=true.
It's not always the same test. It might be more common on Windows, but it does happen elsewhere.
I think this is because the CI machines are over-committed and sometimes slow down.
Options in order of preference:
It's possible that this is not just slow but actually somehow getting deadlocked?
````
failures:

---- cargo_mutants_in_replace_dependency_tree_passes stdout ----
thread 'cargo_mutants_in_replace_dependency_tree_passes' panicked at 'Unexpected failure.
code-3
stderr=```""```
command=`"D:\\a\\cargo-mutants\\cargo-mutants\\target\\debug\\cargo-mutants.exe" "mutants" "--no-times" "--no-copy-target" "--no-shuffle" "-d" "testdata/tree/replace_dependency"`
code=3
stdout="Copy source to scratch directory ... done\nUnmutated baseline ... ok\nFound 2 mutants to test\nsrc/lib.rs:6: replace is_even -> bool with false ... TIMEOUT\n2 mutants tested: 1 caught, 1 timeouts\n"
stderr=""
', /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f\library\core\src\ops\function.rs:248:5
stack backtrace:
   0: std::panicking::begin_panic_handler
             at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library\std\src\panicking.rs:584
   1: core::panicking::panic_fmt
             at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library\core\src\panicking.rs:142
   2: core::panicking::panic_display<assert_cmd::assert::AssertError>
             at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f\library\core\src\panicking.rs:72
   3: assert_cmd::assert::AssertError::panic<assert_cmd::assert::Assert>
             at C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\assert_cmd-2.0.4\src\assert.rs:1036
   4: core::ops::function::FnOnce::call_once<assert_cmd::assert::Assert (*)(assert_cmd::assert::AssertError),tuple$<assert_cmd::assert::AssertError> >
             at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f\library\core\src\ops\function.rs:248
   5: enum$<core::result::Result<assert_cmd::assert::Assert,assert_cmd::assert::AssertError> >::unwrap_or_else<assert_cmd::assert::Assert,assert_cmd::assert::AssertError,assert_cmd::assert::Assert (*)(assert_cmd::assert::AssertError)>
             at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f\library\core\src\result.rs:1484
   6: assert_cmd::assert::Assert::success
             at C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\assert_cmd-2.0.4\src\assert.rs:156
   7: cli::cargo_mutants_in_replace_dependency_tree_passes
             at .\tests\cli.rs:906
   8: cli::cargo_mutants_in_replace_dependency_tree_passes::closure$0
             at .\tests\cli.rs:901
   9: core::ops::function::FnOnce::call_once<cli::cargo_mutants_in_replace_dependency_tree_passes::closure_env$0,tuple$<> >
             at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f\library\core\src\ops\function.rs:248
  10: core::ops::function::FnOnce::call_once
             at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library\core\src\ops\function.rs:248
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

failures:
    cargo_mutants_in_replace_dependency_tree_passes

test result: FAILED. 54 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 143.12s
````
This might now be fixed by an increased time limit, and also by moving the test trees to separate workspaces, which might allow more parallelism.
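The "increased time limit" idea amounts to not trusting a baseline-derived timeout too literally on an over-committed runner: scale it up and enforce a floor. Here is a minimal sketch of that policy; the function name, the multiplier, and the floor value are illustrative choices, not cargo-mutants' actual implementation.

```rust
use std::time::Duration;

/// Illustrative helper (not cargo-mutants' real code): derive a mutant
/// timeout from the measured baseline build+test time, scaled up to
/// tolerate slow CI machines, and never below a minimum floor.
fn mutant_timeout(baseline: Duration, multiplier: u32, floor: Duration) -> Duration {
    let scaled = baseline * multiplier;
    if scaled < floor {
        floor
    } else {
        scaled
    }
}

fn main() {
    // Baseline took 20s; a 4x multiplier with a 60s floor gives 80s.
    let timeout = mutant_timeout(Duration::from_secs(20), 4, Duration::from_secs(60));
    println!("{}", timeout.as_secs()); // prints 80

    // A very fast baseline (5s) is caught by the floor instead.
    let timeout = mutant_timeout(Duration::from_secs(5), 4, Duration::from_secs(60));
    println!("{}", timeout.as_secs()); // prints 60
}
```

The floor matters for exactly the case in the log above: a short baseline on a fast moment of a noisy machine otherwise produces a timeout that a later, slower moment can't meet.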