From b67516a47deceadd378fb09d4587a3154f2f1974 Mon Sep 17 00:00:00 2001
From: Miles Johnson
Date: Thu, 23 May 2024 10:44:48 -0700
Subject: [PATCH] new: Implement a brand new task runner. (#1463)

* Start on archives.
* More impl work.
* Add task command.
* Add hashing.
* More cache work.
* Add logs.
* Persist cache.
* Add executor.
* Add console.
* Start on reporter.
* Move handles.
* Add default reporter.
* Implement reporter.
* Hook up reporter.
* Fix output.
* Sync outputs.
* Start on command tests.
* Add more tests.
* Rely on fixtures.
* Start on archive tests.
* Finish archive tests.
* Test scenarios.
* Remove old runner.
* Update lints.
* Hook up reporter.
* Improve reporter.
* Add summary.
* Clean up reporter.
* Clean up attempt usage.
* Fix issues.
* More fixes.
* More fixes.
* More fixes.
* Add exec tests.
* Start on new utils.
* Clean up more tests.
* Rework args.
* Start on runner tests.
* Add more tests.
* Test skip and archive.
* Rework archive.
* Rename things.
* Rework run.
* Rename attempt to operation.
* Rework op output.
* Fix package.
* Fix some tests.
* Start on run tests.
* Add more tests.
* Fix some tests.
* Fix webhooks.
* Fixes.
* Fix windows.
* Fix moonbase.
---
 .yarn/versions/d3fb7bae.yml | 2 +-
 Cargo.lock | 92 +-
 Cargo.toml | 4 +
 crates/bun/lang/Cargo.toml | 3 +
 crates/bun/platform/Cargo.toml | 3 +
 crates/bun/tool/Cargo.toml | 3 +
 crates/cli/Cargo.toml | 6 +-
 crates/cli/src/commands/check.rs | 8 +
 crates/cli/src/commands/ci.rs | 23 +-
 crates/cli/src/commands/run.rs | 12 +-
 crates/cli/src/systems/startup.rs | 6 +-
 crates/cli/tests/run_bun_test.rs | 4 +-
 crates/cli/tests/run_system_test.rs | 26 +-
 crates/cli/tests/run_test.rs | 63 +-
 crates/cli/tests/run_webhooks_test.rs | 6 +-
 ...un__handles_process_exit_code_nonzero.snap | 6 +-
 ...st__bun__handles_process_exit_nonzero.snap | 6 +-
 ..._test__bun__handles_unhandled_promise.snap | 6 +-
 ...no__handles_process_exit_code_nonzero.snap | 6 +-
 ...test__deno__handles_unhandled_promise.snap | 6 +-
 ...st__handles_process_exit_code_nonzero.snap | 6 +-
 ...de_test__handles_process_exit_nonzero.snap | 6 +-
 ..._node_test__handles_unhandled_promise.snap | 6 +-
 ...st_test__handles_process_exit_nonzero.snap | 6 +-
 ...hing__uses_cache_on_subsequent_runs-2.snap | 2 +-
 ...t__unix__handles_process_exit_nonzero.snap | 6 +-
 ...__handles_process_exit_nonzero_inline.snap | 6 +-
 ...__unix__retries_on_failure_till_count.snap | 17 +-
 ...hing__uses_cache_on_subsequent_runs-2.snap | 2 +-
 ...windows__handles_process_exit_nonzero.snap | 6 +-
 ...indows__retries_on_failure_till_count.snap | 17 +-
 ..._dependencies__can_run_deps_in_serial.snap | 8 +-
 ...ges_primary_hash_if_deps_hash_changes.snap | 8 +-
 ...nerates_unique_hashes_for_each_target.snap | 4 +-
 ...dependencies__runs_the_graph_in_order.snap | 8 +-
 ...runs_the_graph_in_order_not_from_head.snap | 8 +-
 .../snapshots/run_test__noop__runs_noop.snap | 8 +-
 .../run_test__output_styles__buffer.snap | 6 +-
 ...tyles__buffer_on_failure_when_failure.snap | 10 +-
 .../run_test__output_styles__hash.snap | 2 +-
 ...ion__reuses_cache_from_previous_run-2.snap | 5 +-
 crates/core/action-pipeline/Cargo.toml | 5 +-
 .../action-pipeline/src/actions/run_task.rs | 122 +-
 crates/core/action-pipeline/src/lib.rs | 18 -
 crates/core/action-pipeline/src/pipeline.rs | 284 ++--
 crates/core/action-pipeline/src/processor.rs | 14 +-
 .../src/subscribers/local_cache.rs | 76 +-
 .../src/subscribers/moonbase.rs | 174 +--
 crates/core/actions/Cargo.toml | 3 +
 crates/core/emitter/Cargo.toml | 3 +
 crates/core/emitter/src/event.rs | 44 +-
 crates/core/lang/Cargo.toml | 3 +
 crates/core/logger/Cargo.toml | 3 +
 crates/core/moon/Cargo.toml | 3 +
 crates/core/notifier/Cargo.toml | 3 +
 crates/core/platform/Cargo.toml | 3 +
 crates/core/runner/Cargo.toml | 41 -
 crates/core/runner/src/lib.rs | 7 -
 crates/core/runner/src/run_state.rs | 146 ---
 crates/core/runner/src/runner.rs | 1015 ---------------
 crates/core/runner/tests/runner_test.rs | 96 --
 crates/core/test-utils/Cargo.toml | 3 +
 crates/core/test-utils/src/cli.rs | 2 +-
 crates/core/tool/Cargo.toml | 3 +
 crates/core/utils/Cargo.toml | 3 +
 crates/deno/lang/Cargo.toml | 3 +
 crates/deno/platform/Cargo.toml | 3 +
 crates/deno/tool/Cargo.toml | 3 +
 crates/javascript/platform/Cargo.toml | 3 +
 crates/node/lang/Cargo.toml | 3 +
 crates/node/platform/Cargo.toml | 3 +
 crates/node/tool/Cargo.toml | 3 +
 crates/rust/lang/Cargo.toml | 3 +
 crates/rust/platform/Cargo.toml | 3 +
 crates/rust/tool/Cargo.toml | 3 +
 crates/system/platform/Cargo.toml | 3 +
 crates/typescript/lang/Cargo.toml | 3 +
 crates/typescript/platform/Cargo.toml | 3 +
 nextgen/action-context/Cargo.toml | 3 +
 nextgen/action-context/src/lib.rs | 22 +-
 nextgen/action-graph/Cargo.toml | 3 +
 nextgen/action-graph/src/action_graph.rs | 4 +
 nextgen/action/Cargo.toml | 3 +
 nextgen/action/src/action.rs | 43 +-
 nextgen/action/src/attempt.rs | 60 -
 nextgen/action/src/lib.rs | 6 +-
 nextgen/action/src/operation.rs | 165 +++
 nextgen/action/src/operation_list.rs | 94 ++
 nextgen/api/Cargo.toml | 4 +
 nextgen/api/src/moonbase/mod.rs | 141 ++-
 nextgen/app-components/Cargo.toml | 3 +
 nextgen/app/Cargo.toml | 3 +
 nextgen/args/Cargo.toml | 3 +
 nextgen/cache-item/Cargo.toml | 3 +
 nextgen/cache/Cargo.toml | 3 +
 nextgen/codegen/Cargo.toml | 3 +
 nextgen/codeowners/Cargo.toml | 3 +
 nextgen/common/Cargo.toml | 3 +
 nextgen/common/src/env.rs | 4 +-
 nextgen/config/Cargo.toml | 3 +
 nextgen/config/src/toolchain_config.rs | 53 +
 nextgen/console-reporter/Cargo.toml | 21 +
 .../console-reporter/src/default_reporter.rs | 448 +++++++
 nextgen/console-reporter/src/lib.rs | 3 +
 nextgen/console/Cargo.toml | 6 +
 nextgen/console/src/buffer.rs | 200 +++
 nextgen/console/src/console.rs | 270 +---
 nextgen/console/src/lib.rs | 4 +
 nextgen/console/src/printer.rs | 3 +-
 nextgen/console/src/reporter.rs | 98 ++
 nextgen/env/Cargo.toml | 3 +
 nextgen/extension-plugin/Cargo.toml | 3 +
 nextgen/file-group/Cargo.toml | 3 +
 nextgen/hash/Cargo.toml | 3 +
 nextgen/pdk-api/Cargo.toml | 3 +
 nextgen/pdk-test-utils/Cargo.toml | 3 +
 nextgen/pdk/Cargo.toml | 3 +
 nextgen/pipeline/Cargo.toml | 3 +
 nextgen/platform-detector/Cargo.toml | 3 +
 nextgen/platform-plugin/Cargo.toml | 3 +
 nextgen/platform-runtime/Cargo.toml | 3 +
 nextgen/plugin/Cargo.toml | 3 +
 nextgen/process/Cargo.toml | 3 +
 nextgen/project-builder/Cargo.toml | 3 +
 nextgen/project-constraints/Cargo.toml | 3 +
 nextgen/project-expander/Cargo.toml | 3 +
 nextgen/project-graph/Cargo.toml | 3 +
 nextgen/project/Cargo.toml | 3 +
 nextgen/query/Cargo.toml | 3 +
 nextgen/target/Cargo.toml | 3 +
 nextgen/task-builder/Cargo.toml | 3 +
 nextgen/task-hasher/Cargo.toml | 3 +
 nextgen/task-hasher/src/task_hash.rs | 6 +-
 .../tests/__fixtures__/inputs/moon.yml | 10 +
 nextgen/task-hasher/tests/task_hasher_test.rs | 51 +
 nextgen/task-runner/Cargo.toml | 43 +
 nextgen/task-runner/src/command_builder.rs | 262 ++++
 nextgen/task-runner/src/command_executor.rs | 310 +++++
 nextgen/task-runner/src/lib.rs | 12 +
 nextgen/task-runner/src/output_archiver.rs | 188 +++
 nextgen/task-runner/src/output_hydrater.rs | 110 ++
 nextgen/task-runner/src/run_state.rs | 10 +
 nextgen/task-runner/src/task_runner.rs | 651 ++++++++++
 .../task-runner/src/task_runner_error.rs | 22 +-
.../__fixtures__/archive/.moon/workspace.yml | 2 + .../__fixtures__/archive/project/moon.yml | 89 ++ .../__fixtures__/builder/.moon/workspace.yml | 2 + .../__fixtures__/builder/project/input.txt | 0 .../__fixtures__/builder/project/moon.yml | 9 + .../__fixtures__/runner/.moon/workspace.yml | 4 + .../__fixtures__/runner/project/moon.yml | 24 + .../tests/__fixtures__/runner/unix/moon.yml | 52 + .../__fixtures__/runner/windows/moon.yml | 52 + .../task-runner/tests/command_builder_test.rs | 371 ++++++ .../tests/command_executor_test.rs | 102 ++ .../task-runner/tests/output_archiver_test.rs | 534 ++++++++ .../task-runner/tests/output_hydrater_test.rs | 110 ++ nextgen/task-runner/tests/task_runner_test.rs | 1096 +++++++++++++++++ nextgen/task-runner/tests/utils.rs | 187 +++ nextgen/task/Cargo.toml | 3 + nextgen/task/src/task.rs | 27 +- nextgen/test-utils/Cargo.toml | 5 +- nextgen/time/Cargo.toml | 3 + nextgen/time/src/lib.rs | 22 +- nextgen/vcs-hooks/Cargo.toml | 3 + nextgen/vcs/Cargo.toml | 3 + nextgen/vcs/src/git.rs | 10 +- nextgen/vcs/src/process_cache.rs | 12 + nextgen/workspace/Cargo.toml | 3 + packages/cli/CHANGELOG.md | 25 + packages/report/src/action.ts | 11 +- packages/report/tests/action.test.ts | 33 +- packages/report/tests/report.test.ts | 8 +- packages/types/src/events.ts | 67 +- packages/types/src/pipeline.ts | 32 +- rust-toolchain.toml | 2 +- website/docs/concepts/cache.mdx | 2 +- website/docs/guides/webhooks.mdx | 152 --- 178 files changed, 6226 insertions(+), 2808 deletions(-) delete mode 100644 crates/core/runner/Cargo.toml delete mode 100644 crates/core/runner/src/lib.rs delete mode 100644 crates/core/runner/src/run_state.rs delete mode 100644 crates/core/runner/src/runner.rs delete mode 100644 crates/core/runner/tests/runner_test.rs delete mode 100644 nextgen/action/src/attempt.rs create mode 100644 nextgen/action/src/operation.rs create mode 100644 nextgen/action/src/operation_list.rs create mode 100644 nextgen/console-reporter/Cargo.toml create mode 
100644 nextgen/console-reporter/src/default_reporter.rs create mode 100644 nextgen/console-reporter/src/lib.rs create mode 100644 nextgen/console/src/buffer.rs create mode 100644 nextgen/console/src/reporter.rs create mode 100644 nextgen/task-runner/Cargo.toml create mode 100644 nextgen/task-runner/src/command_builder.rs create mode 100644 nextgen/task-runner/src/command_executor.rs create mode 100644 nextgen/task-runner/src/lib.rs create mode 100644 nextgen/task-runner/src/output_archiver.rs create mode 100644 nextgen/task-runner/src/output_hydrater.rs create mode 100644 nextgen/task-runner/src/run_state.rs create mode 100644 nextgen/task-runner/src/task_runner.rs rename crates/core/runner/src/errors.rs => nextgen/task-runner/src/task_runner_error.rs (52%) create mode 100644 nextgen/task-runner/tests/__fixtures__/archive/.moon/workspace.yml create mode 100644 nextgen/task-runner/tests/__fixtures__/archive/project/moon.yml create mode 100644 nextgen/task-runner/tests/__fixtures__/builder/.moon/workspace.yml create mode 100644 nextgen/task-runner/tests/__fixtures__/builder/project/input.txt create mode 100644 nextgen/task-runner/tests/__fixtures__/builder/project/moon.yml create mode 100644 nextgen/task-runner/tests/__fixtures__/runner/.moon/workspace.yml create mode 100644 nextgen/task-runner/tests/__fixtures__/runner/project/moon.yml create mode 100644 nextgen/task-runner/tests/__fixtures__/runner/unix/moon.yml create mode 100644 nextgen/task-runner/tests/__fixtures__/runner/windows/moon.yml create mode 100644 nextgen/task-runner/tests/command_builder_test.rs create mode 100644 nextgen/task-runner/tests/command_executor_test.rs create mode 100644 nextgen/task-runner/tests/output_archiver_test.rs create mode 100644 nextgen/task-runner/tests/output_hydrater_test.rs create mode 100644 nextgen/task-runner/tests/task_runner_test.rs create mode 100644 nextgen/task-runner/tests/utils.rs diff --git a/.yarn/versions/d3fb7bae.yml b/.yarn/versions/d3fb7bae.yml index 
c676bb95f79..b2630cb3904 100644 --- a/.yarn/versions/d3fb7bae.yml +++ b/.yarn/versions/d3fb7bae.yml @@ -7,10 +7,10 @@ releases: '@moonrepo/core-macos-arm64': minor '@moonrepo/core-macos-x64': minor '@moonrepo/core-windows-x64-msvc': minor + '@moonrepo/report': patch '@moonrepo/types': patch declined: - '@moonrepo/nx-compat' - - '@moonrepo/report' - '@moonrepo/runtime' - website diff --git a/Cargo.lock b/Cargo.lock index e0f0162df92..edde361e559 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -3116,8 +3116,8 @@ dependencies = [ "moon_process", "moon_project", "moon_project_graph", - "moon_runner", "moon_target", + "moon_task_runner", "moon_test_utils", "moon_tool", "moon_utils", @@ -3164,6 +3164,7 @@ dependencies = [ "moon_time", "proto_core", "reqwest 0.12.3", + "rustc-hash", "semver", "serde", "starbase_utils", @@ -3342,6 +3343,7 @@ dependencies = [ "moon_common", "moon_config", "moon_console", + "moon_console_reporter", "moon_deno_lang", "moon_deno_tool", "moon_env", @@ -3357,12 +3359,12 @@ dependencies = [ "moon_project", "moon_project_graph", "moon_query", - "moon_runner", "moon_rust_lang", "moon_rust_tool", "moon_system_platform", "moon_target", "moon_task", + "moon_task_runner", "moon_test_utils", "moon_tool", "moon_typescript_lang", @@ -3473,12 +3475,28 @@ version = "0.0.1" dependencies = [ "inquire", "miette", + "moon_action", "moon_common", + "moon_config", + "moon_target", "parking_lot", "starbase", "starbase_styles", ] +[[package]] +name = "moon_console_reporter" +version = "0.0.1" +dependencies = [ + "miette", + "moon_action", + "moon_common", + "moon_config", + "moon_console", + "moon_target", + "moon_time", +] + [[package]] name = "moon_deno_lang" version = "0.0.1" @@ -4000,44 +4018,6 @@ dependencies = [ "thiserror", ] -[[package]] -name = "moon_runner" -version = "0.0.1" -dependencies = [ - "console", - "miette", - "moon", - "moon_action", - "moon_action_context", - "moon_cache_item", - "moon_common", - "moon_config", - "moon_console", - "moon_emitter", 
- "moon_hash", - "moon_logger", - "moon_platform", - "moon_platform_runtime", - "moon_process", - "moon_project", - "moon_target", - "moon_task", - "moon_task_hasher", - "moon_test_utils", - "moon_tool", - "moon_utils", - "moon_vcs", - "moon_workspace", - "rustc-hash", - "serde", - "serde_json", - "starbase_archive", - "starbase_styles", - "starbase_utils", - "thiserror", - "tokio", -] - [[package]] name = "moon_rust_lang" version = "0.0.1" @@ -4190,6 +4170,38 @@ dependencies = [ "tracing", ] +[[package]] +name = "moon_task_runner" +version = "0.0.1" +dependencies = [ + "miette", + "moon_action", + "moon_action_context", + "moon_api", + "moon_cache", + "moon_cache_item", + "moon_common", + "moon_config", + "moon_console", + "moon_platform", + "moon_platform_runtime", + "moon_process", + "moon_project", + "moon_task", + "moon_task_hasher", + "moon_test_utils2", + "moon_time", + "moon_workspace", + "proto_core", + "serde", + "starbase_archive", + "starbase_sandbox", + "starbase_utils", + "thiserror", + "tokio", + "tracing", +] + [[package]] name = "moon_test_utils" version = "0.0.1" diff --git a/Cargo.toml b/Cargo.toml index 4aabeedee60..7d566dac112 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -108,3 +108,7 @@ warpgate_pdk = "0.3.1" # warpgate = { path = "../proto/crates/warpgate" } # warpgate_api = { path = "../proto/crates/warpgate-api" } # warpgate_pdk = { path = "../proto/crates/warpgate-pdk" } + +[workspace.lints.clippy] +# Very noisy, lots of false positives! 
+assigning_clones = "allow" diff --git a/crates/bun/lang/Cargo.toml b/crates/bun/lang/Cargo.toml index a7f91a39256..811d4df0ab3 100644 --- a/crates/bun/lang/Cargo.toml +++ b/crates/bun/lang/Cargo.toml @@ -10,3 +10,6 @@ cached = { workspace = true } miette = { workspace = true } rustc-hash = { workspace = true } yarn-lock-parser = "0.7.0" + +[lints] +workspace = true diff --git a/crates/bun/platform/Cargo.toml b/crates/bun/platform/Cargo.toml index 95fa19bca1a..a0a87e2fcb2 100644 --- a/crates/bun/platform/Cargo.toml +++ b/crates/bun/platform/Cargo.toml @@ -37,3 +37,6 @@ tokio = { workspace = true } moon = { path = "../../core/moon" } moon_project_graph = { path = "../../../nextgen/project-graph" } moon_test_utils = { path = "../../core/test-utils" } + +[lints] +workspace = true diff --git a/crates/bun/tool/Cargo.toml b/crates/bun/tool/Cargo.toml index 4d857702485..58b6c3e7c5d 100644 --- a/crates/bun/tool/Cargo.toml +++ b/crates/bun/tool/Cargo.toml @@ -17,3 +17,6 @@ miette = { workspace = true } proto_core = { workspace = true } rustc-hash = { workspace = true } starbase_utils = { workspace = true } + +[lints] +workspace = true diff --git a/crates/cli/Cargo.toml b/crates/cli/Cargo.toml index 04aa30d36d7..94699e98bab 100644 --- a/crates/cli/Cargo.toml +++ b/crates/cli/Cargo.toml @@ -28,6 +28,7 @@ moon_codegen = { path = "../../nextgen/codegen" } moon_common = { path = "../../nextgen/common" } moon_config = { path = "../../nextgen/config", features = ["template"] } moon_console = { path = "../../nextgen/console" } +moon_console_reporter = { path = "../../nextgen/console-reporter" } moon_deno_lang = { path = "../deno/lang" } moon_deno_tool = { path = "../deno/tool" } moon_env = { path = "../../nextgen/env" } @@ -95,8 +96,11 @@ tracing = { workspace = true } [dev-dependencies] moon_cache = { path = "../../nextgen/cache" } moon_notifier = { path = "../core/notifier" } -moon_runner = { path = "../core/runner" } +moon_task_runner = { path = "../../nextgen/task-runner" } 
moon_test_utils = { path = "../core/test-utils" } httpmock = "0.7.0" serial_test = "3.0.0" starbase_archive = { workspace = true } + +[lints] +workspace = true diff --git a/crates/cli/src/commands/check.rs b/crates/cli/src/commands/check.rs index 63c896f2fee..5b176efffb9 100644 --- a/crates/cli/src/commands/check.rs +++ b/crates/cli/src/commands/check.rs @@ -21,6 +21,13 @@ pub struct CheckArgs { #[clap(group = "projects")] all: bool, + #[arg( + long, + short = 's', + help = "Include a summary of all actions that were processed in the pipeline" + )] + pub summary: bool, + #[arg( short = 'u', long = "updateCache", @@ -77,6 +84,7 @@ pub async fn check( run_target( &targets, &RunArgs { + summary: args.summary, update_cache: args.update_cache, ..RunArgs::default() }, diff --git a/crates/cli/src/commands/ci.rs b/crates/cli/src/commands/ci.rs index 70f4ed941ed..570e0124af8 100644 --- a/crates/cli/src/commands/ci.rs +++ b/crates/cli/src/commands/ci.rs @@ -287,6 +287,7 @@ pub async fn ci(args: ArgsRef, global_args: StateRef, resour } let results = pipeline + .summarize(true) .generate_report("ciReport.json") .run( action_graph, @@ -297,21 +298,13 @@ pub async fn ci(args: ArgsRef, global_args: StateRef, resour console.print_footer()?; - // Print out a summary of any failures - console.print_header("Summary")?; - - pipeline.render_summary(&results, console.inner)?; - - console.print_footer()?; - - // Print out the results and exit if an error occurs - console.print_header("Stats")?; - - let failed = pipeline.render_results(&results, console.inner)?; - - pipeline.render_stats(&results, console.inner, false)?; - - console.print_footer()?; + let failed = results.iter().any(|result| { + if result.has_failed() { + !result.allow_failure + } else { + false + } + }); if failed { return Err(ExitCode(1).into()); diff --git a/crates/cli/src/commands/run.rs b/crates/cli/src/commands/run.rs index 0f33c527043..ec3a068c7b8 100644 --- a/crates/cli/src/commands/run.rs +++ 
b/crates/cli/src/commands/run.rs @@ -44,6 +44,13 @@ pub struct RunArgs { #[arg(long, help = "Focus target(s) based on the result of a query")] pub query: Option, + #[arg( + long, + short = 's', + help = "Include a summary of all actions that were processed in the pipeline" + )] + pub summary: bool, + #[arg( short = 'u', long = "updateCache", @@ -218,14 +225,13 @@ pub async fn run_target( pipeline.concurrency(concurrency); } - let results = pipeline + pipeline .bail_on_error() + .summarize(args.summary) .generate_report("runReport.json") .run(action_graph, Arc::new(console.to_owned()), Some(context)) .await?; - pipeline.render_stats(&results, console, true)?; - Ok(()) } diff --git a/crates/cli/src/systems/startup.rs b/crates/cli/src/systems/startup.rs index 8690fd43388..bfe65617b32 100644 --- a/crates/cli/src/systems/startup.rs +++ b/crates/cli/src/systems/startup.rs @@ -2,6 +2,7 @@ use crate::app::GlobalArgs; use moon_app_components::{Console, ExtensionRegistry, MoonEnv, ProtoEnv, WorkspaceRoot}; use moon_common::{consts::PROTO_CLI_VERSION, is_test_env, path::exe_name}; use moon_console::Checkpoint; +use moon_console_reporter::DefaultReporter; use moon_env::MoonEnvironment; use moon_platform_plugin::PlatformRegistry; use moon_plugin::{PluginRegistry, PluginType}; @@ -23,7 +24,10 @@ pub async fn load_environments(states: StatesMut, resources: ResourcesMut) { states.set(MoonEnv(Arc::new(MoonEnvironment::new()?))); states.set(ProtoEnv(Arc::new(ProtoEnvironment::new()?))); - resources.set(Console::new(quiet)); + let mut console = Console::new(quiet); + console.set_reporter(DefaultReporter::default()); + + resources.set(console); } #[system] diff --git a/crates/cli/tests/run_bun_test.rs b/crates/cli/tests/run_bun_test.rs index f2a76471b14..92f64c31710 100644 --- a/crates/cli/tests/run_bun_test.rs +++ b/crates/cli/tests/run_bun_test.rs @@ -337,11 +337,11 @@ mod bun { } } + // Need multiple windows versions for this to work + #[cfg(not(windows))] mod workspace_overrides { 
use super::*; - // Need multiple windows versions for this to work - #[cfg(not(windows))] #[test] fn can_override_version() { let sandbox = bun_sandbox(); diff --git a/crates/cli/tests/run_system_test.rs b/crates/cli/tests/run_system_test.rs index c3bca56e4e2..22ab82e7069 100644 --- a/crates/cli/tests/run_system_test.rs +++ b/crates/cli/tests/run_system_test.rs @@ -1,13 +1,11 @@ -use itertools::Itertools; use moon_common::Id; use moon_config::{PartialInheritedTasksConfig, PartialWorkspaceConfig, PartialWorkspaceProjects}; -use moon_runner::RunTargetState; +use moon_task_runner::TaskRunState; use moon_test_utils::{ assert_snapshot, create_sandbox_with_config, predicates::prelude::*, Sandbox, }; use rustc_hash::FxHashMap; use starbase_utils::json; -use std::fs; fn system_sandbox() -> Sandbox { let workspace_config = PartialWorkspaceConfig { @@ -34,6 +32,8 @@ fn system_sandbox() -> Sandbox { #[cfg(not(windows))] mod unix { use super::*; + use itertools::Itertools; + use std::fs; #[test] fn handles_echo() { @@ -312,21 +312,13 @@ mod unix { assert!(cache_path.exists()); - let state: RunTargetState = json::read_file(cache_path).unwrap(); + let state: TaskRunState = json::read_file(cache_path).unwrap(); assert!(sandbox .path() .join(".moon/cache/outputs") .join(format!("{}.tar.gz", state.hash)) .exists()); - assert!(sandbox - .path() - .join(".moon/cache/states/unix/outputs/stdout.log") - .exists()); - assert!(sandbox - .path() - .join(".moon/cache/states/unix/outputs/stderr.log") - .exists()); } } @@ -618,21 +610,13 @@ mod windows { assert!(cache_path.exists()); - let state: RunTargetState = json::read_file(cache_path).unwrap(); + let state: TaskRunState = json::read_file(cache_path).unwrap(); assert!(sandbox .path() .join(".moon/cache/outputs") .join(format!("{}.tar.gz", state.hash)) .exists()); - assert!(sandbox - .path() - .join(".moon/cache/states/windows/outputs/stdout.log") - .exists()); - assert!(sandbox - .path() - 
.join(".moon/cache/states/windows/outputs/stderr.log") - .exists()); } } } diff --git a/crates/cli/tests/run_test.rs b/crates/cli/tests/run_test.rs index 229642aaa4e..d4ec8fdac4a 100644 --- a/crates/cli/tests/run_test.rs +++ b/crates/cli/tests/run_test.rs @@ -3,8 +3,8 @@ use moon_config::{ HasherWalkStrategy, PartialCodeownersConfig, PartialHasherConfig, PartialRunnerConfig, PartialVcsConfig, PartialWorkspaceConfig, VcsProvider, }; -use moon_runner::RunTargetState; use moon_target::Target; +use moon_task_runner::TaskRunState; use moon_test_utils::{ assert_debug_snapshot, assert_snapshot, create_sandbox_with_config, get_cases_fixture_configs, predicates::{self, prelude::*}, @@ -44,7 +44,7 @@ where fn extract_hash_from_run(fixture: &Path, target_id: &str) -> String { let engine = CacheEngine::new(fixture).unwrap(); - let cache: RunTargetState = json::read_file( + let cache: TaskRunState = json::read_file( engine .state .states_dir @@ -219,8 +219,6 @@ fn runs_task_with_a_mutex_in_sequence() { let stop = start.elapsed(); - dbg!(&start, &stop); - assert!(stop.as_millis() > 3000); } @@ -539,6 +537,7 @@ mod target_scopes { let assert = sandbox.run_moon(|cmd| { cmd.arg("run").arg("targetScopeA:deps"); }); + let output = assert.output(); assert!(predicate::str::contains("targetScopeA:deps").eval(&output)); @@ -723,11 +722,11 @@ mod hashing { // Hashes change because `.moon/workspace.yml` is different from `walk_strategy` assert_eq!( hash_vcs, - "4f65a90e3f44c850eda3e7dd64f801d743a8bef29a6fcc5231369e55cfa43ee9" + "e48c523b3efef26cfb935135d3476b78aefe19d5f955be85373926b408237e8b" ); assert_eq!( hash_glob, - "c0a7c576081e08902e8bdf4b191ca6621be4274f808081d78dc77619df058f4a" + "fb15a246e64607ae039118df9790042424dc8f0ef64295df41a582f6716e66dd" ); } } @@ -753,7 +752,7 @@ mod outputs { let output = assert.output(); assert!( - predicate::str::contains("Target outputs:missingOutput defines outputs").eval(&output) + predicate::str::contains("Task outputs:missingOutput defines 
outputs").eval(&output) ); } @@ -769,7 +768,7 @@ mod outputs { let output = assert.output(); assert!( - predicate::str::contains("Target outputs:missingOutputGlob defines outputs") + predicate::str::contains("Task outputs:missingOutputGlob defines outputs") .eval(&output) ); } @@ -962,8 +961,6 @@ mod outputs { untar(&tarball, &dir); - assert!(dir.join("stdout.log").exists()); - assert!(dir.join("stderr.log").exists()); assert!(dir.join("outputs/both/a/one.js").exists()); assert!(dir.join("outputs/both/b/two.js").exists()); } @@ -986,8 +983,6 @@ mod outputs { untar(&tarball, &dir); - assert!(dir.join("stdout.log").exists()); - assert!(dir.join("stderr.log").exists()); assert!(dir.join("both/a/one.js").exists()); assert!(dir.join("both/b/two.js").exists()); } @@ -1014,28 +1009,6 @@ mod outputs { assert!(!dir.join("outputs/both/b/two.js").exists()); } - #[test] - fn caches_output_logs_in_tarball() { - let sandbox = cases_sandbox(); - sandbox.enable_git(); - - sandbox.run_moon(|cmd| { - cmd.arg("run").arg("outputs:generateFile"); - }); - - let hash = extract_hash_from_run(sandbox.path(), "outputs:generateFile"); - let tarball = sandbox - .path() - .join(".moon/cache/outputs") - .join(format!("{hash}.tar.gz")); - let dir = sandbox.path().join(".moon/cache/outputs").join(hash); - - untar(&tarball, &dir); - - assert!(dir.join("stdout.log").exists()); - assert!(dir.join("stderr.log").exists()); - } - #[test] fn can_bypass_cache() { let sandbox = cases_sandbox(); @@ -1049,13 +1022,13 @@ mod outputs { cmd.arg("run").arg("outputs:generateFixed"); }); - assert!(predicate::str::contains("cached from previous run").eval(&assert.output())); + assert!(predicate::str::contains("cached").eval(&assert.output())); let assert = sandbox.run_moon(|cmd| { cmd.arg("run").arg("outputs:generateFixed").arg("-u"); }); - assert!(!predicate::str::contains("cached from previous run").eval(&assert.output())); + assert!(!predicate::str::contains("cached").eval(&assert.output())); } mod hydration { 
@@ -1413,24 +1386,6 @@ mod noop { assert_snapshot!(assert.output()); } - - #[test] - fn caches_noop() { - let sandbox = cases_sandbox(); - sandbox.enable_git(); - - sandbox.run_moon(|cmd| { - cmd.arg("run").arg("noop:noop"); - }); - - let hash = extract_hash_from_run(sandbox.path(), "noop:noop"); - - assert!(sandbox - .path() - .join(".moon/cache/hashes") - .join(format!("{hash}.json")) - .exists()); - } } mod root_level { diff --git a/crates/cli/tests/run_webhooks_test.rs b/crates/cli/tests/run_webhooks_test.rs index 837082ea691..926c7448ed6 100644 --- a/crates/cli/tests/run_webhooks_test.rs +++ b/crates/cli/tests/run_webhooks_test.rs @@ -35,7 +35,7 @@ async fn sends_webhooks() { cmd.arg("run").arg("node:cjs"); }); - mock.assert_hits(23); + mock.assert_hits(22); assert.success(); } @@ -60,7 +60,7 @@ async fn sends_webhooks_for_cache_events() { cmd.arg("run").arg("node:cjs"); }); - mock.assert_hits(45); + mock.assert_hits(44); assert.success(); } @@ -101,5 +101,5 @@ async fn all_webhooks_have_same_uuid() { cmd.arg("run").arg("node:cjs"); }); - mock.assert_hits(23); + mock.assert_hits(22); } diff --git a/crates/cli/tests/snapshots/run_bun_test__bun__handles_process_exit_code_nonzero.snap b/crates/cli/tests/snapshots/run_bun_test__bun__handles_process_exit_code_nonzero.snap index 473f4e0d6ad..a892fd9b4a0 100644 --- a/crates/cli/tests/snapshots/run_bun_test__bun__handles_process_exit_code_nonzero.snap +++ b/crates/cli/tests/snapshots/run_bun_test__bun__handles_process_exit_code_nonzero.snap @@ -10,9 +10,5 @@ This should appear! stderr Error: task_runner::run_failed - × Task bun:exitCodeNonZero failed to run. View hash details with moon query - │ hash hash1234. + × Task bun:exitCodeNonZero failed to run. ╰─▶ Process bun failed with a 1 exit code. 
- - - diff --git a/crates/cli/tests/snapshots/run_bun_test__bun__handles_process_exit_nonzero.snap b/crates/cli/tests/snapshots/run_bun_test__bun__handles_process_exit_nonzero.snap index e1e72acc521..7d144990645 100644 --- a/crates/cli/tests/snapshots/run_bun_test__bun__handles_process_exit_nonzero.snap +++ b/crates/cli/tests/snapshots/run_bun_test__bun__handles_process_exit_nonzero.snap @@ -9,9 +9,5 @@ stdout stderr Error: task_runner::run_failed - × Task bun:processExitNonZero failed to run. View hash details with moon - │ query hash hash1234. + × Task bun:processExitNonZero failed to run. ╰─▶ Process bun failed with a 1 exit code. - - - diff --git a/crates/cli/tests/snapshots/run_bun_test__bun__handles_unhandled_promise.snap b/crates/cli/tests/snapshots/run_bun_test__bun__handles_unhandled_promise.snap index 333f09338ab..e95999aa8a3 100644 --- a/crates/cli/tests/snapshots/run_bun_test__bun__handles_unhandled_promise.snap +++ b/crates/cli/tests/snapshots/run_bun_test__bun__handles_unhandled_promise.snap @@ -10,9 +10,5 @@ stderr error: Oops Error: task_runner::run_failed - × Task bun:unhandledPromise failed to run. View hash details with moon query - │ hash hash1234. + × Task bun:unhandledPromise failed to run. ╰─▶ Process bun failed with a 1 exit code. - - - diff --git a/crates/cli/tests/snapshots/run_deno_test__deno__handles_process_exit_code_nonzero.snap b/crates/cli/tests/snapshots/run_deno_test__deno__handles_process_exit_code_nonzero.snap index 3b6faec1b01..aa71727c90a 100644 --- a/crates/cli/tests/snapshots/run_deno_test__deno__handles_process_exit_code_nonzero.snap +++ b/crates/cli/tests/snapshots/run_deno_test__deno__handles_process_exit_code_nonzero.snap @@ -8,9 +8,5 @@ stdout stderr Error: task_runner::run_failed - × Task deno:exitCodeNonZero failed to run. View hash details with moon query - │ hash hash1234. + × Task deno:exitCodeNonZero failed to run. ╰─▶ Process deno failed with a 1 exit code. 
- - - diff --git a/crates/cli/tests/snapshots/run_deno_test__deno__handles_unhandled_promise.snap b/crates/cli/tests/snapshots/run_deno_test__deno__handles_unhandled_promise.snap index 548d1f7297a..5d23baa4ade 100644 --- a/crates/cli/tests/snapshots/run_deno_test__deno__handles_unhandled_promise.snap +++ b/crates/cli/tests/snapshots/run_deno_test__deno__handles_unhandled_promise.snap @@ -9,9 +9,5 @@ stderr error: Uncaught (in promise) "Oops" Error: task_runner::run_failed - × Task deno:unhandledPromise failed to run. View hash details with moon - │ query hash hash1234. + × Task deno:unhandledPromise failed to run. ╰─▶ Process deno failed with a 1 exit code. - - - diff --git a/crates/cli/tests/snapshots/run_node_test__handles_process_exit_code_nonzero.snap b/crates/cli/tests/snapshots/run_node_test__handles_process_exit_code_nonzero.snap index e1bbb133bd2..4acd28850be 100644 --- a/crates/cli/tests/snapshots/run_node_test__handles_process_exit_code_nonzero.snap +++ b/crates/cli/tests/snapshots/run_node_test__handles_process_exit_code_nonzero.snap @@ -10,9 +10,5 @@ This should appear! stderr Error: task_runner::run_failed - × Task node:exitCodeNonZero failed to run. View hash details with moon query - │ hash hash1234. + × Task node:exitCodeNonZero failed to run. ╰─▶ Process node failed with a 1 exit code. - - - diff --git a/crates/cli/tests/snapshots/run_node_test__handles_process_exit_nonzero.snap b/crates/cli/tests/snapshots/run_node_test__handles_process_exit_nonzero.snap index 15e6e542452..bc6d878c565 100644 --- a/crates/cli/tests/snapshots/run_node_test__handles_process_exit_nonzero.snap +++ b/crates/cli/tests/snapshots/run_node_test__handles_process_exit_nonzero.snap @@ -9,9 +9,5 @@ stdout stderr Error: task_runner::run_failed - × Task node:processExitNonZero failed to run. View hash details with moon - │ query hash hash1234. + × Task node:processExitNonZero failed to run. ╰─▶ Process node failed with a 1 exit code. 
-
-
-
diff --git a/crates/cli/tests/snapshots/run_node_test__handles_unhandled_promise.snap b/crates/cli/tests/snapshots/run_node_test__handles_unhandled_promise.snap
index 140a2ff9a4e..5280c356af7 100644
--- a/crates/cli/tests/snapshots/run_node_test__handles_unhandled_promise.snap
+++ b/crates/cli/tests/snapshots/run_node_test__handles_unhandled_promise.snap
@@ -16,9 +16,5 @@ node:internal/process/promises:288
 Node.js v18.0.0
 
 Error: task_runner::run_failed
-  × Task node:unhandledPromise failed to run. View hash details with moon
-  │ query hash hash1234.
+  × Task node:unhandledPromise failed to run.
   ╰─▶ Process node failed with a 1 exit code.
-
-
-
diff --git a/crates/cli/tests/snapshots/run_rust_test__handles_process_exit_nonzero.snap b/crates/cli/tests/snapshots/run_rust_test__handles_process_exit_nonzero.snap
index 05cb756fb7f..b22771ae657 100644
--- a/crates/cli/tests/snapshots/run_rust_test__handles_process_exit_nonzero.snap
+++ b/crates/cli/tests/snapshots/run_rust_test__handles_process_exit_nonzero.snap
@@ -9,9 +9,5 @@ stdout
 stderr
 
 Error: task_runner::run_failed
-  × Task rust:exitNonZero failed to run. View hash details with moon query
-  │ hash hash1234.
+  × Task rust:exitNonZero failed to run.
   ╰─▶ Process cargo failed with a 1 exit code.
-
-
-
diff --git a/crates/cli/tests/snapshots/run_system_test__unix__caching__uses_cache_on_subsequent_runs-2.snap b/crates/cli/tests/snapshots/run_system_test__unix__caching__uses_cache_on_subsequent_runs-2.snap
index 13e056a09b0..f4e70f1c2e7 100644
--- a/crates/cli/tests/snapshots/run_system_test__unix__caching__uses_cache_on_subsequent_runs-2.snap
+++ b/crates/cli/tests/snapshots/run_system_test__unix__caching__uses_cache_on_subsequent_runs-2.snap
@@ -2,7 +2,7 @@ source: crates/cli/tests/run_system_test.rs
 expression: assert.output()
 ---
-▪▪▪▪ unix:outputs (cached from previous run)
+▪▪▪▪ unix:outputs (cached, 100ms)
 
 Tasks: 1 completed (1 cached)
  Time: 100ms ❯❯❯❯ to the moon
diff --git a/crates/cli/tests/snapshots/run_system_test__unix__handles_process_exit_nonzero.snap b/crates/cli/tests/snapshots/run_system_test__unix__handles_process_exit_nonzero.snap
index 2799381604e..84289f27f95 100644
--- a/crates/cli/tests/snapshots/run_system_test__unix__handles_process_exit_nonzero.snap
+++ b/crates/cli/tests/snapshots/run_system_test__unix__handles_process_exit_nonzero.snap
@@ -8,9 +8,5 @@ stdout
 stderr
 
 Error: task_runner::run_failed
-  × Task unix:exitNonZero failed to run. View hash details with moon query
-  │ hash hash1234.
+  × Task unix:exitNonZero failed to run.
   ╰─▶ Process bash failed with a 1 exit code.
-
-
-
diff --git a/crates/cli/tests/snapshots/run_system_test__unix__handles_process_exit_nonzero_inline.snap b/crates/cli/tests/snapshots/run_system_test__unix__handles_process_exit_nonzero_inline.snap
index 1b49e67faee..df7a3854ac3 100644
--- a/crates/cli/tests/snapshots/run_system_test__unix__handles_process_exit_nonzero_inline.snap
+++ b/crates/cli/tests/snapshots/run_system_test__unix__handles_process_exit_nonzero_inline.snap
@@ -6,9 +6,5 @@ expression: assert.output()
 ▪▪▪▪ unix:exitNonZeroInline (100ms)
 
 Error: task_runner::run_failed
-  × Task unix:exitNonZeroInline failed to run. View hash details with moon
-  │ query hash hash1234.
+  × Task unix:exitNonZeroInline failed to run.
   ╰─▶ Process bash failed with a 2 exit code.
-
-
-
diff --git a/crates/cli/tests/snapshots/run_system_test__unix__retries_on_failure_till_count.snap b/crates/cli/tests/snapshots/run_system_test__unix__retries_on_failure_till_count.snap
index f07e9193abd..9b08aae61db 100644
--- a/crates/cli/tests/snapshots/run_system_test__unix__retries_on_failure_till_count.snap
+++ b/crates/cli/tests/snapshots/run_system_test__unix__retries_on_failure_till_count.snap
@@ -4,25 +4,18 @@ expression: assert.output()
 ---
 ▪▪▪▪ unix:retryCount
 stdout
-▪▪▪▪ unix:retryCount (100ms)
-▪▪▪▪ unix:retryCount (2/4)
+▪▪▪▪ unix:retryCount (attempt 2/4)
 stdout
-▪▪▪▪ unix:retryCount (2/4, 100ms)
-▪▪▪▪ unix:retryCount (3/4)
+▪▪▪▪ unix:retryCount (attempt 3/4)
 stdout
-▪▪▪▪ unix:retryCount (3/4, 100ms)
-▪▪▪▪ unix:retryCount (4/4)
+▪▪▪▪ unix:retryCount (attempt 4/4)
 stdout
-▪▪▪▪ unix:retryCount (4/4, 100ms)
+▪▪▪▪ unix:retryCount (100ms)
 stderr
 stderr
 stderr
 stderr
 
 Error: task_runner::run_failed
-  × Task unix:retryCount failed to run. View hash details with moon query
-  │ hash hash1234.
+  × Task unix:retryCount failed to run.
   ╰─▶ Process bash failed with a 1 exit code.
-
-
-
diff --git a/crates/cli/tests/snapshots/run_system_test__windows__caching__uses_cache_on_subsequent_runs-2.snap b/crates/cli/tests/snapshots/run_system_test__windows__caching__uses_cache_on_subsequent_runs-2.snap
index 212048cc04d..5f77d985a4d 100644
--- a/crates/cli/tests/snapshots/run_system_test__windows__caching__uses_cache_on_subsequent_runs-2.snap
+++ b/crates/cli/tests/snapshots/run_system_test__windows__caching__uses_cache_on_subsequent_runs-2.snap
@@ -2,7 +2,7 @@ source: crates/cli/tests/run_system_test.rs
 expression: assert.output()
 ---
-▪▪▪▪ windows:outputs (cached from previous run)
+▪▪▪▪ windows:outputs (cached, 100ms)
 
 Tasks: 1 completed (1 cached)
  Time: 100ms ❯❯❯❯ to the moon
diff --git a/crates/cli/tests/snapshots/run_system_test__windows__handles_process_exit_nonzero.snap b/crates/cli/tests/snapshots/run_system_test__windows__handles_process_exit_nonzero.snap
index d0072f566bf..ff7aa464b09 100644
--- a/crates/cli/tests/snapshots/run_system_test__windows__handles_process_exit_nonzero.snap
+++ b/crates/cli/tests/snapshots/run_system_test__windows__handles_process_exit_nonzero.snap
@@ -8,9 +8,5 @@ stdout
 stderr
 
 Error: task_runner::run_failed
-  × Task windows:exitNonZero failed to run. View hash details with moon query
-  │ hash hash1234.
+  × Task windows:exitNonZero failed to run.
   ╰─▶ Process cmd.exe failed with a 1 exit code.
-
-
-
diff --git a/crates/cli/tests/snapshots/run_system_test__windows__retries_on_failure_till_count.snap b/crates/cli/tests/snapshots/run_system_test__windows__retries_on_failure_till_count.snap
index cb3d0f58109..9c6d45a286a 100644
--- a/crates/cli/tests/snapshots/run_system_test__windows__retries_on_failure_till_count.snap
+++ b/crates/cli/tests/snapshots/run_system_test__windows__retries_on_failure_till_count.snap
@@ -4,25 +4,18 @@ expression: assert.output()
 ---
 ▪▪▪▪ windows:retryCount
 stdout
-▪▪▪▪ windows:retryCount (100ms)
-▪▪▪▪ windows:retryCount (2/4)
+▪▪▪▪ windows:retryCount (attempt 2/4)
 stdout
-▪▪▪▪ windows:retryCount (2/4, 100ms)
-▪▪▪▪ windows:retryCount (3/4)
+▪▪▪▪ windows:retryCount (attempt 3/4)
 stdout
-▪▪▪▪ windows:retryCount (3/4, 100ms)
-▪▪▪▪ windows:retryCount (4/4)
+▪▪▪▪ windows:retryCount (attempt 4/4)
 stdout
-▪▪▪▪ windows:retryCount (4/4, 100ms)
+▪▪▪▪ windows:retryCount (100ms)
 stderr
 stderr
 stderr
 stderr
 
 Error: task_runner::run_failed
-  × Task windows:retryCount failed to run. View hash details with moon query
-  │ hash hash1234.
+  × Task windows:retryCount failed to run.
   ╰─▶ Process cmd.exe failed with a 1 exit code.
- - - diff --git a/crates/cli/tests/snapshots/run_test__dependencies__can_run_deps_in_serial.snap b/crates/cli/tests/snapshots/run_test__dependencies__can_run_deps_in_serial.snap index 05b6cd78202..8edbf0ef2a7 100644 --- a/crates/cli/tests/snapshots/run_test__dependencies__can_run_deps_in_serial.snap +++ b/crates/cli/tests/snapshots/run_test__dependencies__can_run_deps_in_serial.snap @@ -1,7 +1,6 @@ --- source: crates/cli/tests/run_test.rs -assertion_line: 224 -expression: get_assert_output(&assert) +expression: assert.output() --- ▪▪▪▪ npm install ▪▪▪▪ depsB:standard (no op) @@ -10,7 +9,4 @@ expression: get_assert_output(&assert) ▪▪▪▪ dependsOn:serialDeps (no op) Tasks: 4 completed - Time: 100ms - - - + Time: 100ms ❯❯❯❯ to the moon diff --git a/crates/cli/tests/snapshots/run_test__dependencies__changes_primary_hash_if_deps_hash_changes.snap b/crates/cli/tests/snapshots/run_test__dependencies__changes_primary_hash_if_deps_hash_changes.snap index 4f8e40dca5d..fd1c9e03555 100644 --- a/crates/cli/tests/snapshots/run_test__dependencies__changes_primary_hash_if_deps_hash_changes.snap +++ b/crates/cli/tests/snapshots/run_test__dependencies__changes_primary_hash_if_deps_hash_changes.snap @@ -3,8 +3,8 @@ source: crates/cli/tests/run_test.rs expression: "[h1, h2, extract_hash_from_run(sandbox.path(), \"outputs:asDep\"),\n extract_hash_from_run(sandbox.path(), \"outputs:withDeps\")]" --- [ - "5bbce721f289c545e29a23c475b4391510bca100f60829ec00566899632f3b68", - "9240d7fb59a8aa40f4eccd09295838d4f5d3bb5fe69a3b0c926cdb02f0280650", - "84bd6004ef5c203fc1041c6f95224eae9bd350f9254e425625d618d81a27fd94", - "63773b7f9e87ddc842f39000b18bed6387e4bb576fdc9d4b567efe412a0e8fca", + "9bf5c3c4d2f77edd4efa699d28ab7e7fe7347d0d6f10c70991e8961ed1bbd636", + "fde0f3c2f67dc442479c5030c717477fb0c61e5dcf0042b8f7addf1b41eb6531", + "68656310c668ee2e6c01462174fda0f8b6021fea88cd2f9d8caa3fafe795760d", + "3650eceb7df79b209d36e91033cb7c2c984dce31b1d129de0eeefab0b8b6baaf", ] diff --git 
a/crates/cli/tests/snapshots/run_test__dependencies__generates_unique_hashes_for_each_target.snap b/crates/cli/tests/snapshots/run_test__dependencies__generates_unique_hashes_for_each_target.snap index c8960911088..dbc6cf484f0 100644 --- a/crates/cli/tests/snapshots/run_test__dependencies__generates_unique_hashes_for_each_target.snap +++ b/crates/cli/tests/snapshots/run_test__dependencies__generates_unique_hashes_for_each_target.snap @@ -3,6 +3,6 @@ source: crates/cli/tests/run_test.rs expression: "[extract_hash_from_run(sandbox.path(), \"outputs:asDep\"),\n extract_hash_from_run(sandbox.path(), \"outputs:withDeps\")]" --- [ - "5bbce721f289c545e29a23c475b4391510bca100f60829ec00566899632f3b68", - "9240d7fb59a8aa40f4eccd09295838d4f5d3bb5fe69a3b0c926cdb02f0280650", + "9bf5c3c4d2f77edd4efa699d28ab7e7fe7347d0d6f10c70991e8961ed1bbd636", + "fde0f3c2f67dc442479c5030c717477fb0c61e5dcf0042b8f7addf1b41eb6531", ] diff --git a/crates/cli/tests/snapshots/run_test__dependencies__runs_the_graph_in_order.snap b/crates/cli/tests/snapshots/run_test__dependencies__runs_the_graph_in_order.snap index 46588710807..fd260ef9f08 100644 --- a/crates/cli/tests/snapshots/run_test__dependencies__runs_the_graph_in_order.snap +++ b/crates/cli/tests/snapshots/run_test__dependencies__runs_the_graph_in_order.snap @@ -1,7 +1,6 @@ --- source: crates/cli/tests/run_test.rs -assertion_line: 200 -expression: get_assert_output(&assert) +expression: assert.output() --- ▪▪▪▪ npm install ▪▪▪▪ depsC:dependencyOrder (no op) @@ -9,7 +8,4 @@ expression: get_assert_output(&assert) ▪▪▪▪ depsA:dependencyOrder (no op) Tasks: 3 completed - Time: 100ms - - - + Time: 100ms ❯❯❯❯ to the moon diff --git a/crates/cli/tests/snapshots/run_test__dependencies__runs_the_graph_in_order_not_from_head.snap b/crates/cli/tests/snapshots/run_test__dependencies__runs_the_graph_in_order_not_from_head.snap index 521c49fb791..170f21c29aa 100644 --- 
a/crates/cli/tests/snapshots/run_test__dependencies__runs_the_graph_in_order_not_from_head.snap +++ b/crates/cli/tests/snapshots/run_test__dependencies__runs_the_graph_in_order_not_from_head.snap @@ -1,14 +1,10 @@ --- source: crates/cli/tests/run_test.rs -assertion_line: 212 -expression: get_assert_output(&assert) +expression: assert.output() --- ▪▪▪▪ npm install ▪▪▪▪ depsC:dependencyOrder (no op) ▪▪▪▪ depsB:dependencyOrder (no op) Tasks: 2 completed - Time: 100ms - - - + Time: 100ms ❯❯❯❯ to the moon diff --git a/crates/cli/tests/snapshots/run_test__noop__runs_noop.snap b/crates/cli/tests/snapshots/run_test__noop__runs_noop.snap index 07d7ff28de8..2d2c84e5a4c 100644 --- a/crates/cli/tests/snapshots/run_test__noop__runs_noop.snap +++ b/crates/cli/tests/snapshots/run_test__noop__runs_noop.snap @@ -1,13 +1,9 @@ --- source: crates/cli/tests/run_test.rs -assertion_line: 922 -expression: get_assert_output(&assert) +expression: assert.output() --- ▪▪▪▪ npm install ▪▪▪▪ noop:noop (no op) Tasks: 1 completed - Time: 100ms - - - + Time: 100ms ❯❯❯❯ to the moon diff --git a/crates/cli/tests/snapshots/run_test__output_styles__buffer.snap b/crates/cli/tests/snapshots/run_test__output_styles__buffer.snap index 36fa947b364..217887c6d37 100644 --- a/crates/cli/tests/snapshots/run_test__output_styles__buffer.snap +++ b/crates/cli/tests/snapshots/run_test__output_styles__buffer.snap @@ -4,15 +4,11 @@ expression: assert.output() --- ▪▪▪▪ npm install ▪▪▪▪ outputStyles:buffer -▪▪▪▪ outputStyles:buffer (100ms) stdout - +▪▪▪▪ outputStyles:buffer (100ms) ▪▪▪▪ outputStyles:bufferPrimary (no op) Tasks: 2 completed Time: 100ms stderr - - - diff --git a/crates/cli/tests/snapshots/run_test__output_styles__buffer_on_failure_when_failure.snap b/crates/cli/tests/snapshots/run_test__output_styles__buffer_on_failure_when_failure.snap index 3d4d222f920..e907e792215 100644 --- a/crates/cli/tests/snapshots/run_test__output_styles__buffer_on_failure_when_failure.snap +++ 
b/crates/cli/tests/snapshots/run_test__output_styles__buffer_on_failure_when_failure.snap @@ -4,16 +4,10 @@ expression: assert.output() --- ▪▪▪▪ npm install ▪▪▪▪ outputStyles:bufferFailureFail -▪▪▪▪ outputStyles:bufferFailureFail (100ms) stdout - +▪▪▪▪ outputStyles:bufferFailureFail (100ms) stderr - Error: task_runner::run_failed - × Task outputStyles:bufferFailureFail failed to run. View hash details with - │ moon query hash hash1234. + × Task outputStyles:bufferFailureFail failed to run. ╰─▶ Process node failed with a 1 exit code. - - - diff --git a/crates/cli/tests/snapshots/run_test__output_styles__hash.snap b/crates/cli/tests/snapshots/run_test__output_styles__hash.snap index 423d8ffc96a..451b162ab01 100644 --- a/crates/cli/tests/snapshots/run_test__output_styles__hash.snap +++ b/crates/cli/tests/snapshots/run_test__output_styles__hash.snap @@ -10,4 +10,4 @@ expression: assert.output() Tasks: 2 completed Time: 100ms -21f0e3dd03e9600145bbdbe635cefc1c685f1b20e5be118c9f19838cdb9c3782 +66625948ca3703a7a080dbd1c7526253dca92b878e2c1fbaabd5ed17b293b2df diff --git a/crates/cli/tests/snapshots/run_test__outputs__hydration__reuses_cache_from_previous_run-2.snap b/crates/cli/tests/snapshots/run_test__outputs__hydration__reuses_cache_from_previous_run-2.snap index ed20df13f28..d2d14a794ad 100644 --- a/crates/cli/tests/snapshots/run_test__outputs__hydration__reuses_cache_from_previous_run-2.snap +++ b/crates/cli/tests/snapshots/run_test__outputs__hydration__reuses_cache_from_previous_run-2.snap @@ -3,10 +3,7 @@ source: crates/cli/tests/run_test.rs expression: assert2.output() --- ▪▪▪▪ npm install -▪▪▪▪ outputs:generateFileAndFolder (cached from previous run) +▪▪▪▪ outputs:generateFileAndFolder (cached, 100ms) Tasks: 1 completed (1 cached) Time: 100ms ❯❯❯❯ to the moon - - - diff --git a/crates/core/action-pipeline/Cargo.toml b/crates/core/action-pipeline/Cargo.toml index 483802c7798..7d7c77795ea 100644 --- a/crates/core/action-pipeline/Cargo.toml +++ 
b/crates/core/action-pipeline/Cargo.toml @@ -21,8 +21,8 @@ moon_platform = { path = "../platform" } moon_process = { path = "../../../nextgen/process" } moon_project = { path = "../../../nextgen/project" } moon_project_graph = { path = "../../../nextgen/project-graph" } -moon_runner = { path = "../runner" } moon_target = { path = "../../../nextgen/target" } +moon_task_runner = { path = "../../../nextgen/task-runner" } moon_tool = { path = "../tool" } moon_utils = { path = "../utils" } moon_workspace = { path = "../../../nextgen/workspace" } @@ -42,3 +42,6 @@ tokio-util = { workspace = true } [dev-dependencies] moon = { path = "../moon" } moon_test_utils = { path = "../test-utils" } + +[lints] +workspace = true diff --git a/crates/core/action-pipeline/src/actions/run_task.rs b/crates/core/action-pipeline/src/actions/run_task.rs index b097f4d0c21..714ad60f3cf 100644 --- a/crates/core/action-pipeline/src/actions/run_task.rs +++ b/crates/core/action-pipeline/src/actions/run_task.rs @@ -1,12 +1,12 @@ use moon_action::{Action, ActionStatus}; -use moon_action_context::{ActionContext, TargetState}; -use moon_console::{Checkpoint, Console}; +use moon_action_context::ActionContext; +use moon_console::Console; use moon_emitter::Emitter; -use moon_logger::{debug, warn}; +use moon_logger::warn; use moon_platform::Runtime; use moon_project::Project; -use moon_runner::Runner; use moon_target::Target; +use moon_task_runner::TaskRunner; use moon_workspace::Workspace; use starbase_styles::color; use std::env; @@ -18,126 +18,32 @@ const LOG_TARGET: &str = "moon:action:run-task"; pub async fn run_task( action: &mut Action, context: Arc, - emitter: Arc, + _emitter: Arc, workspace: Arc, console: Arc, project: &Project, target: &Target, - runtime: &Runtime, + _runtime: &Runtime, ) -> miette::Result { env::set_var("MOON_RUNNING_ACTION", "run-task"); let task = project.get_task(&target.task_id)?; - let mut runner = Runner::new(&emitter, &workspace, project, task, console)?; - debug!( - 
target: LOG_TARGET, - "Running task {}", - color::label(&task.target) - ); - - runner.node = Arc::clone(&action.node); action.allow_failure = task.options.allow_failure; - // If a dependency failed, we should skip this target - if !task.deps.is_empty() { - for dep in &task.deps { - if let Some(dep_state) = context.target_states.get(&dep.target) { - if !dep_state.get().is_complete() { - context.set_target_state(target, TargetState::Skipped); - - debug!( - target: LOG_TARGET, - "Dependency {} of {} has failed or has been skipped, skipping this target", - color::label(&dep.target), - color::label(&task.target) - ); - - runner.print_checkpoint(Checkpoint::RunFailed, ["skipped".to_owned()])?; - - return Ok(ActionStatus::Skipped); - } - } - } - } + let result = TaskRunner::new(&workspace, project, task, console)? + .run(&context, &action.node) + .await?; - // If the VCS root does not exist (like in a Docker container), - // we should avoid failing and simply disable caching. - let is_cache_enabled = task.options.cache && workspace.vcs.is_enabled(); + action.set_operations(result.operations, &task.command); - // Abort early if this build has already been cached/hashed - if is_cache_enabled { - if let Some(cache_location) = runner.is_cached(&context, runtime).await? { - return runner.hydrate(cache_location).await; - } - } else { - debug!( + if action.has_failed() && action.allow_failure { + warn!( target: LOG_TARGET, - "Cache disabled for target {}", + "Task {} has failed, but is marked to allow failures, continuing pipeline", color::label(&task.target), ); - - // We must give this task a fake hash for it to be considered complete - // for other tasks! This case triggers for noop or cache disabled tasks. 
- context.set_target_state(target, TargetState::Passthrough); } - let attempts_result = { - if let Some(mutex_name) = &task.options.mutex { - debug!( - target: LOG_TARGET, - "Waiting to acquire {} mutex lock for {} before running", - color::id(mutex_name), - color::label(&task.target), - ); - - let mutex = context.get_or_create_mutex(mutex_name); - let _guard = mutex.lock().await; - - debug!( - target: LOG_TARGET, - "Acquired {} mutex lock for {}", - color::id(mutex_name), - color::label(&task.target), - ); - - // This is required within this block so that the guard - // above isn't immediately dropped! - runner.create_and_run_command(&context, runtime).await - } else { - runner.create_and_run_command(&context, runtime).await - } - }; - - match attempts_result { - Ok(attempts) => { - let status = if action.set_attempts(attempts, &task.command) { - ActionStatus::Passed - } else { - context.set_target_state(target, TargetState::Failed); - - if action.allow_failure { - warn!( - target: LOG_TARGET, - "Target {} has failed, but is marked to allow failures, continuing pipeline", - color::label(&task.target), - ); - } - - ActionStatus::Failed - }; - - // If successful, cache the task outputs - if is_cache_enabled { - runner.archive_outputs().await?; - } - - Ok(status) - } - Err(err) => { - context.set_target_state(target, TargetState::Failed); - - Err(err) - } - } + Ok(action.status) } diff --git a/crates/core/action-pipeline/src/lib.rs b/crates/core/action-pipeline/src/lib.rs index 9e4caf69773..805c3adae38 100644 --- a/crates/core/action-pipeline/src/lib.rs +++ b/crates/core/action-pipeline/src/lib.rs @@ -9,21 +9,3 @@ mod subscribers; pub use errors::*; pub use moon_action_context::*; pub use pipeline::*; - -pub(crate) fn label_to_the_moon() -> String { - use starbase_styles::color::paint; - - [ - paint(55, "❯"), - paint(56, "❯❯"), - paint(57, "❯ t"), - paint(63, "o t"), - paint(69, "he "), - paint(75, "mo"), - paint(81, "on"), - ] - .iter() - .map(|i| i.to_string()) - 
.collect::>() - .join("") -} diff --git a/crates/core/action-pipeline/src/pipeline.rs b/crates/core/action-pipeline/src/pipeline.rs index c527b0eda41..463b8cd41f4 100644 --- a/crates/core/action-pipeline/src/pipeline.rs +++ b/crates/core/action-pipeline/src/pipeline.rs @@ -4,17 +4,16 @@ use crate::processor::process_action; use crate::run_report::RunReport; use crate::subscribers::local_cache::LocalCacheSubscriber; use crate::subscribers::moonbase::MoonbaseSubscriber; -use moon_action::{Action, ActionNode, ActionStatus}; +use moon_action::Action; use moon_action_context::ActionContext; use moon_action_graph::ActionGraph; -use moon_console::{Checkpoint, Console}; +use moon_console::{Console, PipelineReportItem}; use moon_emitter::{Emitter, Event}; use moon_logger::{debug, error, trace, warn}; use moon_notifier::WebhooksSubscriber; use moon_project_graph::ProjectGraph; -use moon_utils::{is_ci, is_test_env, time}; +use moon_utils::{is_ci, is_test_env}; use moon_workspace::Workspace; -use starbase_styles::color; use std::mem; use std::sync::Arc; use std::time::{Duration, Instant}; @@ -27,6 +26,8 @@ const LOG_TARGET: &str = "moon:action-pipeline"; pub type ActionResults = Vec; pub struct Pipeline { + aborted: bool, + bail: bool, concurrency: Option, @@ -37,17 +38,24 @@ pub struct Pipeline { report_name: Option, + results: Vec, + + summarize: bool, + workspace: Arc, } impl Pipeline { pub fn new(workspace: Workspace, project_graph: ProjectGraph) -> Self { Pipeline { + aborted: false, bail: false, concurrency: None, duration: None, project_graph: Arc::new(project_graph), report_name: None, + results: vec![], + summarize: false, workspace: Arc::new(workspace), } } @@ -62,6 +70,11 @@ impl Pipeline { self } + pub fn summarize(&mut self, value: bool) -> &mut Self { + self.summarize = value; + self + } + pub fn generate_report(&mut self, name: &str) -> &mut Self { self.report_name = Some(name.to_owned()); self @@ -73,6 +86,47 @@ impl Pipeline { console: Arc, context: Option, ) 
-> miette::Result { + let result = self + .run_internal(action_graph, console.clone(), context) + .await; + + let actions = mem::take(&mut self.results); + + let item = PipelineReportItem { + duration: self.duration, + summarize: self.summarize, + }; + + match result { + Ok(_) => { + console + .reporter + .on_pipeline_completed(&actions, &item, None)?; + + Ok(actions) + } + Err(error) => { + if self.aborted { + console + .reporter + .on_pipeline_aborted(&actions, &item, Some(&error))?; + } else { + console + .reporter + .on_pipeline_completed(&actions, &item, Some(&error))?; + } + + Err(error) + } + } + } + + pub async fn run_internal( + &mut self, + action_graph: ActionGraph, + console: Arc, + context: Option, + ) -> miette::Result<()> { let start = Instant::now(); let context = Arc::new(context.unwrap_or_default()); let emitter = Arc::new(create_emitter(Arc::clone(&self.workspace)).await); @@ -94,6 +148,10 @@ impl Pipeline { }) .await?; + console + .reporter + .on_pipeline_started(&action_graph.get_nodes())?; + // Launch a separate thread to listen for ctrl+c let cancel_token = CancellationToken::new(); let ctrl_c_token = cancel_token.clone(); @@ -109,7 +167,6 @@ impl Pipeline { self.concurrency.unwrap_or_else(num_cpus::get), )); - let mut results: ActionResults = vec![]; let mut action_handles = vec![]; let mut persistent_nodes = vec![]; let mut action_graph_iter = action_graph.try_iter()?; @@ -120,7 +177,7 @@ impl Pipeline { let Some(node_index) = action_graph_iter.next() else { // Nothing new to run since they're waiting on currently // running actions, so exhaust the current list - self.run_handles(mem::take(&mut action_handles), &mut results, &emitter) + self.run_handles(mem::take(&mut action_handles), &emitter) .await?; continue; @@ -196,7 +253,7 @@ impl Pipeline { // Run this in isolation by exhausting the current list of handles if node.is_interactive() || semaphore.available_permits() == 0 { - self.run_handles(mem::take(&mut action_handles), &mut 
results, &emitter) + self.run_handles(mem::take(&mut action_handles), &emitter) .await?; } } @@ -257,17 +314,16 @@ impl Pipeline { } // Run any remaining actions - self.run_handles(action_handles, &mut results, &emitter) - .await?; + self.run_handles(action_handles, &emitter).await?; let duration = start.elapsed(); - let estimate = Estimator::calculate(&results, duration); + let estimate = Estimator::calculate(&self.results, duration); let context = Arc::into_inner(context).unwrap(); let mut passed_count = 0; let mut cached_count = 0; let mut failed_count = 0; - for result in &results { + for result in &self.results { if result.has_failed() { failed_count += 1; } else if result.was_cached() { @@ -295,15 +351,15 @@ impl Pipeline { .await?; self.duration = Some(duration); - self.create_run_report(&results, &context, estimate).await?; + self.create_run_report(&self.results, &context, estimate) + .await?; - Ok(results) + Ok(()) } async fn run_handles( - &self, + &mut self, handles: Vec>>, - results: &mut ActionResults, emitter: &Emitter, ) -> miette::Result<()> { let mut abort_error: Option = None; @@ -322,7 +378,7 @@ impl Pipeline { if self.bail && result.should_bail() || result.should_abort() { abort_error = Some(result.get_error()); } else { - results.push(result); + self.results.push(result); } } Ok(Err(error)) => { @@ -336,6 +392,8 @@ impl Pipeline { } if let Some(abort_error) = abort_error { + self.aborted = true; + if show_abort_log { error!("Encountered a critical error, aborting the action pipeline"); } @@ -352,200 +410,6 @@ impl Pipeline { Ok(()) } - pub fn render_summary(&self, results: &ActionResults, console: &Console) -> miette::Result<()> { - console.out.write_newline()?; - - let mut count = 0; - - for result in results { - if !result.has_failed() { - continue; - } - - console.out.print_checkpoint( - Checkpoint::RunFailed, - match &*result.node { - ActionNode::RunTask(inner) => inner.target.as_str(), - _ => &result.label, - }, - )?; - - if let 
Some(attempts) = &result.attempts { - if let Some(attempt) = attempts.iter().find(|a| a.has_failed()) { - let mut has_stdout = false; - - if let Some(stdout) = &attempt.stdout { - if !stdout.is_empty() { - has_stdout = true; - console.out.write_line(stdout)?; - } - } - - if let Some(stderr) = &attempt.stderr { - if has_stdout { - console.out.write_newline()?; - } - - if !stderr.is_empty() { - console.out.write_line(stderr)?; - } - } - } - } - - console.out.write_newline()?; - count += 1; - } - - if count == 0 { - console.out.write_line("No failed actions to summarize.")?; - } - - console.out.write_newline()?; - - Ok(()) - } - - pub fn render_results( - &self, - results: &ActionResults, - console: &Console, - ) -> miette::Result { - console.out.write_newline()?; - - let mut failed = false; - - for result in results { - let status = match result.status { - ActionStatus::Passed | ActionStatus::Cached | ActionStatus::CachedFromRemote => { - color::success("pass") - } - ActionStatus::Failed | ActionStatus::FailedAndAbort => { - if !result.allow_failure { - failed = true; - } - - color::failure("fail") - } - ActionStatus::Invalid => color::invalid("warn"), - ActionStatus::Skipped => color::muted_light("skip"), - _ => color::muted_light("oops"), - }; - - let mut meta: Vec = vec![]; - - if matches!( - result.status, - ActionStatus::Cached | ActionStatus::CachedFromRemote - ) { - meta.push(String::from("cached")); - } else if matches!(result.status, ActionStatus::Skipped) { - meta.push(String::from("skipped")); - } else if let Some(duration) = result.duration { - meta.push(time::elapsed(duration)); - } - - console.out.write_line(format!( - "{} {} {}", - status, - result.label, - console.out.format_comments(meta), - ))?; - } - - console.out.write_newline()?; - - Ok(failed) - } - - pub fn render_stats( - &self, - results: &ActionResults, - console: &Console, - compact: bool, - ) -> miette::Result<()> { - if console.out.is_quiet() { - return Ok(()); - } - - let mut 
cached_count = 0; - let mut pass_count = 0; - let mut fail_count = 0; - let mut invalid_count = 0; - let mut skipped_count = 0; - - for result in results { - if compact && !matches!(*result.node, ActionNode::RunTask { .. }) { - continue; - } - - match result.status { - ActionStatus::Cached | ActionStatus::CachedFromRemote => { - cached_count += 1; - pass_count += 1; - } - ActionStatus::Passed => { - pass_count += 1; - } - ActionStatus::Failed | ActionStatus::FailedAndAbort => { - fail_count += 1; - } - ActionStatus::Invalid => { - invalid_count += 1; - } - ActionStatus::Skipped => { - skipped_count += 1; - } - _ => {} - } - } - - let mut counts_message = vec![]; - - if pass_count > 0 { - if cached_count > 0 { - counts_message.push(color::success(format!( - "{pass_count} completed ({cached_count} cached)" - ))); - } else { - counts_message.push(color::success(format!("{pass_count} completed"))); - } - } - - if fail_count > 0 { - counts_message.push(color::failure(format!("{fail_count} failed"))); - } - - if invalid_count > 0 { - counts_message.push(color::invalid(format!("{invalid_count} invalid"))); - } - - if skipped_count > 0 { - counts_message.push(color::muted_light(format!("{skipped_count} skipped"))); - } - - console.out.write_newline()?; - - let counts_message = counts_message.join(&color::muted(", ")); - let mut elapsed_time = time::elapsed(self.duration.unwrap()); - - if pass_count == cached_count && fail_count == 0 { - elapsed_time = format!("{} {}", elapsed_time, crate::label_to_the_moon()); - } - - if compact { - console.out.print_entry("Tasks", &counts_message)?; - console.out.print_entry(" Time", &elapsed_time)?; - } else { - console.out.print_entry("Actions", &counts_message)?; - console.out.print_entry(" Time", &elapsed_time)?; - } - - console.out.write_newline()?; - - Ok(()) - } - async fn create_run_report( &self, actions: &ActionResults, diff --git a/crates/core/action-pipeline/src/processor.rs b/crates/core/action-pipeline/src/processor.rs index 
32c4aa66e95..7e183325a1a 100644 --- a/crates/core/action-pipeline/src/processor.rs +++ b/crates/core/action-pipeline/src/processor.rs @@ -42,6 +42,8 @@ pub async fn process_action( }) .await?; + console.reporter.on_action_started(&action)?; + let result = match &*node { ActionNode::None => Ok(ActionStatus::Skipped), @@ -172,6 +174,7 @@ pub async fn process_action( emitter .emit(Event::TargetRunning { + action: &action, target: &inner.target, }) .await?; @@ -181,7 +184,7 @@ pub async fn process_action( context, Arc::clone(&emitter), workspace, - console, + Arc::clone(&console), &project, &inner.target, &inner.runtime, @@ -190,6 +193,7 @@ pub async fn process_action( emitter .emit(Event::TargetRan { + action: &action, error: extract_error(&run_result), target: &inner.target, }) @@ -204,8 +208,16 @@ pub async fn process_action( match result { Ok(status) => { action.finish(status); + + console.reporter.on_action_completed(&action, None)?; } Err(error) => { + action.finish(ActionStatus::Failed); + + console + .reporter + .on_action_completed(&action, Some(&error))?; + action.fail(error); } }; diff --git a/crates/core/action-pipeline/src/subscribers/local_cache.rs b/crates/core/action-pipeline/src/subscribers/local_cache.rs index dc9e6bff57b..37ffdd726b7 100644 --- a/crates/core/action-pipeline/src/subscribers/local_cache.rs +++ b/crates/core/action-pipeline/src/subscribers/local_cache.rs @@ -1,16 +1,7 @@ -use moon_cache_item::get_cache_mode; use moon_emitter::{Event, EventFlow, Subscriber}; -use moon_runner::{archive_outputs, hydrate_outputs}; -use moon_utils::{async_trait, path}; +use moon_utils::async_trait; use moon_workspace::Workspace; -/// The local cache subscriber is in charge of managing archives -/// (task output's archived as tarballs), by reading and writing them -/// to the `.moon/cache/{outputs,hashes}` directories. -/// -/// This is the last subscriber amongst all subscribers, as local -/// cache is the last line of defense. 
However, other subscribers -/// will piggyback off of it, like remote cache. pub struct LocalCacheSubscriber {} impl LocalCacheSubscriber { @@ -26,66 +17,13 @@ impl Subscriber for LocalCacheSubscriber { event: &Event<'e>, workspace: &Workspace, ) -> miette::Result { - match event { - // Check to see if a build with the provided hash has been cached locally. - // We only check for the archive, as the manifest is purely for local debugging! - Event::TargetOutputCacheCheck { hash, .. } => { - if get_cache_mode().is_readable() - && workspace.cache_engine.hash.get_archive_path(hash).exists() - { - return Ok(EventFlow::Return("local-cache".into())); - } + // After the run has finished, clean any stale archives. + if let Event::PipelineFinished { .. } = event { + if workspace.config.runner.auto_clean_cache { + workspace + .cache_engine + .clean_stale_cache(&workspace.config.runner.cache_lifetime, false)?; } - - // Archive the task's outputs into the local cache. - Event::TargetOutputArchiving { - hash, - project, - task, - .. - } => { - let state_dir = workspace.cache_engine.state.get_target_dir(&task.target); - let archive_path = workspace.cache_engine.hash.get_archive_path(hash); - let output_paths = task - .outputs - .iter() - .filter_map(|o| o.to_workspace_relative(&project.source)) - .collect::>(); - - if archive_outputs(&state_dir, &archive_path, &workspace.root, &output_paths)? { - return Ok(EventFlow::Return(path::to_string(archive_path)?)); - } - } - - // Hydrate the cached archive into the task's outputs. - Event::TargetOutputHydrating { - hash, - project, - task, - .. - } => { - let state_dir = workspace.cache_engine.state.get_target_dir(&task.target); - let archive_path = workspace.cache_engine.hash.get_archive_path(hash); - let output_paths = task - .outputs - .iter() - .filter_map(|o| o.to_workspace_relative(&project.source)) - .collect::>(); - - if hydrate_outputs(&state_dir, &archive_path, &workspace.root, &output_paths)? 
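After this change, the local cache subscriber's only job is cleaning stale archives once the pipeline finishes. A std-only sketch of that idea, removing cache files whose modified time exceeds a lifetime cutoff (the directory name and `>=` comparison are assumptions; the real `clean_stale_cache` lives in moon's cache engine):

```rust
use std::fs;
use std::path::Path;
use std::time::{Duration, SystemTime};

// Remove files in `dir` whose age is at least `lifetime`; return the count.
fn clean_stale_cache(dir: &Path, lifetime: Duration) -> std::io::Result<usize> {
    let mut removed = 0;

    if dir.exists() {
        for entry in fs::read_dir(dir)? {
            let entry = entry?;
            let modified = entry.metadata()?.modified()?;
            let age = SystemTime::now()
                .duration_since(modified)
                .unwrap_or_default();

            if age >= lifetime {
                fs::remove_file(entry.path())?;
                removed += 1;
            }
        }
    }

    Ok(removed)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("moon-stale-sketch");
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("archive.tar.gz"), b"x")?;

    // A zero lifetime makes every file stale.
    assert!(clean_stale_cache(&dir, Duration::from_secs(0))? >= 1);
    Ok(())
}
```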
{ - return Ok(EventFlow::Return(path::to_string(archive_path)?)); - } - } - - // After the run has finished, clean any stale archives. - Event::PipelineFinished { .. } => { - if workspace.config.runner.auto_clean_cache { - workspace - .cache_engine - .clean_stale_cache(&workspace.config.runner.cache_lifetime, false)?; - } - } - _ => {} } Ok(EventFlow::Continue) diff --git a/crates/core/action-pipeline/src/subscribers/moonbase.rs b/crates/core/action-pipeline/src/subscribers/moonbase.rs index 5b9b204ad6b..6a875af46d0 100644 --- a/crates/core/action-pipeline/src/subscribers/moonbase.rs +++ b/crates/core/action-pipeline/src/subscribers/moonbase.rs @@ -1,30 +1,22 @@ use ci_env::get_environment; -use moon_action::{ActionNode, ActionStatus, RunTaskNode}; -use moon_api::{ - endpoints::ArtifactWriteInput, - graphql::{ - self, add_job_to_run, create_run, update_job, update_run, AddJobToRun, CreateRun, - GraphQLQuery, UpdateJob, UpdateRun, - }, +use moon_action::{ActionStatus, OperationType}; +use moon_api::graphql::{ + self, add_job_to_run, create_run, update_job, update_run, AddJobToRun, CreateRun, GraphQLQuery, + UpdateJob, UpdateRun, }; -use moon_cache_item::get_cache_mode; use moon_common::is_ci; use moon_emitter::{Event, EventFlow, Subscriber}; -use moon_logger::{debug, error, map_list, trace, warn}; -use moon_platform::Runtime; +use moon_logger::{debug, error, map_list, warn}; use moon_utils::async_trait; use moon_workspace::Workspace; use rustc_hash::FxHashMap; use starbase_styles::color; -use starbase_utils::fs; -use std::{env, sync::Arc}; +use std::env; use tokio::task::JoinHandle; const LOG_TARGET: &str = "moonbase"; pub struct MoonbaseSubscriber { - download_urls: FxHashMap>, - // Mapping of actions to job IDs job_ids: FxHashMap, @@ -38,7 +30,6 @@ pub struct MoonbaseSubscriber { impl MoonbaseSubscriber { pub fn new() -> Self { MoonbaseSubscriber { - download_urls: FxHashMap::default(), job_ids: FxHashMap::default(), run_id: None, requests: vec![], @@ -329,7 
+320,13 @@ impl Subscriber for MoonbaseSubscriber { status: Some(map_status(&action.status)), }; - if let Some(attempts) = &action.attempts { + let attempts = action + .operations + .iter() + .filter(|op| matches!(op.type_of, OperationType::TaskExecution)) + .collect::>(); + + if !attempts.is_empty() { input.attempts = Some( attempts .iter() @@ -379,143 +376,14 @@ impl Subscriber for MoonbaseSubscriber { } } - _ => {} - } - } - - // REMOTE CACHING - - if moonbase.remote_caching_enabled { - // We don't want errors to bubble up and crash the program, - // so instead, we log the error (as a warning) to the console! - fn log_failure(error: miette::Report) { - warn!( - target: LOG_TARGET, - "Remote caching failure: {}", - error.to_string() - ); - } - - match event { - // Check if archive exists in moonbase (the remote) by querying the artifacts - // endpoint. This only checks that the database record exists! - Event::TargetOutputCacheCheck { hash, .. } => { - if get_cache_mode().is_readable() { - match moonbase.read_artifact(hash).await { - Ok(Some((artifact, presigned_url))) => { - self.download_urls.insert(artifact.hash, presigned_url); - - return Ok(EventFlow::Return("remote-cache".into())); - } - Ok(None) => { - // Not remote cached - } - Err(error) => { - log_failure(error); - - // Fallthrough and check local cache - } - } - } - } - - // The local cache subscriber uses the `TargetOutputArchiving` event to create - // the tarball. This runs *after* it's been created so that we can upload it. - Event::TargetOutputArchived { - archive_path, - hash, - target, - .. 
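The hunk above replaces the dedicated `attempts` list with a filter over the action's operation log. A sketch of that filtering, with stand-in types for moon's `Operation` and `OperationType`:

```rust
enum OperationType {
    TaskExecution,
    OutputHydration,
    HashGeneration,
}

struct Operation {
    type_of: OperationType,
}

// Derive "attempts" by keeping only task-execution operations.
fn task_attempts(operations: &[Operation]) -> Vec<&Operation> {
    operations
        .iter()
        .filter(|op| matches!(op.type_of, OperationType::TaskExecution))
        .collect()
}

fn main() {
    let ops = vec![
        Operation { type_of: OperationType::HashGeneration },
        Operation { type_of: OperationType::TaskExecution },
        Operation { type_of: OperationType::TaskExecution },
        Operation { type_of: OperationType::OutputHydration },
    ];
    // Two of the four operations are actual task executions.
    assert_eq!(task_attempts(&ops).len(), 2);
}
```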
- } => { - if get_cache_mode().is_writable() && archive_path.exists() { - let size = match fs::metadata(archive_path) { - Ok(meta) => meta.len(), - Err(_) => 0, - }; - - // Create the database record - match moonbase - .write_artifact( - hash, - ArtifactWriteInput { - target: target.id.to_owned(), - size: size as usize, - }, - ) + Event::TargetRunning { action, target } => { + // Temporary, pass this data to the moonbase instance + if let Some(job_id) = self.job_ids.get(&action.label) { + moonbase + .job_ids + .write() .await - { - // Upload to cloud storage - Ok((_, presigned_url)) => { - trace!( - target: LOG_TARGET, - "Uploading artifact {} ({} bytes) to remote cache", - color::file(hash), - if size == 0 { - "unknown".to_owned() - } else { - size.to_string() - } - ); - - let hash = (*hash).to_owned(); - let archive_path = archive_path.to_owned(); - - // Create a fake action label so that we can check the CI cache - let action_label = ActionNode::run_task(RunTaskNode::new( - (*target).to_owned(), - Runtime::system(), - )) - .label(); - let job_id = self.job_ids.get(&action_label).cloned(); - - // Run this in the background so we don't slow down the pipeline - // while waiting for very large archives to upload - let moonbase = Arc::clone(moonbase); - - self.requests.push(tokio::spawn(async move { - if let Err(error) = moonbase - .upload_artifact(hash, archive_path, presigned_url, job_id) - .await - { - log_failure(error); - } else { - trace!( - target: LOG_TARGET, - "Artifact upload successful!", - ); - } - })); - } - Err(error) => { - log_failure(error); - } - } - } - } - - // Attempt to download the artifact from the remote cache to `.moon/outputs/`. - // This runs *before* the local cache. So if the download is successful, abort - // the event flow, otherwise continue and let local cache attempt to hydrate. - Event::TargetOutputHydrating { hash, .. 
} => { - if get_cache_mode().is_readable() { - if let Some(download_url) = self.download_urls.get(*hash) { - let archive_file = workspace.cache_engine.hash.get_archive_path(hash); - - trace!( - target: LOG_TARGET, - "Downloading artifact {} from remote cache", - color::file(hash), - ); - - if let Err(error) = moonbase - .download_artifact(hash, &archive_file, download_url) - .await - { - log_failure(error); - } - - // Fallthrough to local cache to handle the actual hydration - } + .insert(target.to_string(), *job_id); } } @@ -528,6 +396,8 @@ impl Subscriber for MoonbaseSubscriber { for future in self.requests.drain(0..) { let _ = future.await; } + + moonbase.wait_for_requests().await; } Ok(EventFlow::Continue) diff --git a/crates/core/actions/Cargo.toml b/crates/core/actions/Cargo.toml index c6e28d5125e..a6e9cb62af1 100644 --- a/crates/core/actions/Cargo.toml +++ b/crates/core/actions/Cargo.toml @@ -15,3 +15,6 @@ moon_vcs_hooks = { path = "../../../nextgen/vcs-hooks" } miette = { workspace = true } proto_core = { workspace = true } tracing = { workspace = true } + +[lints] +workspace = true diff --git a/crates/core/emitter/Cargo.toml b/crates/core/emitter/Cargo.toml index e90a389c41c..90cd27ad0db 100644 --- a/crates/core/emitter/Cargo.toml +++ b/crates/core/emitter/Cargo.toml @@ -20,3 +20,6 @@ tokio = { workspace = true } [dev-dependencies] moon_test_utils = { path = "../test-utils" } + +[lints] +workspace = true diff --git a/crates/core/emitter/src/event.rs b/crates/core/emitter/src/event.rs index 7552a2cbfc6..81b1ed60934 100644 --- a/crates/core/emitter/src/event.rs +++ b/crates/core/emitter/src/event.rs @@ -3,9 +3,8 @@ use moon_action_context::ActionContext; use moon_platform_runtime::Runtime; use moon_project::Project; use moon_target::Target; -use moon_task::Task; use serde::Serialize; -use std::{path::PathBuf, time::Duration}; +use std::time::Duration; #[derive(Serialize)] #[serde(untagged, rename_all = "camelCase")] @@ -71,44 +70,16 @@ pub enum Event<'e> { 
// Running targets TargetRunning { + #[serde(skip)] + action: &'e Action, target: &'e Target, }, TargetRan { + #[serde(skip)] + action: &'e Action, error: Option, target: &'e Target, }, - TargetOutputArchiving { - hash: &'e str, - project: &'e Project, - target: &'e Target, - task: &'e Task, - }, - #[serde(rename_all = "camelCase")] - TargetOutputArchived { - archive_path: PathBuf, - hash: &'e str, - project: &'e Project, - target: &'e Target, - task: &'e Task, - }, - TargetOutputHydrating { - hash: &'e str, - project: &'e Project, - target: &'e Target, - task: &'e Task, - }, - #[serde(rename_all = "camelCase")] - TargetOutputHydrated { - archive_path: PathBuf, - hash: &'e str, - project: &'e Project, - target: &'e Target, - task: &'e Task, - }, - TargetOutputCacheCheck { - hash: &'e str, - target: &'e Target, - }, // Installing a tool ToolInstalling { @@ -134,11 +105,6 @@ impl<'e> Event<'e> { Event::PipelineFinished { .. } => "pipeline.finished", Event::TargetRunning { .. } => "target.running", Event::TargetRan { .. } => "target.ran", - Event::TargetOutputArchiving { .. } => "target-output.archiving", - Event::TargetOutputArchived { .. } => "target-output.archived", - Event::TargetOutputHydrating { .. } => "target-output.hydrating", - Event::TargetOutputHydrated { .. } => "target-output.hydrated", - Event::TargetOutputCacheCheck { .. } => "target-output.cache-check", Event::ToolInstalling { .. } => "tool.installing", Event::ToolInstalled { .. 
} => "tool.installed", Event::WorkspaceSyncing => "workspace.syncing", diff --git a/crates/core/lang/Cargo.toml b/crates/core/lang/Cargo.toml index 4dfa42b9a40..2f232b78c0f 100644 --- a/crates/core/lang/Cargo.toml +++ b/crates/core/lang/Cargo.toml @@ -7,3 +7,6 @@ publish = false [dependencies] miette = { workspace = true } rustc-hash = { workspace = true } + +[lints] +workspace = true diff --git a/crates/core/logger/Cargo.toml b/crates/core/logger/Cargo.toml index ae649c1ec10..d012ab35d0c 100644 --- a/crates/core/logger/Cargo.toml +++ b/crates/core/logger/Cargo.toml @@ -8,3 +8,6 @@ publish = false console = { workspace = true } log = "0.4.21" starbase_styles = { workspace = true } + +[lints] +workspace = true diff --git a/crates/core/moon/Cargo.toml b/crates/core/moon/Cargo.toml index 1ea1935c7e3..304ac8e8987 100644 --- a/crates/core/moon/Cargo.toml +++ b/crates/core/moon/Cargo.toml @@ -23,3 +23,6 @@ proto_core = { workspace = true } rustc-hash = { workspace = true } starbase_events = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/crates/core/notifier/Cargo.toml b/crates/core/notifier/Cargo.toml index 9783d4c83cc..3a2f381a10e 100644 --- a/crates/core/notifier/Cargo.toml +++ b/crates/core/notifier/Cargo.toml @@ -18,3 +18,6 @@ serde_json = { workspace = true } starbase_styles = { workspace = true } tokio = { workspace = true } uuid = { workspace = true } + +[lints] +workspace = true diff --git a/crates/core/platform/Cargo.toml b/crates/core/platform/Cargo.toml index 80437971379..46912e442eb 100644 --- a/crates/core/platform/Cargo.toml +++ b/crates/core/platform/Cargo.toml @@ -18,3 +18,6 @@ async-trait = { workspace = true } miette = { workspace = true } rustc-hash = { workspace = true } serde = { workspace = true } + +[lints] +workspace = true diff --git a/crates/core/runner/Cargo.toml b/crates/core/runner/Cargo.toml deleted file mode 100644 index a0a200baff5..00000000000 --- a/crates/core/runner/Cargo.toml +++ /dev/null 
@@ -1,41 +0,0 @@ -[package] -name = "moon_runner" -version = "0.0.1" -edition = "2021" -publish = false - -[dependencies] -moon_action = { path = "../../../nextgen/action" } -moon_action_context = { path = "../../../nextgen/action-context" } -moon_cache_item = { path = "../../../nextgen/cache-item" } -moon_common = { path = "../../../nextgen/common" } -moon_config = { path = "../../../nextgen/config" } -moon_console = { path = "../../../nextgen/console" } -moon_emitter = { path = "../emitter" } -moon_hash = { path = "../../../nextgen/hash" } -moon_logger = { path = "../logger" } -moon_platform = { path = "../platform" } -moon_platform_runtime = { path = "../../../nextgen/platform-runtime" } -moon_process = { path = "../../../nextgen/process" } -moon_project = { path = "../../../nextgen/project" } -moon_target = { path = "../../../nextgen/target" } -moon_task = { path = "../../../nextgen/task" } -moon_task_hasher = { path = "../../../nextgen/task-hasher" } -moon_tool = { path = "../tool" } -moon_utils = { path = "../utils" } -moon_vcs = { path = "../../../nextgen/vcs" } -moon_workspace = { path = "../../../nextgen/workspace" } -console = { workspace = true } -miette = { workspace = true } -rustc-hash = { workspace = true } -serde = { workspace = true } -serde_json = { workspace = true } -starbase_archive = { workspace = true } -starbase_styles = { workspace = true } -starbase_utils = { workspace = true } -thiserror = { workspace = true } -tokio = { workspace = true } - -[dev-dependencies] -moon = { path = "../moon" } -moon_test_utils = { path = "../test-utils" } diff --git a/crates/core/runner/src/lib.rs b/crates/core/runner/src/lib.rs deleted file mode 100644 index 99ea4c7591c..00000000000 --- a/crates/core/runner/src/lib.rs +++ /dev/null @@ -1,7 +0,0 @@ -mod errors; -mod run_state; -mod runner; - -pub use errors::*; -pub use run_state::*; -pub use runner::*; diff --git a/crates/core/runner/src/run_state.rs b/crates/core/runner/src/run_state.rs deleted file mode 
100644 index 013843240ab..00000000000 --- a/crates/core/runner/src/run_state.rs +++ /dev/null @@ -1,146 +0,0 @@ -use moon_cache_item::{cache_item, get_cache_mode}; -use moon_common::path::WorkspaceRelativePathBuf; -use moon_logger::{map_list, warn}; -use starbase_archive::tar::{TarPacker, TarUnpacker}; -use starbase_archive::Archiver; -use starbase_styles::color; -use starbase_utils::{fs, glob}; -use std::path::{Path, PathBuf}; - -cache_item!( - pub struct RunTargetState { - pub exit_code: i32, - pub hash: String, - pub last_run_time: u128, - pub target: String, - } -); - -fn create_archive<'o>( - workspace_root: &'o Path, - archive_file: &'o Path, - output_paths: &[WorkspaceRelativePathBuf], -) -> Archiver<'o> { - let mut archive = Archiver::new(workspace_root, archive_file); - - // Outputs are relative from the workspace root - if !output_paths.is_empty() { - for output in output_paths { - if glob::is_glob(output) { - archive.add_source_glob(output.as_str()); - } else { - archive.add_source_file(output.as_str(), None); - } - } - } - - archive -} - -pub fn archive_outputs( - state_dir: &Path, - archive_file: &Path, - workspace_root: &Path, - output_paths: &[WorkspaceRelativePathBuf], -) -> miette::Result { - if get_cache_mode().is_writable() && !archive_file.exists() { - let mut archive = create_archive(workspace_root, archive_file, output_paths); - - // Also include stdout/stderr logs at the root of the tarball - let (stdout_path, stderr_path) = get_output_logs(state_dir); - - if stdout_path.exists() { - archive.add_source_file(stdout_path, Some("stdout.log")); - } - - if stderr_path.exists() { - archive.add_source_file(stderr_path, Some("stderr.log")); - } - - archive.pack(TarPacker::new_gz)?; - - return Ok(true); - } - - Ok(false) -} - -pub fn hydrate_outputs( - state_dir: &Path, - archive_file: &Path, - workspace_root: &Path, - output_paths: &[WorkspaceRelativePathBuf], -) -> miette::Result { - if get_cache_mode().is_readable() && archive_file.exists() { - let 
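The deleted `create_archive` helper chose between glob and literal sources per output path. A sketch of that branching; `is_glob` here is a naive stand-in for `starbase_utils::glob::is_glob`:

```rust
// Naive glob detection: real glob syntax is richer than this.
fn is_glob(path: &str) -> bool {
    path.contains('*') || path.contains('?') || path.contains('[')
}

enum Source {
    Glob(String),
    File(String),
}

// Outputs are relative from the workspace root; globs and files are
// registered with the archiver differently.
fn collect_sources(outputs: &[&str]) -> Vec<Source> {
    outputs
        .iter()
        .map(|o| {
            if is_glob(o) {
                Source::Glob(o.to_string())
            } else {
                Source::File(o.to_string())
            }
        })
        .collect()
}

fn main() {
    let sources = collect_sources(&["dist/**/*.js", "build/index.html"]);
    assert!(matches!(sources[0], Source::Glob(_)));
    assert!(matches!(sources[1], Source::File(_)));
}
```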
archive = create_archive(workspace_root, archive_file, output_paths); - let cache_logs = get_output_logs(state_dir); - let stdout_log = workspace_root.join("stdout.log"); - let stderr_log = workspace_root.join("stderr.log"); - - match archive.unpack(TarUnpacker::new_gz) { - Ok(_) => { - if stdout_log.exists() { - fs::rename(&stdout_log, cache_logs.0)?; - } - - if stderr_log.exists() { - fs::rename(&stderr_log, cache_logs.1)?; - } - } - Err(e) => { - warn!( - "Failed to hydrate outputs ({}) from cache: {}", - map_list(output_paths, |f| color::file(f)), - color::muted_light(e.to_string()) - ); - - // Delete target outputs to ensure a clean slate - for output in output_paths { - if !glob::is_glob(output) { - fs::remove(output.to_path(workspace_root))?; - } - } - - fs::remove(stdout_log)?; - fs::remove(stderr_log)?; - } - } - - return Ok(true); - } - - Ok(false) -} - -pub fn get_output_logs(state_dir: &Path) -> (PathBuf, PathBuf) { - (state_dir.join("stdout.log"), state_dir.join("stderr.log")) -} - -/// Load the stdout.log and stderr.log files from the cache directory. -pub fn load_output_logs(state_dir: &Path) -> miette::Result<(String, String)> { - let (stdout_path, stderr_path) = get_output_logs(state_dir); - - let stdout = if stdout_path.exists() { - fs::read_file(stdout_path)? - } else { - String::new() - }; - - let stderr = if stderr_path.exists() { - fs::read_file(stderr_path)? - } else { - String::new() - }; - - Ok((stdout, stderr)) -} - -/// Write stdout and stderr log files to the cache directory. 
-pub fn save_output_logs(state_dir: &Path, stdout: String, stderr: String) -> miette::Result<()> { - let (stdout_path, stderr_path) = get_output_logs(state_dir); - - fs::write_file(stdout_path, stdout)?; - fs::write_file(stderr_path, stderr)?; - - Ok(()) -} diff --git a/crates/core/runner/src/runner.rs b/crates/core/runner/src/runner.rs deleted file mode 100644 index b30e25ad4d7..00000000000 --- a/crates/core/runner/src/runner.rs +++ /dev/null @@ -1,1015 +0,0 @@ -use crate::errors::RunnerError; -use crate::run_state::{load_output_logs, save_output_logs, RunTargetState}; -use moon_action::{ActionNode, ActionStatus, Attempt}; -use moon_action_context::{ActionContext, TargetState}; -use moon_cache_item::CacheItem; -use moon_config::{TaskOptionAffectedFiles, TaskOutputStyle}; -use moon_console::{Checkpoint, Console}; -use moon_emitter::{Emitter, Event, EventFlow}; -use moon_hash::ContentHasher; -use moon_logger::{debug, warn}; -use moon_platform::PlatformManager; -use moon_platform_runtime::Runtime; -use moon_process::{args, output_to_error, output_to_string, Command, Output, Shell}; -use moon_project::Project; -use moon_target::{TargetError, TargetScope}; -use moon_task::Task; -use moon_task_hasher::TaskHasher; -use moon_tool::get_proto_env_vars; -use moon_utils::{is_ci, is_test_env, path, time}; -use moon_workspace::Workspace; -use rustc_hash::FxHashMap; -use starbase_styles::color; -use starbase_utils::glob; -use std::collections::BTreeMap; -use std::sync::Arc; -use tokio::{ - task, - time::{sleep, Duration}, -}; - -const LOG_TARGET: &str = "moon:runner"; - -#[cfg(unix)] -fn create_unix_shell(shell: &moon_config::TaskUnixShell) -> Shell { - use moon_config::TaskUnixShell; - - match shell { - TaskUnixShell::Bash => Shell::new("bash"), - TaskUnixShell::Elvish => Shell::new("elvish"), - TaskUnixShell::Fish => Shell::new("fish"), - TaskUnixShell::Zsh => Shell::new("zsh"), - } -} - -#[cfg(windows)] -fn create_windows_shell(shell: &moon_config::TaskWindowsShell) -> Shell 
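The `load_output_logs`/`save_output_logs` pair being deleted above round-trips stdout/stderr through the cache state directory. A std-only sketch of the same behavior (file names match the deleted code; error handling is simplified):

```rust
use std::fs;
use std::path::Path;

// Write stdout/stderr log files into the state directory.
fn save_output_logs(state_dir: &Path, stdout: &str, stderr: &str) -> std::io::Result<()> {
    fs::create_dir_all(state_dir)?;
    fs::write(state_dir.join("stdout.log"), stdout)?;
    fs::write(state_dir.join("stderr.log"), stderr)?;
    Ok(())
}

// Read the logs back, defaulting to empty strings when missing.
fn load_output_logs(state_dir: &Path) -> std::io::Result<(String, String)> {
    let read = |name: &str| {
        let path = state_dir.join(name);
        if path.exists() {
            fs::read_to_string(path)
        } else {
            Ok(String::new())
        }
    };

    Ok((read("stdout.log")?, read("stderr.log")?))
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("moon-log-sketch");
    save_output_logs(&dir, "hello", "")?;

    let (stdout, stderr) = load_output_logs(&dir)?;
    assert_eq!(stdout, "hello");
    assert_eq!(stderr, "");
    Ok(())
}
```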
{ - use moon_config::TaskWindowsShell; - - match shell { - TaskWindowsShell::Bash => Shell::new("bash"), - TaskWindowsShell::Pwsh => Shell::new("pwsh"), - } -} - -pub enum HydrateFrom { - LocalCache, - PreviousOutput, - RemoteCache, -} - -pub struct Runner<'a> { - pub cache: CacheItem, - - pub node: Arc, - - emitter: &'a Emitter, - - project: &'a Project, - - console: Arc, - - task: &'a Task, - - workspace: &'a Workspace, -} - -impl<'a> Runner<'a> { - pub fn new( - emitter: &'a Emitter, - workspace: &'a Workspace, - project: &'a Project, - task: &'a Task, - console: Arc, - ) -> miette::Result> { - let mut cache = workspace - .cache_engine - .state - .load_target_state::(&task.target)?; - - if cache.data.target.is_empty() { - cache.data.target = task.target.to_string(); - } - - Ok(Runner { - cache, - node: Arc::new(ActionNode::None), - emitter, - project, - console, - task, - workspace, - }) - } - - /// Cache outputs to the `.moon/cache/outputs` folder and to the cloud, - /// so that subsequent builds are faster, and any local outputs - /// can be hydrated easily. - pub async fn archive_outputs(&self) -> miette::Result<()> { - let hash = &self.cache.data.hash; - - if hash.is_empty() || !self.is_archivable()? { - return Ok(()); - } - - // Check that outputs actually exist - if !self.task.outputs.is_empty() && !self.has_outputs(false)? { - return Err(RunnerError::MissingOutput(self.task.target.id.clone()).into()); - } - - // If so, then cache the archive - if let EventFlow::Return(archive_path) = self - .emitter - .emit(Event::TargetOutputArchiving { - hash, - project: self.project, - target: &self.task.target, - task: self.task, - }) - .await? 
- { - self.emitter - .emit(Event::TargetOutputArchived { - archive_path: archive_path.into(), - hash, - project: self.project, - target: &self.task.target, - task: self.task, - }) - .await?; - } - - Ok(()) - } - - pub async fn hydrate(&self, from: HydrateFrom) -> miette::Result { - // Only hydrate when the hash is different from the previous build, - // as we can assume the outputs from the previous build still exist? - if matches!(from, HydrateFrom::LocalCache) || matches!(from, HydrateFrom::RemoteCache) { - self.hydrate_outputs().await?; - } - - let mut comments = vec![match from { - HydrateFrom::LocalCache => "cached", - HydrateFrom::RemoteCache => "cached from remote", - HydrateFrom::PreviousOutput => "cached from previous run", - } - .to_owned()]; - - if self.should_print_short_hash() { - comments.push(self.get_short_hash().to_owned()); - } - - self.print_checkpoint(Checkpoint::RunPassed, comments)?; - self.print_cache_item()?; - - Ok(if matches!(from, HydrateFrom::RemoteCache) { - ActionStatus::CachedFromRemote - } else { - ActionStatus::Cached - }) - } - - /// If we are cached (hash match), hydrate the project with the - /// cached task outputs found in the hashed archive. - pub async fn hydrate_outputs(&self) -> miette::Result<()> { - let hash = &self.cache.data.hash; - - if hash.is_empty() { - return Ok(()); - } - - // Hydrate outputs from the cache - if let EventFlow::Return(archive_path) = self - .emitter - .emit(Event::TargetOutputHydrating { - hash, - project: self.project, - target: &self.task.target, - task: self.task, - }) - .await? - { - self.emitter - .emit(Event::TargetOutputHydrated { - archive_path: archive_path.into(), - hash, - project: self.project, - target: &self.task.target, - task: self.task, - }) - .await?; - } - - // Update the run state with the new hash - self.cache.save()?; - - Ok(()) - } - - /// Create a hasher that is shared amongst all platforms. - /// Primarily includes task information. 
- pub async fn hash_common_target( - &self, - context: &ActionContext, - hasher: &mut ContentHasher, - ) -> miette::Result<()> { - let mut task_hasher = TaskHasher::new( - self.project, - self.task, - &self.workspace.vcs, - &self.workspace.root, - &self.workspace.config.hasher, - ); - - if context.should_inherit_args(&self.task.target) { - task_hasher.hash_args(&context.passthrough_args); - } - - let mut deps = BTreeMap::default(); - - // todo, move into hasher! - for dep in &self.task.deps { - let mut state = None; - - // TODO avoid cloning if possible - if let Some(entry) = context.target_states.get(&dep.target) { - state = match entry.get() { - TargetState::Completed(hash) => Some(hash.to_owned()), - TargetState::Passthrough => Some("passthrough".into()), - _ => None, - }; - } - - if state.is_none() { - return Err(RunnerError::MissingDependencyHash( - dep.target.id.to_owned(), - self.task.target.id.to_owned(), - ) - .into()); - } - - deps.insert(&dep.target, state.unwrap()); - } - - task_hasher.hash_deps(deps); - task_hasher.hash_inputs().await?; - - hasher.hash_content(task_hasher.hash())?; - - Ok(()) - } - - pub async fn create_command( - &self, - context: &ActionContext, - runtime: &Runtime, - ) -> miette::Result { - let workspace = &self.workspace; - let project = &self.project; - let task = &self.task; - let working_dir = if task.options.run_from_workspace_root { - &workspace.root - } else { - &project.root - }; - - debug!( - target: LOG_TARGET, - "Creating {} command (in working directory {})", - color::label(&task.target), - color::path(working_dir) - ); - - let mut command = PlatformManager::read() - .get(task.platform)? 
- .create_run_target_command(context, project, task, runtime, working_dir) - .await?; - - command - .cwd(working_dir) - .env("PWD", working_dir) - // We need to handle non-zero's manually - .set_error_on_nonzero(false); - - self.create_env_vars(&mut command).await?; - - // Wrap in a shell - if task.options.shell.is_none() || task.options.shell.is_some_and(|s| !s) { - command.without_shell(); - } else { - #[cfg(unix)] - if let Some(shell) = task.options.unix_shell.as_ref() { - command.with_shell(create_unix_shell(shell)); - } - - #[cfg(windows)] - if let Some(shell) = task.options.windows_shell.as_ref() { - command.with_shell(create_windows_shell(shell)); - } - } - - // Passthrough args - if context.should_inherit_args(&self.task.target) { - command.args(&context.passthrough_args); - } - - // Terminal colors - if self.workspace.config.runner.inherit_colors_for_piped_tasks { - command.inherit_colors(); - } - - // Dependency specific args/env - if let ActionNode::RunTask(inner) = &*self.node { - command.args(inner.args.clone()); - command.envs(inner.env.clone()); - } - - // Affected files (must be last args) - if let Some(check_affected) = &self.task.options.affected_files { - let mut files = if context.affected_only { - self.task - .get_affected_files(&context.touched_files, self.project.source.as_str())? - } else { - Vec::with_capacity(0) - }; - - if files.is_empty() && self.task.options.affected_pass_inputs { - files = self - .task - .get_input_files(&self.workspace.root)? - .into_iter() - .filter_map(|f| { - f.strip_prefix(&self.project.source) - .ok() - .map(ToOwned::to_owned) - }) - .collect(); - - if files.is_empty() { - warn!( - target: LOG_TARGET, - "No input files detected for {}, defaulting to '.'. 
This will be deprecated in a future version.", - color::label(&task.target), - ); - } - } - - files.sort(); - - if matches!( - check_affected, - TaskOptionAffectedFiles::Env | TaskOptionAffectedFiles::Enabled(true) - ) { - command.env( - "MOON_AFFECTED_FILES", - if files.is_empty() { - ".".into() - } else { - files - .iter() - .map(|f| f.as_str().to_string()) - .collect::>() - .join(",") - }, - ); - } - - if matches!( - check_affected, - TaskOptionAffectedFiles::Args | TaskOptionAffectedFiles::Enabled(true) - ) { - if files.is_empty() { - command.arg_if_missing("."); - } else { - // Mimic relative from ("./") - command.args(files.iter().map(|f| format!("./{f}"))); - } - } - } - - Ok(command) - } - - pub async fn create_env_vars(&self, command: &mut Command) -> miette::Result<()> { - let mut env_vars = FxHashMap::default(); - - env_vars.insert( - "MOON_CACHE_DIR".to_owned(), - path::to_string(&self.workspace.cache_engine.cache_dir)?, - ); - env_vars.insert("MOON_PROJECT_ID".to_owned(), self.project.id.to_string()); - env_vars.insert( - "MOON_PROJECT_ROOT".to_owned(), - path::to_string(&self.project.root)?, - ); - env_vars.insert( - "MOON_PROJECT_SOURCE".to_owned(), - self.project.source.to_string(), - ); - env_vars.insert("MOON_TARGET".to_owned(), self.task.target.id.to_string()); - env_vars.insert( - "MOON_WORKSPACE_ROOT".to_owned(), - path::to_string(&self.workspace.root)?, - ); - env_vars.insert( - "MOON_WORKING_DIR".to_owned(), - path::to_string(&self.workspace.working_dir)?, - ); - env_vars.insert( - "MOON_PROJECT_SNAPSHOT".to_owned(), - path::to_string( - self.workspace - .cache_engine - .state - .get_project_snapshot_path(&self.project.id), - )?, - ); - - command.envs(env_vars); - command.envs(get_proto_env_vars()); - - // Pin versions for each tool in the toolchain - if let Some(bun_config) = &self.workspace.toolchain_config.bun { - if let Some(version) = &bun_config.version { - command.env_if_missing("PROTO_BUN_VERSION", version.to_string()); - } - } - - if 
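The affected-files handling above reduces to two rules: the env var falls back to `"."` when the list is empty and is otherwise comma-joined, while args mimic relative `./` paths. A sketch of both, detached from moon's `Command` type:

```rust
// Build the MOON_AFFECTED_FILES env value: "." when empty, else comma-joined.
fn affected_files_env(mut files: Vec<String>) -> String {
    files.sort();

    if files.is_empty() {
        ".".into()
    } else {
        files.join(",")
    }
}

// Build trailing args: "." when empty, else relative-from ("./") paths.
fn affected_files_args(files: &[String]) -> Vec<String> {
    if files.is_empty() {
        vec![".".into()]
    } else {
        files.iter().map(|f| format!("./{f}")).collect()
    }
}

fn main() {
    assert_eq!(affected_files_env(vec![]), ".");
    assert_eq!(
        affected_files_env(vec!["b.ts".into(), "a.ts".into()]),
        "a.ts,b.ts"
    );
    assert_eq!(
        affected_files_args(&["src/a.ts".to_string()]),
        vec!["./src/a.ts"]
    );
}
```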
let Some(deno_config) = &self.workspace.toolchain_config.deno { - if let Some(version) = &deno_config.version { - command.env_if_missing("PROTO_DENO_VERSION", version.to_string()); - } - } - - if let Some(node_config) = &self.workspace.toolchain_config.node { - if let Some(version) = &node_config.version { - command.env_if_missing("PROTO_NODE_VERSION", version.to_string()); - } - - if let Some(version) = &node_config.npm.version { - command.env_if_missing("PROTO_NPM_VERSION", version.to_string()); - } - - if let Some(pnpm_config) = &node_config.pnpm { - if let Some(version) = &pnpm_config.version { - command.env_if_missing("PROTO_PNPM_VERSION", version.to_string()); - } - } - - if let Some(yarn_config) = &node_config.yarn { - if let Some(version) = &yarn_config.version { - command.env_if_missing("PROTO_YARN_VERSION", version.to_string()); - } - } - - if let Some(bunpm_config) = &node_config.bun { - if let Some(version) = &bunpm_config.version { - command.env_if_missing("PROTO_BUN_VERSION", version.to_string()); - } - } - } - - Ok(()) - } - - pub fn get_short_hash(&self) -> &str { - if self.cache.data.hash.is_empty() { - "" // Empty when cache is disabled - } else { - &self.cache.data.hash[0..8] - } - } - - pub fn has_outputs(&self, bypass_globs: bool) -> miette::Result { - // If using globs, we have no way to truly determine if all outputs - // exist on the current file system, so always hydrate... - if bypass_globs && !self.task.output_globs.is_empty() { - return Ok(false); - } - - // Check paths first since they are literal - for output in &self.task.output_files { - if !output.to_path(&self.workspace.root).exists() { - return Ok(false); - } - } - - // Check globs last, as they are costly - if !self.task.output_globs.is_empty() { - let outputs = glob::walk_files(&self.workspace.root, &self.task.output_globs)?; - - return Ok(!outputs.is_empty()); - } - - Ok(true) - } - - /// Determine if the current task can be archived. 
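The `PROTO_*` pinning above leans on `env_if_missing`: a value set earlier (e.g. by the user's environment) wins over the toolchain default. A sketch of that semantic with a stand-in `Command` holding a plain map:

```rust
use std::collections::HashMap;

struct Command {
    env: HashMap<String, String>,
}

impl Command {
    // Only insert when the key is not already set.
    fn env_if_missing(&mut self, key: &str, value: &str) {
        self.env
            .entry(key.to_string())
            .or_insert_with(|| value.to_string());
    }
}

fn main() {
    let mut cmd = Command { env: HashMap::new() };
    cmd.env.insert("PROTO_NODE_VERSION".into(), "20.0.0".into());

    // The toolchain config tries to pin versions, but never overrides.
    cmd.env_if_missing("PROTO_NODE_VERSION", "18.0.0");
    cmd.env_if_missing("PROTO_NPM_VERSION", "10.0.0");

    assert_eq!(cmd.env["PROTO_NODE_VERSION"], "20.0.0");
    assert_eq!(cmd.env["PROTO_NPM_VERSION"], "10.0.0");
}
```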
- pub fn is_archivable(&self) -> miette::Result { - let task = self.task; - - if task.is_build_type() { - return Ok(true); - } - - for target in &self.workspace.config.runner.archivable_targets { - let is_matching_task = task.target.task_id == target.task_id; - - match &target.scope { - TargetScope::All => { - if is_matching_task { - return Ok(true); - } - } - TargetScope::Project(project_locator) => { - if let Some(owner_id) = task.target.get_project_id() { - if owner_id == project_locator && is_matching_task { - return Ok(true); - } - } - } - TargetScope::Tag(tag_id) => { - if self.project.config.tags.contains(tag_id) && is_matching_task { - return Ok(true); - } - } - TargetScope::Deps => return Err(TargetError::NoDepsInRunContext.into()), - TargetScope::OwnSelf => return Err(TargetError::NoSelfInRunContext.into()), - }; - } - - Ok(false) - } - - /// Hash the target based on all current parameters and return early - /// if this target hash has already been cached. Based on the state - /// of the target and project, determine the hydration strategy as well. - pub async fn is_cached( - &mut self, - context: &ActionContext, - runtime: &Runtime, - ) -> miette::Result> { - let mut hasher = self - .workspace - .cache_engine - .hash - .create_hasher(format!("Run {} target", self.task.target)); - - self.hash_common_target(context, &mut hasher).await?; - - PlatformManager::read() - .get(self.task.platform)? - .hash_run_target( - self.project, - runtime, - &mut hasher, - &self.workspace.config.hasher, - ) - .await?; - - let hash = hasher.generate_hash()?; - - debug!( - target: LOG_TARGET, - "Generated hash {} for target {}", - color::hash(&hash), - color::id(&self.task.target) - ); - - context.set_target_state(&self.task.target, TargetState::Completed(hash.clone())); - - // Hash is the same as the previous build, so simply abort! 
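The archivability check above matches the task against configured `archivable_targets` scopes. A simplified sketch (scope variants reduced to two; the real logic also handles tags, build-type tasks, and rejects `Deps`/`OwnSelf` scopes):

```rust
enum TargetScope {
    All,
    Project(String),
}

struct ArchivableTarget {
    scope: TargetScope,
    task_id: String,
}

// A task is archivable when any configured target matches its id and scope.
fn is_archivable(project_id: &str, task_id: &str, targets: &[ArchivableTarget]) -> bool {
    targets.iter().any(|target| {
        let matching_task = target.task_id == task_id;

        match &target.scope {
            TargetScope::All => matching_task,
            TargetScope::Project(id) => id == project_id && matching_task,
        }
    })
}

fn main() {
    let targets = vec![
        ArchivableTarget { scope: TargetScope::All, task_id: "build".into() },
        ArchivableTarget {
            scope: TargetScope::Project("app".into()),
            task_id: "test".into(),
        },
    ];

    assert!(is_archivable("web", "build", &targets));
    assert!(is_archivable("app", "test", &targets));
    assert!(!is_archivable("web", "test", &targets));
}
```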
- // However, ensure the outputs also exist, otherwise we should hydrate - if self.cache.data.exit_code == 0 - && self.cache.data.hash == hash - && self.has_outputs(true)? - { - debug!( - target: LOG_TARGET, - "Cache hit for hash {}, reusing previous build", - color::hash(&hash), - ); - - return Ok(Some(HydrateFrom::PreviousOutput)); - } - - self.cache.data.hash = hash.clone(); - - // Refresh the hash manifest - self.workspace.cache_engine.hash.save_manifest(hasher)?; - - // Check if that hash exists in the cache - if let EventFlow::Return(value) = self - .emitter - .emit(Event::TargetOutputCacheCheck { - hash: &hash, - target: &self.task.target, - }) - .await? - { - match value.as_ref() { - "local-cache" => { - debug!( - target: LOG_TARGET, - "Cache hit for hash {}, hydrating from local cache", - color::hash(&hash), - ); - - return Ok(Some(HydrateFrom::LocalCache)); - } - "remote-cache" => { - debug!( - target: LOG_TARGET, - "Cache hit for hash {}, hydrating from remote cache", - color::hash(&hash), - ); - - return Ok(Some(HydrateFrom::RemoteCache)); - } - _ => {} - } - } - - debug!( - target: LOG_TARGET, - "Cache miss for hash {}, continuing run", - color::hash(&hash), - ); - - Ok(None) - } - - /// Run the command as a child process and capture its output. If the process fails - /// and `retry_count` is greater than 0, attempt the process again in case it passes. 
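The hydration decision in the removed `is_cached` follows a fixed order: previous on-disk outputs first, then the cache-check event's answer. A minimal sketch of that ordering (the function name and flattened parameters are illustrative; the real code works against the cache engine and an async event emitter):

```rust
#[derive(Debug, PartialEq)]
pub enum HydrateFrom {
    PreviousOutput,
    LocalCache,
    RemoteCache,
}

// Mirrors the decision order above: reuse outputs already on disk when
// the previous run passed with the same hash, otherwise fall back to a
// local or remote hit reported by the TargetOutputCacheCheck event.
pub fn resolve_cache_hit(
    prev_exit_code: i32,
    prev_hash: &str,
    new_hash: &str,
    outputs_exist: bool,
    event_flow: Option<&str>, // "local-cache" | "remote-cache"
) -> Option<HydrateFrom> {
    if prev_exit_code == 0 && prev_hash == new_hash && outputs_exist {
        return Some(HydrateFrom::PreviousOutput);
    }

    match event_flow {
        Some("local-cache") => Some(HydrateFrom::LocalCache),
        Some("remote-cache") => Some(HydrateFrom::RemoteCache),
        _ => None, // cache miss: continue the run
    }
}
```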
- pub async fn run_command( - &mut self, - context: &ActionContext, - command: &mut Command, - ) -> miette::Result> { - let attempt_total = self.task.options.retry_count + 1; - let mut attempt_index = 1; - let mut attempts = vec![]; - let primary_longest_width = context.primary_targets.iter().map(|t| t.id.len()).max(); - let is_primary = context.primary_targets.contains(&self.task.target); - let is_real_ci = is_ci() && !is_test_env(); - let is_persistent = self.node.is_persistent() || self.task.is_persistent(); - let output; - let error; - - // When a task is configured as local (no caching), or the interactive flag is passed, - // we don't "capture" stdout/stderr (which breaks stdin) and let it stream natively. - let is_interactive = (!self.task.options.cache && context.primary_targets.len() == 1) - || self.node.is_interactive() - || self.task.is_interactive(); - - // When the primary target, always stream the output for a better developer experience. - // However, transitive targets can opt into streaming as well. - let should_stream_output = if let Some(output_style) = &self.task.options.output_style { - matches!(output_style, TaskOutputStyle::Stream) - } else { - is_primary || is_real_ci - }; - - // Transitive targets may run concurrently, so differentiate them with a prefix. 
- let stream_prefix = if is_real_ci || !is_primary || context.primary_targets.len() > 1 { - Some(&self.task.target.id) - } else { - None - }; - - // For long-running process, log a message every 30 seconds to indicate it's still running - let console_clone = self.console.clone(); - let interval_target = self.task.target.clone(); - let interval_handle = task::spawn(async move { - if is_persistent || is_interactive { - return; - } - - let mut secs = 0; - - loop { - sleep(Duration::from_secs(30)).await; - secs += 30; - - let _ = console_clone.out.print_checkpoint_with_comments( - Checkpoint::RunStarted, - &interval_target, - [format!("running for {}s", secs)], - ); - } - }); - - command.with_console(self.console.clone()); - - loop { - let mut attempt = Attempt::new(attempt_index); - - self.print_target_label(Checkpoint::RunStarted, &attempt, attempt_total)?; - self.print_target_command(context, command)?; - - let possible_output = if should_stream_output { - if let Some(prefix) = stream_prefix { - command.set_prefix(prefix, primary_longest_width); - } - - if is_interactive { - command.create_async().exec_stream_output().await - } else { - command - .create_async() - .exec_stream_and_capture_output() - .await - } - } else { - command.create_async().exec_capture_output().await - }; - - match possible_output { - // zero and non-zero exit codes - Ok(out) => { - attempt.finish(if out.status.success() { - ActionStatus::Passed - } else { - ActionStatus::Failed - }); - - if should_stream_output { - self.handle_streamed_output(&mut attempt, attempt_total, &out)?; - } else { - self.handle_captured_output(&mut attempt, attempt_total, &out)?; - } - - attempts.push(attempt); - - if out.status.success() { - error = None; - output = out; - - break; - } else if attempt_index >= attempt_total { - error = Some(RunnerError::RunFailed { - target: self.task.target.id.clone(), - query: format!( - "moon query hash {}", - if is_test_env() { - "hash1234" - } else { - self.get_short_hash() - } 
- ), - error: output_to_error(self.task.command.clone(), &out, false), - }); - output = out; - - break; - } else { - attempt_index += 1; - - warn!( - target: LOG_TARGET, - "Target {} failed, running again with attempt {} (exit code {})", - color::label(&self.task.target), - attempt_index, - out.status.code().unwrap_or(-1) - ); - } - } - // process itself failed - Err(error) => { - attempt.finish(ActionStatus::Failed); - attempts.push(attempt); - - interval_handle.abort(); - - return Err(error); - } - } - } - - interval_handle.abort(); - - // Write the cache with the result and output - self.cache.data.exit_code = output.status.code().unwrap_or(0); - - save_output_logs( - self.cache.get_dir(), - output_to_string(&output.stdout), - output_to_string(&output.stderr), - )?; - - if let Some(error) = error { - return Err(error.into()); - } - - Ok(attempts) - } - - pub async fn create_and_run_command( - &mut self, - context: &ActionContext, - runtime: &Runtime, - ) -> miette::Result> { - let result = if self.task.is_no_op() { - debug!( - target: LOG_TARGET, - "Target {} is a no operation, skipping", - color::label(&self.task.target), - ); - - self.print_target_label(Checkpoint::RunPassed, &Attempt::new(0), 0)?; - - Ok(vec![]) - } else { - let mut command = self.create_command(context, runtime).await?; - - self.run_command(context, &mut command).await - }; - - self.cache.data.last_run_time = time::now_millis(); - self.cache.save()?; - - result - } - - pub fn print_cache_item(&self) -> miette::Result<()> { - let item = &self.cache; - let (stdout, stderr) = load_output_logs(item.get_dir())?; - - self.print_output_with_style(&stdout, &stderr, item.data.exit_code != 0)?; - - Ok(()) - } - - pub fn print_checkpoint>( - &self, - checkpoint: Checkpoint, - comments: C, - ) -> miette::Result<()> { - self.console - .out - .print_checkpoint_with_comments(checkpoint, &self.task.target, comments)?; - - Ok(()) - } - - pub fn print_output_with_style( - &self, - stdout: &str, - stderr: 
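The retry loop above boils down to `retry_count` extra attempts on top of the first run, stopping at the first success. A condensed sketch of just that control flow (closure-based for testability; the real loop drives a child process and records `Attempt` state):

```rust
// Runs `run` up to retry_count + 1 times, returning which attempt
// passed, or Err with the total attempt count if all failed.
pub fn run_with_retries<F>(retry_count: u8, mut run: F) -> Result<u8, u8>
where
    F: FnMut(u8) -> bool, // attempt index -> passed?
{
    let attempt_total = retry_count + 1;

    for attempt_index in 1..=attempt_total {
        if run(attempt_index) {
            return Ok(attempt_index);
        }
        // otherwise: log a warning and try again, as in the loop above
    }

    Err(attempt_total)
}
```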
&str, - failed: bool, - ) -> miette::Result<()> { - let print_stdout = || -> miette::Result<()> { self.console.out.write_line(stdout) }; - let print_stderr = || -> miette::Result<()> { self.console.err.write_line(stderr) }; - - match self.task.options.output_style { - // Only show output on failure - Some(TaskOutputStyle::BufferOnlyFailure) => { - if failed { - print_stdout()?; - print_stderr()?; - } - } - // Only show the hash - Some(TaskOutputStyle::Hash) => { - let hash = &self.cache.data.hash; - - if !hash.is_empty() { - // Print to stderr so it can be captured - self.console.err.write_line(hash)?; - } - } - // Show nothing - Some(TaskOutputStyle::None) => {} - // Show output on both success and failure - _ => { - print_stdout()?; - print_stderr()?; - } - }; - - Ok(()) - } - - pub fn print_target_command( - &self, - context: &ActionContext, - command: &Command, - ) -> miette::Result<()> { - if !self.workspace.config.runner.log_running_command { - return Ok(()); - } - - let task = &self.task; - let mut args = vec![&task.command]; - args.extend(&task.args); - - if context.should_inherit_args(&task.target) { - args.extend(&context.passthrough_args); - } - - let command_line = args::join_args(args); - - let message = color::muted_light(command.inspect().format_command( - &command_line, - &self.workspace.root, - Some(if task.options.run_from_workspace_root { - &self.workspace.root - } else { - &self.project.root - }), - )); - - self.console.out.write_line(message)?; - - Ok(()) - } - - pub fn print_target_label( - &self, - checkpoint: Checkpoint, - attempt: &Attempt, - attempt_total: u8, - ) -> miette::Result<()> { - let mut comments = vec![]; - - if self.task.is_no_op() { - comments.push("no op".to_owned()); - } else if attempt.index > 1 { - comments.push(format!("{}/{}", attempt.index, attempt_total)); - } - - if let Some(duration) = attempt.duration { - comments.push(time::elapsed(duration)); - } - - if self.should_print_short_hash() && 
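The `print_output_with_style` match above can be summarized as a pure decision table (a sketch with an illustrative signature; the real method writes directly to the console):

```rust
// Simplified stand-in for moon's TaskOutputStyle enum.
pub enum TaskOutputStyle {
    Buffer,
    BufferOnlyFailure,
    Hash,
    None,
    Stream,
}

// Returns (print_stdout_and_stderr, print_hash) for a finished task.
pub fn output_actions(
    style: Option<&TaskOutputStyle>,
    failed: bool,
    has_hash: bool,
) -> (bool, bool) {
    match style {
        // Only show output on failure
        Some(TaskOutputStyle::BufferOnlyFailure) => (failed, false),
        // Only show the hash (printed to stderr so it can be captured)
        Some(TaskOutputStyle::Hash) => (false, has_hash),
        // Show nothing
        Some(TaskOutputStyle::None) => (false, false),
        // Show output on both success and failure
        _ => (true, false),
    }
}
```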
attempt.finished_at.is_some() { - comments.push(self.get_short_hash().to_owned()); - } - - self.print_checkpoint(checkpoint, comments)?; - - Ok(()) - } - - // Print label *after* output has been captured, so parallel tasks - // aren't intertwined and the labels align with the output. - fn handle_captured_output( - &self, - attempt: &mut Attempt, - attempt_total: u8, - output: &Output, - ) -> miette::Result<()> { - self.print_target_label( - if output.status.success() { - Checkpoint::RunPassed - } else { - Checkpoint::RunFailed - }, - attempt, - attempt_total, - )?; - - let stdout = output_to_string(&output.stdout); - let stderr = output_to_string(&output.stderr); - - self.print_output_with_style(&stdout, &stderr, !output.status.success())?; - - attempt.exit_code = output.status.code(); - attempt.stdout = Some(stdout); - attempt.stderr = Some(stderr); - - Ok(()) - } - - // Only print the label when the process has failed, - // as the actual output has already been streamed to the console. 
- fn handle_streamed_output( - &self, - attempt: &mut Attempt, - attempt_total: u8, - output: &Output, - ) -> miette::Result<()> { - self.print_target_label( - if output.status.success() { - Checkpoint::RunPassed - } else { - Checkpoint::RunFailed - }, - attempt, - attempt_total, - )?; - - attempt.exit_code = output.status.code(); - attempt.stdout = Some(output_to_string(&output.stdout)); - attempt.stderr = Some(output_to_string(&output.stderr)); - - Ok(()) - } - - fn should_print_short_hash(&self) -> bool { - // Do not include the hash while testing, as the hash - // constantly changes and breaks our local snapshots - !is_test_env() && self.task.options.cache && !self.cache.data.hash.is_empty() - } -} diff --git a/crates/core/runner/tests/runner_test.rs b/crates/core/runner/tests/runner_test.rs deleted file mode 100644 index ef5f149b407..00000000000 --- a/crates/core/runner/tests/runner_test.rs +++ /dev/null @@ -1,96 +0,0 @@ -use std::sync::Arc; - -use moon::{generate_project_graph, load_workspace_from_sandbox}; -use moon_action_context::ActionContext; -use moon_config::PlatformType; -use moon_console::Console; -use moon_emitter::Emitter; -use moon_platform_runtime::{Runtime, RuntimeReq}; -use moon_runner::Runner; -use moon_test_utils::{create_sandbox_with_config, get_cases_fixture_configs, Sandbox}; -use rustc_hash::FxHashSet; - -fn cases_sandbox() -> Sandbox { - let (workspace_config, toolchain_config, tasks_config) = get_cases_fixture_configs(); - - create_sandbox_with_config( - "cases", - Some(workspace_config), - Some(toolchain_config), - Some(tasks_config), - ) -} - -#[tokio::test] -async fn all_inputs_when_no_files_affected() { - let sandbox = cases_sandbox(); - sandbox.enable_git(); - - let mut workspace = load_workspace_from_sandbox(sandbox.path()).await.unwrap(); - - let project_graph = generate_project_graph(&mut workspace).await.unwrap(); - - let project = project_graph.get("noAffected").unwrap(); - let task = project.get_task("primary").unwrap(); - 
let emitter = Emitter::new(Arc::new(workspace.clone())); - - let runner = Runner::new( - &emitter, - &workspace, - &project, - task, - Arc::new(Console::new_testing()), - ) - .unwrap(); - - let cmd = runner - .create_command( - &ActionContext { - affected_only: true, - touched_files: FxHashSet::from_iter([]), - ..Default::default() - }, - &Runtime::new(PlatformType::Node, RuntimeReq::Global), - ) - .await - .unwrap(); - - assert_eq!(cmd.args, vec!["./affected.js", "./file.txt"]); -} - -#[tokio::test] -async fn dot_if_no_input_files() { - let sandbox = cases_sandbox(); - sandbox.enable_git(); - - let mut workspace = load_workspace_from_sandbox(sandbox.path()).await.unwrap(); - - let project_graph = generate_project_graph(&mut workspace).await.unwrap(); - - let project = project_graph.get("noAffected").unwrap(); - let task = project.get_task("misconfigured").unwrap(); - let emitter = Emitter::new(Arc::new(workspace.clone())); - - let runner = Runner::new( - &emitter, - &workspace, - &project, - task, - Arc::new(Console::new_testing()), - ) - .unwrap(); - - let cmd = runner - .create_command( - &ActionContext { - affected_only: true, - touched_files: FxHashSet::from_iter([]), - ..Default::default() - }, - &Runtime::new(PlatformType::Node, RuntimeReq::Global), - ) - .await - .unwrap(); - - assert_eq!(cmd.args, vec!["./affected.js", "."]); -} diff --git a/crates/core/test-utils/Cargo.toml b/crates/core/test-utils/Cargo.toml index 632ab342ea9..123da21284a 100644 --- a/crates/core/test-utils/Cargo.toml +++ b/crates/core/test-utils/Cargo.toml @@ -18,3 +18,6 @@ serde_yaml = { workspace = true } starbase_utils = { workspace = true } # symlink = "0.1.0" tokio = { workspace = true, features = ["full", "test-util"] } + +[lints] +workspace = true diff --git a/crates/core/test-utils/src/cli.rs b/crates/core/test-utils/src/cli.rs index 17e412268de..0463209fc94 100644 --- a/crates/core/test-utils/src/cli.rs +++ b/crates/core/test-utils/src/cli.rs @@ -87,8 +87,8 @@ impl<'s> 
SandboxAssert<'s> { let output = self.inner.get_output(); - println!("STDOUT:\n{}\n", output_to_string(&output.stdout)); println!("STDERR:\n{}\n", output_to_string(&output.stderr)); + println!("STDOUT:\n{}\n", output_to_string(&output.stdout)); println!("STATUS:\n{:#?}", output.status); self diff --git a/crates/core/tool/Cargo.toml b/crates/core/tool/Cargo.toml index d164dbb5db5..66f3db3e0b2 100644 --- a/crates/core/tool/Cargo.toml +++ b/crates/core/tool/Cargo.toml @@ -16,3 +16,6 @@ rustc-hash = { workspace = true } starbase_styles = { workspace = true } thiserror = { workspace = true } warpgate = { workspace = true } + +[lints] +workspace = true diff --git a/crates/core/utils/Cargo.toml b/crates/core/utils/Cargo.toml index c617b319dd5..64d4688ca5c 100644 --- a/crates/core/utils/Cargo.toml +++ b/crates/core/utils/Cargo.toml @@ -28,3 +28,6 @@ tokio = { workspace = true } [dev-dependencies] moon_test_utils = { path = "../test-utils" } + +[lints] +workspace = true diff --git a/crates/deno/lang/Cargo.toml b/crates/deno/lang/Cargo.toml index 390f83fdeea..d87358d8e9b 100644 --- a/crates/deno/lang/Cargo.toml +++ b/crates/deno/lang/Cargo.toml @@ -16,3 +16,6 @@ serde = { workspace = true } serde_json = { workspace = true, features = ["preserve_order"] } starbase_styles = { workspace = true } starbase_utils = { workspace = true } + +[lints] +workspace = true diff --git a/crates/deno/platform/Cargo.toml b/crates/deno/platform/Cargo.toml index eb9930439bf..369caf8a601 100644 --- a/crates/deno/platform/Cargo.toml +++ b/crates/deno/platform/Cargo.toml @@ -27,3 +27,6 @@ rustc-hash = { workspace = true } serde = { workspace = true } serde_json = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/crates/deno/tool/Cargo.toml b/crates/deno/tool/Cargo.toml index 858119ea6a8..c93d86740e8 100644 --- a/crates/deno/tool/Cargo.toml +++ b/crates/deno/tool/Cargo.toml @@ -17,3 +17,6 @@ proto_core = { workspace = true } rustc-hash = { workspace = true } 
starbase_utils = { workspace = true } tracing = { workspace = true } + +[lints] +workspace = true diff --git a/crates/javascript/platform/Cargo.toml b/crates/javascript/platform/Cargo.toml index 4ae120dd048..12dbbc0e10a 100644 --- a/crates/javascript/platform/Cargo.toml +++ b/crates/javascript/platform/Cargo.toml @@ -21,3 +21,6 @@ serde = { workspace = true } starbase_styles = { workspace = true } starbase_utils = { workspace = true } tracing = { workspace = true } + +[lints] +workspace = true diff --git a/crates/node/lang/Cargo.toml b/crates/node/lang/Cargo.toml index 29b0b0edbc3..4ee77a73d65 100644 --- a/crates/node/lang/Cargo.toml +++ b/crates/node/lang/Cargo.toml @@ -26,3 +26,6 @@ yarn-lock-parser = "0.7.0" [dev-dependencies] moon_test_utils = { path = "../../core/test-utils" } reqwest = { workspace = true, features = ["blocking"] } + +[lints] +workspace = true diff --git a/crates/node/platform/Cargo.toml b/crates/node/platform/Cargo.toml index 11134cbe909..cdb816d388f 100644 --- a/crates/node/platform/Cargo.toml +++ b/crates/node/platform/Cargo.toml @@ -40,3 +40,6 @@ tokio = { workspace = true } moon = { path = "../../core/moon" } moon_project_graph = { path = "../../../nextgen/project-graph" } moon_test_utils = { path = "../../core/test-utils" } + +[lints] +workspace = true diff --git a/crates/node/tool/Cargo.toml b/crates/node/tool/Cargo.toml index 88792ea22d1..13479f84de2 100644 --- a/crates/node/tool/Cargo.toml +++ b/crates/node/tool/Cargo.toml @@ -18,3 +18,6 @@ proto_core = { workspace = true } rustc-hash = { workspace = true } starbase_styles = { workspace = true } starbase_utils = { workspace = true } + +[lints] +workspace = true diff --git a/crates/rust/lang/Cargo.toml b/crates/rust/lang/Cargo.toml index d5f4b9722d8..2b6fe58fe97 100644 --- a/crates/rust/lang/Cargo.toml +++ b/crates/rust/lang/Cargo.toml @@ -18,3 +18,6 @@ starbase_utils = { workspace = true } [dev-dependencies] moon_test_utils = { path = "../../core/test-utils" } + +[lints] +workspace = 
true diff --git a/crates/rust/platform/Cargo.toml b/crates/rust/platform/Cargo.toml index 6ea3c13e611..e94c17c9879 100644 --- a/crates/rust/platform/Cargo.toml +++ b/crates/rust/platform/Cargo.toml @@ -33,3 +33,6 @@ tokio = { workspace = true } moon = { path = "../../core/moon" } moon_project_graph = { path = "../../../nextgen/project-graph" } moon_test_utils = { path = "../../core/test-utils" } + +[lints] +workspace = true diff --git a/crates/rust/tool/Cargo.toml b/crates/rust/tool/Cargo.toml index 22da9cd8ca9..ac8c0d4686a 100644 --- a/crates/rust/tool/Cargo.toml +++ b/crates/rust/tool/Cargo.toml @@ -15,3 +15,6 @@ moon_utils = { path = "../../core/utils" } miette = { workspace = true } proto_core = { workspace = true } rustc-hash = { workspace = true } + +[lints] +workspace = true diff --git a/crates/system/platform/Cargo.toml b/crates/system/platform/Cargo.toml index 9c977509263..166c6110d5d 100644 --- a/crates/system/platform/Cargo.toml +++ b/crates/system/platform/Cargo.toml @@ -19,3 +19,6 @@ miette = { workspace = true } proto_core = { workspace = true } serde = { workspace = true } serde_json = { workspace = true } + +[lints] +workspace = true diff --git a/crates/typescript/lang/Cargo.toml b/crates/typescript/lang/Cargo.toml index 592bec19f88..5127530eaf6 100644 --- a/crates/typescript/lang/Cargo.toml +++ b/crates/typescript/lang/Cargo.toml @@ -18,3 +18,6 @@ typescript_tsconfig_json = { version = "0.1.4", features = ["serialize"] } [dev-dependencies] moon_test_utils = { path = "../../core/test-utils" } + +[lints] +workspace = true diff --git a/crates/typescript/platform/Cargo.toml b/crates/typescript/platform/Cargo.toml index 41e2a44239c..be3b4340624 100644 --- a/crates/typescript/platform/Cargo.toml +++ b/crates/typescript/platform/Cargo.toml @@ -23,3 +23,6 @@ tracing = { workspace = true } [dev-dependencies] moon_common = { path = "../../../nextgen/common" } moon_test_utils = { path = "../../core/test-utils" } + +[lints] +workspace = true diff --git 
a/nextgen/action-context/Cargo.toml b/nextgen/action-context/Cargo.toml index 7d6097b7e35..54fc860224e 100644 --- a/nextgen/action-context/Cargo.toml +++ b/nextgen/action-context/Cargo.toml @@ -13,3 +13,6 @@ rustc-hash = { workspace = true } scc = { workspace = true, features = ["serde"] } serde = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/action-context/src/lib.rs b/nextgen/action-context/src/lib.rs index c73af1961ca..3e2ea03f918 100644 --- a/nextgen/action-context/src/lib.rs +++ b/nextgen/action-context/src/lib.rs @@ -14,18 +14,18 @@ pub enum ProfileType { Heap, } -#[derive(Clone, Debug, Deserialize, Serialize)] +#[derive(Clone, Debug, Deserialize, PartialEq, Serialize)] #[serde(tag = "state", content = "hash", rename_all = "lowercase")] pub enum TargetState { - Completed(String), + Passed(String), // hash + Passthrough, // no hash (cache off) Failed, Skipped, - Passthrough, } impl TargetState { pub fn is_complete(&self) -> bool { - matches!(self, TargetState::Completed(_) | TargetState::Passthrough) + matches!(self, TargetState::Passed(_) | TargetState::Passthrough) } } @@ -78,6 +78,10 @@ impl ActionContext { mutex } + pub fn is_primary_target>(&self, target: T) -> bool { + self.primary_targets.contains(target.as_ref()) + } + pub fn set_target_state>(&self, target: T, state: TargetState) { let _ = self.target_states.insert(target.as_ref().to_owned(), state); } @@ -96,8 +100,14 @@ impl ActionContext { // :task == scope:task for locator in &self.initial_targets { - if target.is_all_task(locator.as_str()) { - return true; + // if target.is_all_task(locator.as_str()) { + // return true; + // } + + if let TargetLocator::Qualified(inner) = locator { + if inner.is_all_task(&target.task_id) { + return true; + } } } diff --git a/nextgen/action-graph/Cargo.toml b/nextgen/action-graph/Cargo.toml index f4976d79997..19cd61d8d4e 100644 --- a/nextgen/action-graph/Cargo.toml +++ b/nextgen/action-graph/Cargo.toml @@ -31,3 
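The `TargetState` rename above (`Completed` → `Passed`, with `Passthrough` covering the cache-off case) leaves `is_complete` matching either variant. A self-contained sketch of the resulting enum, minus the serde attributes:

```rust
#[derive(Clone, Debug, PartialEq)]
pub enum TargetState {
    Passed(String), // hash
    Passthrough,    // no hash (cache off)
    Failed,
    Skipped,
}

impl TargetState {
    // "Complete" means the target ran to the end, with or without a hash.
    pub fn is_complete(&self) -> bool {
        matches!(self, TargetState::Passed(_) | TargetState::Passthrough)
    }
}
```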
+31,6 @@ tracing = { workspace = true } moon_config = { path = "../config" } moon_test_utils2 = { path = "../test-utils" } starbase_sandbox = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/action-graph/src/action_graph.rs b/nextgen/action-graph/src/action_graph.rs index a5502388066..07f7d72b67f 100644 --- a/nextgen/action-graph/src/action_graph.rs +++ b/nextgen/action-graph/src/action_graph.rs @@ -31,6 +31,10 @@ impl ActionGraph { self.get_node_count() == 0 } + pub fn get_nodes(&self) -> Vec<&ActionNode> { + self.graph.node_weights().collect() + } + pub fn get_node_count(&self) -> usize { self.graph.node_count() } diff --git a/nextgen/action/Cargo.toml b/nextgen/action/Cargo.toml index 9c52078c5c7..906e5695fc3 100644 --- a/nextgen/action/Cargo.toml +++ b/nextgen/action/Cargo.toml @@ -12,3 +12,6 @@ moon_time = { path = "../time" } miette = { workspace = true } rustc-hash = { workspace = true } serde = { workspace = true, features = ["rc"] } + +[lints] +workspace = true diff --git a/nextgen/action/src/action.rs b/nextgen/action/src/action.rs index e7aee9edc61..caf5ecc3e47 100644 --- a/nextgen/action/src/action.rs +++ b/nextgen/action/src/action.rs @@ -1,5 +1,5 @@ use crate::action_node::ActionNode; -use crate::attempt::Attempt; +use crate::operation_list::OperationList; use moon_common::color; use moon_time::chrono::NaiveDateTime; use moon_time::now_timestamp; @@ -7,7 +7,7 @@ use serde::{Deserialize, Serialize}; use std::sync::Arc; use std::time::{Duration, Instant}; -#[derive(Copy, Clone, Debug, Default, Deserialize, Serialize)] +#[derive(Copy, Clone, Debug, Default, Deserialize, PartialEq, Serialize)] #[serde(rename_all = "kebab-case")] pub enum ActionStatus { Cached, @@ -26,8 +26,6 @@ pub enum ActionStatus { pub struct Action { pub allow_failure: bool, - pub attempts: Option>, - pub created_at: NaiveDateTime, pub duration: Option, @@ -47,6 +45,8 @@ pub struct Action { pub node_index: usize, + pub operations: OperationList, + pub started_at: 
Option, #[serde(skip)] @@ -59,7 +59,6 @@ impl Action { pub fn new(node: ActionNode) -> Self { Action { allow_failure: false, - attempts: None, created_at: now_timestamp(), duration: None, error: None, @@ -69,6 +68,7 @@ impl Action { label: node.label(), node: Arc::new(node), node_index: 0, + operations: OperationList::default(), started_at: None, start_time: None, status: ActionStatus::Running, @@ -96,7 +96,6 @@ impl Action { pub fn fail(&mut self, error: miette::Report) { self.error = Some(error.to_string()); self.error_report = Some(error); - self.finish(ActionStatus::Failed); } pub fn has_failed(&self) -> bool { @@ -118,31 +117,25 @@ impl Action { miette::miette!("Unknown error!") } - pub fn set_attempts(&mut self, attempts: Vec, command: &str) -> bool { - let some_failed = attempts.iter().any(|attempt| attempt.has_failed()); - let mut passed = false; + pub fn set_operations(&mut self, operations: OperationList, command: &str) { + if let Some(last_attempt) = operations.get_last_process() { + if last_attempt.has_failed() { + if let Some(output) = &last_attempt.output { + let mut message = format!("Failed to run {}", color::shell(command)); - if let Some(last) = attempts.last() { - if last.has_failed() { - let mut message = format!("Failed to run {}", color::shell(command)); + if let Some(code) = output.exit_code { + message += " "; + message += color::muted_light(format!("(exit code {})", code)).as_str(); + } - if let Some(code) = last.exit_code { - message += " "; - message += color::muted_light(format!("(exit code {})", code)).as_str(); + self.error = Some(message); } - - self.error = Some(message); - } else { - passed = true; } - } else { - passed = true; } - self.attempts = Some(attempts); - self.flaky = some_failed && passed; - - passed + self.flaky = operations.is_flaky(); + self.status = operations.get_final_status(); + self.operations = operations; } pub fn should_abort(&self) -> bool { diff --git a/nextgen/action/src/attempt.rs 
b/nextgen/action/src/attempt.rs deleted file mode 100644 index c767cb7dbdf..00000000000 --- a/nextgen/action/src/attempt.rs +++ /dev/null @@ -1,60 +0,0 @@ -use crate::action::ActionStatus; -use moon_time::chrono::NaiveDateTime; -use moon_time::now_timestamp; -use serde::{Deserialize, Serialize}; -use std::time::{Duration, Instant}; - -#[derive(Debug, Deserialize, Serialize)] -#[serde(rename_all = "camelCase")] -pub struct Attempt { - pub duration: Option, - - pub exit_code: Option, - - pub finished_at: Option, - - pub index: u8, - - pub started_at: NaiveDateTime, - - #[serde(skip)] - pub start_time: Option, - - pub status: ActionStatus, - - pub stderr: Option, - - pub stdout: Option, -} - -impl Attempt { - pub fn new(index: u8) -> Self { - Attempt { - duration: None, - exit_code: None, - finished_at: None, - index, - started_at: now_timestamp(), - start_time: Some(Instant::now()), - status: ActionStatus::Running, - stderr: None, - stdout: None, - } - } - - pub fn finish(&mut self, status: ActionStatus) { - self.finished_at = Some(now_timestamp()); - self.status = status; - - if let Some(start) = &self.start_time { - self.duration = Some(start.elapsed()); - } - } - - pub fn has_failed(&self) -> bool { - matches!( - &self.status, - ActionStatus::Failed | ActionStatus::FailedAndAbort - ) - } -} diff --git a/nextgen/action/src/lib.rs b/nextgen/action/src/lib.rs index 97eaa02beea..dad9fced748 100644 --- a/nextgen/action/src/lib.rs +++ b/nextgen/action/src/lib.rs @@ -1,7 +1,9 @@ mod action; mod action_node; -mod attempt; +mod operation; +mod operation_list; pub use action::*; pub use action_node::*; -pub use attempt::*; +pub use operation::*; +pub use operation_list::*; diff --git a/nextgen/action/src/operation.rs b/nextgen/action/src/operation.rs new file mode 100644 index 00000000000..b8f6c1c34c5 --- /dev/null +++ b/nextgen/action/src/operation.rs @@ -0,0 +1,165 @@ +use crate::action::ActionStatus; +use moon_time::chrono::NaiveDateTime; +use moon_time::now_timestamp; 
+use serde::{Deserialize, Serialize}; +use std::mem; +use std::process::Output; +use std::sync::Arc; +use std::time::{Duration, Instant}; + +#[derive(Debug, Default, Deserialize, PartialEq, Serialize)] +#[serde(rename_all = "kebab-case")] +pub enum OperationType { + // Processes + #[default] + NoOperation, + OutputHydration, + TaskExecution, + // Metrics + ArchiveCreation, + HashGeneration, + MutexAcquisition, +} + +#[derive(Debug, Default, Deserialize, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct OperationOutput { + pub exit_code: Option, + + pub stderr: Option>, + + pub stdout: Option>, +} + +impl OperationOutput { + pub fn set_stderr(&mut self, output: String) { + if !output.is_empty() { + self.stderr = Some(Arc::new(output)); + } + } + + pub fn set_stdout(&mut self, output: String) { + if !output.is_empty() { + self.stdout = Some(Arc::new(output)); + } + } +} + +#[derive(Debug, Default, Deserialize, Serialize)] +#[serde(rename_all = "camelCase")] +pub struct Operation { + pub duration: Option, + + pub finished_at: Option, + + pub hash: Option, + + pub output: Option, + + pub started_at: NaiveDateTime, + + #[serde(skip)] + pub start_time: Option, + + pub status: ActionStatus, + + #[serde(rename = "type")] + pub type_of: OperationType, +} + +impl Operation { + pub fn new(type_of: OperationType) -> Self { + Operation { + duration: None, + finished_at: None, + hash: None, + output: None, + started_at: now_timestamp(), + start_time: Some(Instant::now()), + status: ActionStatus::Running, + type_of, + } + } + + pub fn new_finished(type_of: OperationType, status: ActionStatus) -> Self { + let time = now_timestamp(); + + Operation { + duration: None, + output: None, + finished_at: Some(time), + hash: None, + started_at: time, + start_time: None, + status, + type_of, + } + } + + pub fn get_exit_code(&self) -> i32 { + self.output + .as_ref() + .and_then(|exec| exec.exit_code) + .unwrap_or(-1) + } + + pub fn finish(&mut self, status: ActionStatus) { + 
self.finished_at = Some(now_timestamp()); + self.status = status; + + if let Some(start) = &self.start_time { + self.duration = Some(start.elapsed()); + } + } + + pub fn finish_from_output(&mut self, process_output: &mut Output) { + let mut output = OperationOutput { + exit_code: process_output.status.code(), + ..Default::default() + }; + + output.set_stderr( + String::from_utf8(mem::take(&mut process_output.stderr)).unwrap_or_default(), + ); + + output.set_stdout( + String::from_utf8(mem::take(&mut process_output.stdout)).unwrap_or_default(), + ); + + self.output = Some(output); + + self.finish(if process_output.status.success() { + ActionStatus::Passed + } else { + ActionStatus::Failed + }); + } + + pub fn has_failed(&self) -> bool { + matches!( + &self.status, + ActionStatus::Failed | ActionStatus::FailedAndAbort + ) + } + + pub fn has_passed(&self) -> bool { + matches!( + &self.status, + ActionStatus::Cached | ActionStatus::CachedFromRemote | ActionStatus::Passed + ) + } + + pub fn has_output(&self) -> bool { + self.output.as_ref().is_some_and(|exec| { + exec.stderr.as_ref().is_some_and(|err| !err.is_empty()) + || exec.stdout.as_ref().is_some_and(|out| !out.is_empty()) + }) + } + + pub fn is_cached(&self) -> bool { + matches!( + &self.status, + ActionStatus::Cached | ActionStatus::CachedFromRemote + ) + } +} diff --git a/nextgen/action/src/operation_list.rs b/nextgen/action/src/operation_list.rs new file mode 100644 index 00000000000..8fa5f472ca4 --- /dev/null +++ b/nextgen/action/src/operation_list.rs @@ -0,0 +1,94 @@ +use crate::{action::ActionStatus, operation::*}; +use serde::{Deserialize, Serialize}; +use std::mem; +use std::ops::{Deref, DerefMut}; + +#[derive(Debug, Default, Deserialize, Serialize)] +pub struct OperationList(Vec); + +impl OperationList { + pub fn get_final_status(&self) -> ActionStatus { + self.get_last_process() + .map(|op| op.status) + .unwrap_or(ActionStatus::Invalid) + } + + pub fn get_hash(&self) -> Option<&str> { + self.0 + .iter() 
+ .find(|op| op.hash.is_some()) + .and_then(|op| op.hash.as_deref()) + } + + /// Returns the last "metric based" operation. + pub fn get_last_metric(&self) -> Option<&Operation> { + self.0.iter().rfind(|op| { + matches!( + op.type_of, + OperationType::ArchiveCreation + | OperationType::MutexAcquisition + | OperationType::HashGeneration + ) + }) + } + + /// Returns the last "process based" operation. + pub fn get_last_process(&self) -> Option<&Operation> { + self.0.iter().rfind(|op| { + matches!( + op.type_of, + OperationType::NoOperation + | OperationType::TaskExecution + | OperationType::OutputHydration + ) + }) + } + + /// Returns the last task execution operation. + pub fn get_last_execution(&self) -> Option<&Operation> { + self.0 + .iter() + .rfind(|op| matches!(op.type_of, OperationType::TaskExecution)) + } + + pub fn is_flaky(&self) -> bool { + let mut attempt_count = 0; + let mut any_failed = false; + let mut last_passed = false; + + for operation in &self.0 { + if matches!(operation.type_of, OperationType::TaskExecution) { + attempt_count += 1; + last_passed = operation.has_passed(); + + if operation.has_failed() { + any_failed = true; + } + } + } + + attempt_count > 0 && any_failed && last_passed + } + + pub fn merge(&mut self, other: OperationList) { + self.0.extend(other.0); + } + + pub fn take(&mut self) -> Self { + Self(mem::take(&mut self.0)) + } +} + +impl Deref for OperationList { + type Target = Vec; + + fn deref(&self) -> &Self::Target { + &self.0 + } +} + +impl DerefMut for OperationList { + fn deref_mut(&mut self) -> &mut Self::Target { + &mut self.0 + } +} diff --git a/nextgen/api/Cargo.toml b/nextgen/api/Cargo.toml index df45bde9d91..4655fbf7206 100644 --- a/nextgen/api/Cargo.toml +++ b/nextgen/api/Cargo.toml @@ -16,6 +16,7 @@ graphql_client = { version = "0.13.0", features = ["reqwest-rustls"] } miette = { workspace = true } proto_core = { workspace = true } reqwest = { workspace = true, features = ["json", "multipart", "stream"] } 
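The flakiness rule in `OperationList::is_flaky` above — at least one task execution, any execution failed, and the last one passed — can be checked against a simplified model (`OpKind`/`Op` are illustrative stand-ins for `OperationType`/`Operation`, collapsing the status enum to a `passed` flag):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum OpKind {
    TaskExecution,
    OutputHydration,
    HashGeneration,
}

pub struct Op {
    pub kind: OpKind,
    pub passed: bool,
}

// A task is flaky when some execution attempt failed but the final
// attempt passed; non-execution operations are ignored.
pub fn is_flaky(ops: &[Op]) -> bool {
    let mut attempt_count = 0;
    let mut any_failed = false;
    let mut last_passed = false;

    for op in ops {
        if op.kind == OpKind::TaskExecution {
            attempt_count += 1;
            last_passed = op.passed;

            if !op.passed {
                any_failed = true;
            }
        }
    }

    attempt_count > 0 && any_failed && last_passed
}
```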
+rustc-hash = { workspace = true } semver = { workspace = true } serde = { workspace = true } starbase_utils = { workspace = true } @@ -26,3 +27,6 @@ tracing = { workspace = true } uuid = { workspace = true } # Rebuild schema: graphql-client introspect-schema http://localhost:8080/graphql --output ./nextgen/api/schema.json + +[lints] +workspace = true diff --git a/nextgen/api/src/moonbase/mod.rs b/nextgen/api/src/moonbase/mod.rs index 1217e701c18..43f461d5673 100644 --- a/nextgen/api/src/moonbase/mod.rs +++ b/nextgen/api/src/moonbase/mod.rs @@ -7,10 +7,13 @@ use crate::moonbase::endpoints::*; use crate::moonbase_error::MoonbaseError; use miette::IntoDiagnostic; use moon_common::color; +use rustc_hash::FxHashMap; use starbase_utils::fs; use std::io; use std::path::Path; -use std::path::PathBuf; +use std::sync::Arc; +use tokio::sync::RwLock; +use tokio::task::JoinHandle; use tokio_util::codec::{BytesCodec, FramedRead}; use tracing::{debug, info, warn}; @@ -20,12 +23,21 @@ pub struct Moonbase { pub ci_insights_enabled: bool, + pub job_id: Option, + #[allow(dead_code)] pub organization_id: i32, pub remote_caching_enabled: bool, pub repository_id: i32, + + download_urls: Arc>>>, + + upload_requests: Arc>>>, + + // Temporary (target -> id) + pub job_ids: Arc>>, } impl Moonbase { @@ -64,9 +76,13 @@ impl Moonbase { Some(Moonbase { auth_token: token, ci_insights_enabled: ci_insights, + job_id: None, organization_id, remote_caching_enabled: remote_caching, repository_id, + download_urls: Arc::new(RwLock::new(FxHashMap::default())), + upload_requests: Arc::new(RwLock::new(vec![])), + job_ids: Arc::new(RwLock::new(FxHashMap::default())), }) } Ok(Response::Failure { message, status }) => { @@ -99,7 +115,14 @@ impl Moonbase { Response::Success(ArtifactResponse { artifact, presigned_url, - }) => Ok(Some((artifact, presigned_url))), + }) => { + self.download_urls + .write() + .await + .insert(artifact.hash.clone(), presigned_url.to_owned()); + + Ok(Some((artifact, presigned_url))) 
+ } Response::Failure { message, status } => { if status == 404 { Ok(None) @@ -135,6 +158,35 @@ impl Moonbase { } } + pub async fn download_artifact_from_remote_storage( + &self, + hash: &str, + dest_path: &Path, + ) -> miette::Result<()> { + if !self.remote_caching_enabled { + return Ok(()); + } + + if let Some(download_url) = self.download_urls.read().await.get(hash) { + debug!( + hash, + archive_file = ?dest_path, + "Downloading archive (artifact) from remote storage", + ); + + if let Err(error) = self.download_artifact(hash, dest_path, download_url).await { + warn!( + hash, + archive_file = ?dest_path, + "Failed to download archive from remote storage: {}", + color::muted_light(error.to_string()), + ); + } + } + + Ok(()) + } + pub async fn download_artifact( &self, hash: &str, @@ -172,14 +224,77 @@ impl Moonbase { .into()) } + pub async fn upload_artifact_to_remote_storage( + &self, + hash: &str, + src_path: &Path, + target_id: &str, + ) -> miette::Result<()> { + if !self.remote_caching_enabled { + return Ok(()); + } + + let size = match fs::metadata(src_path) { + Ok(meta) => meta.len(), + Err(_) => 0, + }; + + debug!( + hash, + archive_file = ?src_path, + size, + "Uploading archive (artifact) to remote storage", + ); + + // Create the database record then upload to cloud storage + let Ok((_, presigned_url)) = self + .write_artifact( + hash, + ArtifactWriteInput { + target: target_id.to_owned(), + size: size as usize, + }, + ) + .await + else { + return Ok(()); + }; + + // Run this in the background so we don't slow down the pipeline + // while waiting for very large archives to upload + let moonbase = self.clone(); + let hash = hash.to_owned(); + let src_path = src_path.to_owned(); + let job_id = self.job_ids.read().await.get(target_id).cloned(); + + self.upload_requests + .write() + .await + .push(tokio::spawn(async move { + if let Err(error) = moonbase + .upload_artifact(&hash, &src_path, presigned_url, job_id) + .await + { + warn!( + hash, + archive_file = 
?src_path, + "Failed to upload archive to remote storage: {}", + color::muted_light(error.to_string()), + ); + } + })); + + Ok(()) + } + pub async fn upload_artifact( &self, - hash: String, - path: PathBuf, + hash: &str, + src_path: &Path, upload_url: Option, job_id: Option, ) -> miette::Result<()> { - let file = tokio::fs::File::open(&path).await.into_diagnostic()?; + let file = tokio::fs::File::open(src_path).await.into_diagnostic()?; let file_length = file .metadata() .await @@ -205,11 +320,11 @@ impl Moonbase { let status = response.status(); if status.is_success() { - self.mark_upload_complete(&hash, true, job_id).await?; + self.mark_upload_complete(hash, true, job_id).await?; Ok(()) } else { - self.mark_upload_complete(&hash, false, job_id).await?; + self.mark_upload_complete(hash, false, job_id).await?; Err(MoonbaseError::ArtifactUploadFailure { hash: hash.to_string(), @@ -222,7 +337,7 @@ impl Moonbase { } } Err(error) => { - self.mark_upload_complete(&hash, false, job_id).await?; + self.mark_upload_complete(hash, false, job_id).await?; Err(MoonbaseError::ArtifactUploadFailure { hash: hash.to_string(), @@ -233,6 +348,16 @@ impl Moonbase { } } + pub async fn wait_for_requests(&self) { + let mut requests = self.upload_requests.write().await; + + for future in requests.drain(0..) { + // We can ignore the errors because we handle them in + // the tasks above by logging to the console + let _ = future.await; + } + } + // Once the upload to cloud storage is complete, we need to mark the upload // as completed on our end, whether a success or failure! 
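The upload path above spawns each artifact upload as a background task so large archives don't stall the pipeline, stores the join handle in `upload_requests`, and drains them all in `wait_for_requests` before shutdown. A minimal sketch of that fire-and-forget-then-drain pattern, using `std::thread` instead of `tokio::spawn` (names here are illustrative, not the real moonbase API):

```rust
use std::sync::{Arc, Mutex};
use std::thread::{self, JoinHandle};

// Queue of in-flight background jobs, drained before the pipeline exits.
struct Uploader {
    requests: Arc<Mutex<Vec<JoinHandle<()>>>>,
}

impl Uploader {
    // Kick off the upload without blocking the caller; errors would be
    // logged inside the task rather than propagated.
    fn upload_in_background(&self, hash: String) {
        let handle = thread::spawn(move || {
            println!("uploaded {hash}");
        });

        self.requests.lock().unwrap().push(handle);
    }

    // Equivalent of `wait_for_requests`: join everything, ignore errors,
    // since each task already reported its own failure.
    fn wait_for_requests(&self) {
        for handle in self.requests.lock().unwrap().drain(..) {
            let _ = handle.join();
        }
    }
}

fn main() {
    let uploader = Uploader {
        requests: Arc::new(Mutex::new(vec![])),
    };

    uploader.upload_in_background("deadbeef".into());
    uploader.wait_for_requests();

    assert!(uploader.requests.lock().unwrap().is_empty());
}
```

The design trade-off is the same as in the diff: uploads never block task execution, at the cost of needing an explicit drain point before the process exits so no upload is abandoned mid-flight.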
async fn mark_upload_complete( diff --git a/nextgen/app-components/Cargo.toml b/nextgen/app-components/Cargo.toml index d4d40276dfa..a7b1ebda839 100644 --- a/nextgen/app-components/Cargo.toml +++ b/nextgen/app-components/Cargo.toml @@ -14,3 +14,6 @@ moon_plugin = { path = "../plugin" } proto_core = { workspace = true } semver = { workspace = true } starbase = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/app/Cargo.toml b/nextgen/app/Cargo.toml index 17ebc7a90f1..b223f8986f9 100644 --- a/nextgen/app/Cargo.toml +++ b/nextgen/app/Cargo.toml @@ -27,3 +27,6 @@ starbase_utils = { workspace = true } thiserror = { workspace = true } tokio = { workspace = true } tracing = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/args/Cargo.toml b/nextgen/args/Cargo.toml index 0382eda5cfb..d89ade81dec 100644 --- a/nextgen/args/Cargo.toml +++ b/nextgen/args/Cargo.toml @@ -13,3 +13,6 @@ miette = { workspace = true } shell-words = "1.1.0" thiserror = { workspace = true } winsplit = "0.1.0" + +[lints] +workspace = true diff --git a/nextgen/cache-item/Cargo.toml b/nextgen/cache-item/Cargo.toml index 165fe300ac3..b4e718636d0 100644 --- a/nextgen/cache-item/Cargo.toml +++ b/nextgen/cache-item/Cargo.toml @@ -17,3 +17,6 @@ tracing = { workspace = true } [dev-dependencies] serial_test = "3.0.0" starbase_sandbox = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/cache/Cargo.toml b/nextgen/cache/Cargo.toml index 5dbc8a14e7c..5ef5f5d41a0 100644 --- a/nextgen/cache/Cargo.toml +++ b/nextgen/cache/Cargo.toml @@ -21,3 +21,6 @@ tracing = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/codegen/Cargo.toml b/nextgen/codegen/Cargo.toml index 3e8e81e0d9e..e7110522ef9 100644 --- a/nextgen/codegen/Cargo.toml +++ b/nextgen/codegen/Cargo.toml @@ -33,3 +33,6 @@ clap = { workspace = true, features = ["string"] } [dev-dependencies] starbase_sandbox = { workspace = 
true } + +[lints] +workspace = true diff --git a/nextgen/codeowners/Cargo.toml b/nextgen/codeowners/Cargo.toml index a936a7480ea..b31bc6c4e8a 100644 --- a/nextgen/codeowners/Cargo.toml +++ b/nextgen/codeowners/Cargo.toml @@ -18,3 +18,6 @@ tracing = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/common/Cargo.toml b/nextgen/common/Cargo.toml index 4f62f435efb..234bf0bfceb 100644 --- a/nextgen/common/Cargo.toml +++ b/nextgen/common/Cargo.toml @@ -16,3 +16,6 @@ schematic = { workspace = true } serde = { workspace = true } starbase_styles = { workspace = true } thiserror = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/common/src/env.rs b/nextgen/common/src/env.rs index 41c8b5ae962..0472a8896ff 100644 --- a/nextgen/common/src/env.rs +++ b/nextgen/common/src/env.rs @@ -16,7 +16,9 @@ pub fn is_docker_container() -> bool { #[inline] pub fn is_test_env() -> bool { - env::var("MOON_TEST").is_ok() || env::var("STARBASE_TEST").is_ok() + env::var("MOON_TEST").is_ok() + || env::var("STARBASE_TEST").is_ok() + || env::var("NEXTEST").is_ok() } #[inline] diff --git a/nextgen/config/Cargo.toml b/nextgen/config/Cargo.toml index 9cbf1c4571a..d5e56d32cea 100644 --- a/nextgen/config/Cargo.toml +++ b/nextgen/config/Cargo.toml @@ -56,3 +56,6 @@ loader = ["schematic/url"] proto = ["loader", "dep:proto_core"] template = [] tracing = ["dep:tracing"] + +[lints] +workspace = true diff --git a/nextgen/config/src/toolchain_config.rs b/nextgen/config/src/toolchain_config.rs index 6c8b589b7cf..ea3d1ba8a66 100644 --- a/nextgen/config/src/toolchain_config.rs +++ b/nextgen/config/src/toolchain_config.rs @@ -90,6 +90,59 @@ impl ToolchainConfig { tools } + + pub fn get_version_env_vars(&self) -> FxHashMap { + let mut env = FxHashMap::default(); + + let mut inject = |key: &str, version: &UnresolvedVersionSpec| { + env.entry(key.to_owned()) + .or_insert_with(|| version.to_string()); + }; + + if let 
Some(bun_config) = &self.bun { + if let Some(version) = &bun_config.version { + inject("PROTO_BUN_VERSION", version); + } + } + + if let Some(deno_config) = &self.deno { + if let Some(version) = &deno_config.version { + inject("PROTO_DENO_VERSION", version); + } + } + + if let Some(node_config) = &self.node { + if let Some(version) = &node_config.version { + inject("PROTO_NODE_VERSION", version); + } + + if let Some(version) = &node_config.npm.version { + inject("PROTO_NPM_VERSION", version); + } + + if let Some(pnpm_config) = &node_config.pnpm { + if let Some(version) = &pnpm_config.version { + inject("PROTO_PNPM_VERSION", version); + } + } + + if let Some(yarn_config) = &node_config.yarn { + if let Some(version) = &yarn_config.version { + inject("PROTO_YARN_VERSION", version); + } + } + + if let Some(bunpm_config) = &node_config.bun { + if let Some(version) = &bunpm_config.version { + inject("PROTO_BUN_VERSION", version); + } + } + } + + // We don't include Rust since it's a special case! + + env + } } #[cfg(feature = "proto")] diff --git a/nextgen/console-reporter/Cargo.toml b/nextgen/console-reporter/Cargo.toml new file mode 100644 index 00000000000..383a5714a1f --- /dev/null +++ b/nextgen/console-reporter/Cargo.toml @@ -0,0 +1,21 @@ +[package] +name = "moon_console_reporter" +version = "0.0.1" +edition = "2021" +license = "MIT" +description = "Customizable reporters for console output." 
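The `get_version_env_vars` hunk above uses `entry().or_insert_with` so that injection is first-write-wins: because the Bun toolchain is visited before the Bun package manager nested under Node, `PROTO_BUN_VERSION` from `toolchain.bun` is not clobbered by `node.bun`. A small sketch of that pattern with `std::collections::HashMap` (the real code uses `FxHashMap` and `UnresolvedVersionSpec`):

```rust
use std::collections::HashMap;

// First-write-wins: only inserts if the key is not already present,
// mirroring the `inject` closure in `get_version_env_vars`.
fn inject(env: &mut HashMap<String, String>, key: &str, version: &str) {
    env.entry(key.to_owned())
        .or_insert_with(|| version.to_owned());
}

fn main() {
    let mut env = HashMap::new();

    inject(&mut env, "PROTO_BUN_VERSION", "1.1.0"); // from toolchain.bun
    inject(&mut env, "PROTO_BUN_VERSION", "1.0.0"); // from node.bun, ignored

    assert_eq!(env["PROTO_BUN_VERSION"], "1.1.0");
}
```

The version strings here are made up for illustration; the precedence behavior is what the `entry` API guarantees.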
+homepage = "https://moonrepo.dev/moon" +repository = "https://github.com/moonrepo/moon" +publish = false + +[dependencies] +moon_action = { path = "../action" } +moon_common = { path = "../common" } +moon_config = { path = "../config" } +moon_console = { path = "../console" } +moon_target = { path = "../target" } +moon_time = { path = "../time" } +miette = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/console-reporter/src/default_reporter.rs b/nextgen/console-reporter/src/default_reporter.rs new file mode 100644 index 00000000000..3a70dc93bba --- /dev/null +++ b/nextgen/console-reporter/src/default_reporter.rs @@ -0,0 +1,448 @@ +use moon_action::{Action, ActionNode, ActionStatus, Operation, OperationList, OperationType}; +use moon_common::color::paint; +use moon_common::{color, is_test_env}; +use moon_config::TaskOutputStyle; +use moon_console::*; +use moon_target::Target; +use moon_time as time; +use std::sync::Arc; + +pub struct DefaultReporter { + err: Arc, + out: Arc, +} + +impl Default for DefaultReporter { + fn default() -> Self { + Self { + err: Arc::new(ConsoleBuffer::empty(ConsoleStream::Stderr)), + out: Arc::new(ConsoleBuffer::empty(ConsoleStream::Stdout)), + } + } +} + +impl DefaultReporter { + fn get_status_meta_comment( + &self, + status: ActionStatus, + fallback: impl Fn() -> Option, + ) -> Option { + match status { + ActionStatus::Cached => Some("cached".into()), + ActionStatus::CachedFromRemote => Some("cached from remote".into()), + ActionStatus::Skipped => Some("skipped".into()), + _ => fallback(), + } + } + + fn get_short_hash(&self, hash: &str) -> String { + hash[0..8].to_owned() + } + + fn print_task_checkpoint( + &self, + target: &Target, + operation: &Operation, + item: &TaskReportItem, + ) -> miette::Result<()> { + let mut comments = vec![]; + + match operation.type_of { + OperationType::NoOperation => { + comments.push("no op".into()); + } + _ => { + let status_comment = 
self.get_status_meta_comment(operation.status, || { + if item.attempt_current > 1 { + Some(format!( + "attempt {}/{}", + item.attempt_current, item.attempt_total + )) + } else { + None + } + }); + + if let Some(comment) = status_comment { + comments.push(comment); + } + + if let Some(duration) = operation.duration { + if let Some(elapsed) = time::elapsed_opt(duration) { + comments.push(elapsed); + } + } + + // Do not include the hash while testing, as the hash + // constantly changes and breaks our local snapshots + if !is_test_env() { + if let Some(hash) = &item.hash { + comments.push(self.get_short_hash(hash)); + } + } + } + }; + + self.out.print_checkpoint_with_comments( + if operation.has_failed() { + Checkpoint::RunFailed + } else if operation.has_passed() { + Checkpoint::RunPassed + } else { + Checkpoint::RunStarted + }, + target, + comments, + )?; + + Ok(()) + } + + pub fn print_operation_output( + &self, + operation: &Operation, + item: &TaskReportItem, + ) -> miette::Result<()> { + let print_stdout = || -> miette::Result<()> { + if let Some(output) = &operation.output { + if let Some(out) = &output.stdout { + self.out.write_line(out.trim())?; + } + } + + Ok(()) + }; + + let print_stderr = || -> miette::Result<()> { + if let Some(output) = &operation.output { + if let Some(out) = &output.stderr { + self.err.write_line(out.trim())?; + } + } + + Ok(()) + }; + + match item.output_style { + // Only show output on failure + Some(TaskOutputStyle::BufferOnlyFailure) => { + if operation.has_failed() { + print_stdout()?; + print_stderr()?; + } + } + // Only show the hash + Some(TaskOutputStyle::Hash) => { + if let Some(hash) = &item.hash { + // Print to stderr so it can be captured + self.err.write_line(hash)?; + } + } + // Show nothing + Some(TaskOutputStyle::None) => {} + // Show output on both success and failure + _ => { + print_stdout()?; + print_stderr()?; + } + }; + + Ok(()) + } + + fn print_pipeline_failures(&self, actions: &[Action]) -> miette::Result<()> { 
+ for action in actions { + if !action.has_failed() { + continue; + } + + if let Some(attempt) = action.operations.get_last_execution() { + if attempt.has_failed() { + let mut has_stdout = false; + + if let Some(output) = &attempt.output { + if let Some(stdout) = &output.stdout { + if !stdout.is_empty() { + has_stdout = true; + self.out.write_line(stdout.trim())?; + } + } + + if let Some(stderr) = &output.stderr { + if has_stdout { + self.out.write_newline()?; + } + + if !stderr.is_empty() { + self.out.write_line(stderr.trim())?; + } + } + } + } + } + + self.out.print_checkpoint( + Checkpoint::RunFailed, + match &*action.node { + ActionNode::RunTask(inner) => inner.target.as_str(), + _ => &action.label, + }, + )?; + + self.out.write_newline()?; + } + + Ok(()) + } + + fn print_pipeline_stats( + &self, + actions: &[Action], + item: &PipelineReportItem, + ) -> miette::Result<()> { + let mut passed_count = 0; + let mut cached_count = 0; + let mut failed_count = 0; + let mut invalid_count = 0; + let mut skipped_count = 0; + let mut noop_count = 0; + + for action in actions { + if !item.summarize && !matches!(*action.node, ActionNode::RunTask { .. 
}) { + continue; + } + + match action.status { + ActionStatus::Cached | ActionStatus::CachedFromRemote => { + cached_count += 1; + passed_count += 1; + } + ActionStatus::Passed => { + passed_count += 1; + } + ActionStatus::Failed | ActionStatus::FailedAndAbort => { + failed_count += 1; + } + ActionStatus::Invalid => { + invalid_count += 1; + } + ActionStatus::Skipped => { + skipped_count += 1; + } + ActionStatus::Running => {} + }; + + if let Some(last_op) = action.operations.get_last_process() { + if matches!(last_op.type_of, OperationType::NoOperation) { + noop_count += 1; + } + } + } + + let mut counts_message = vec![]; + + if passed_count > 0 { + if cached_count > 0 { + counts_message.push(format!( + "{} {}", + color::success(format!("{passed_count} completed")), + color::label(format!("({cached_count} cached)")) + )); + } else { + counts_message.push(color::success(format!("{passed_count} completed"))); + } + } + + if failed_count > 0 { + counts_message.push(color::failure(format!("{failed_count} failed"))); + } + + if invalid_count > 0 { + counts_message.push(color::invalid(format!("{invalid_count} invalid"))); + } + + if skipped_count > 0 { + counts_message.push(color::muted_light(format!("{skipped_count} skipped"))); + } + + let counts_message = counts_message.join(&color::muted(", ")); + let mut elapsed_time = time::elapsed(item.duration.unwrap_or_default()); + + if (passed_count - noop_count) == cached_count && failed_count == 0 { + elapsed_time = format!("{} {}", elapsed_time, label_to_the_moon()); + } + + if item.summarize { + self.out.print_entry("Actions", &counts_message)?; + self.out.print_entry(" Time", &elapsed_time)?; + } else { + self.out.print_entry("Tasks", &counts_message)?; + self.out.print_entry(" Time", &elapsed_time)?; + } + + Ok(()) + } + + fn print_pipeline_summary(&self, actions: &[Action]) -> miette::Result<()> { + for action in actions { + let status = match action.status { + ActionStatus::Passed => color::success("pass"), + 
ActionStatus::Cached | ActionStatus::CachedFromRemote => color::label("pass"), + ActionStatus::Failed | ActionStatus::FailedAndAbort => color::failure("fail"), + ActionStatus::Invalid => color::invalid("warn"), + ActionStatus::Skipped => color::muted_light("skip"), + ActionStatus::Running => color::muted_light("oops"), + }; + + let mut comments: Vec = vec![]; + + if let Some(status_comment) = self.get_status_meta_comment(action.status, || None) { + comments.push(status_comment); + } + + if let Some(duration) = action.duration { + if let Some(elapsed) = time::elapsed_opt(duration) { + comments.push(elapsed); + } + } + + if let Some(hash) = action.operations.get_hash() { + comments.push(self.get_short_hash(hash)); + } + + self.out.write_line(format!( + "{} {} {}", + status, + action.label, + self.out.format_comments(comments), + ))?; + } + + Ok(()) + } +} + +impl Reporter for DefaultReporter { + fn inherit_streams(&mut self, err: Arc, out: Arc) { + self.err = err; + self.out = out; + } + + fn on_pipeline_completed( + &self, + actions: &[Action], + item: &PipelineReportItem, + _error: Option<&miette::Report>, + ) -> miette::Result<()> { + if actions.is_empty() || self.out.is_quiet() { + return Ok(()); + } + + // If no summary, only show stats. This is typically for local! + if !item.summarize { + self.out.write_newline()?; + self.print_pipeline_stats(actions, item)?; + self.out.write_newline()?; + + return Ok(()); + } + + // Otherwise, show all the information we can. 
+ if actions.iter().any(|action| action.has_failed()) { + self.out.print_header("Review")?; + self.print_pipeline_failures(actions)?; + } + + self.out.print_header("Summary")?; + self.print_pipeline_summary(actions)?; + + self.out.print_header("Stats")?; + self.print_pipeline_stats(actions, item)?; + + self.out.write_newline()?; + + Ok(()) + } + + // Print a checkpoint when a task execution starts, for each attempt + fn on_task_started( + &self, + target: &Target, + attempt: &Operation, + item: &TaskReportItem, + ) -> miette::Result<()> { + self.print_task_checkpoint(target, attempt, item)?; + + Ok(()) + } + + // If the task has been running for a long time, print a checkpoint + fn on_task_running(&self, target: &Target, secs: u32) -> miette::Result<()> { + self.out.print_checkpoint_with_comments( + Checkpoint::RunStarted, + target, + [format!("running for {}s", secs)], + )?; + + Ok(()) + } + + // When an attempt has finished, print the output if captured + fn on_task_finished( + &self, + _target: &Target, + attempt: &Operation, + item: &TaskReportItem, + _error: Option<&miette::Report>, + ) -> miette::Result<()> { + // Task output was captured, so there was no output + // sent to the console, so manually print the logs we have! + if !item.output_streamed && attempt.has_output() { + self.print_operation_output(attempt, item)?; + } + + Ok(()) + } + + // When all attempts have completed, print the final checkpoint + fn on_task_completed( + &self, + target: &Target, + operations: &OperationList, + item: &TaskReportItem, + _error: Option<&miette::Report>, + ) -> miette::Result<()> { + if let Some(operation) = operations.get_last_process() { + // If cached, the finished event above is not fired, + // so handle printing the captured logs here! + if operation.is_cached() && operation.has_output() { + self.out.write_newline()?; + self.print_operation_output(operation, item)?; + } + + // Then print the success checkpoint. 
The success + // checkpoint should always appear after the output, + // and "contain" it within the start checkpoint! + self.print_task_checkpoint(target, operation, item)?; + } else if let Some(operation) = operations.last() { + self.print_task_checkpoint(target, operation, item)?; + } + + Ok(()) + } +} + +fn label_to_the_moon() -> String { + [ + paint(55, "❯"), + paint(56, "❯❯"), + paint(57, "❯ t"), + paint(63, "o t"), + paint(69, "he "), + paint(75, "mo"), + paint(81, "on"), + ] + .into_iter() + .collect::>() + .join("") +} diff --git a/nextgen/console-reporter/src/lib.rs b/nextgen/console-reporter/src/lib.rs new file mode 100644 index 00000000000..7d65c792758 --- /dev/null +++ b/nextgen/console-reporter/src/lib.rs @@ -0,0 +1,3 @@ +mod default_reporter; + +pub use default_reporter::*; diff --git a/nextgen/console/Cargo.toml b/nextgen/console/Cargo.toml index 5f65c24f80d..13add905334 100644 --- a/nextgen/console/Cargo.toml +++ b/nextgen/console/Cargo.toml @@ -9,9 +9,15 @@ repository = "https://github.com/moonrepo/moon" publish = false [dependencies] +moon_action = { path = "../action" } moon_common = { path = "../common" } +moon_config = { path = "../config" } +moon_target = { path = "../target" } inquire = "0.7.4" miette = { workspace = true } parking_lot = "0.12.1" starbase = { workspace = true } starbase_styles = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/console/src/buffer.rs b/nextgen/console/src/buffer.rs new file mode 100644 index 00000000000..3a0a3da8080 --- /dev/null +++ b/nextgen/console/src/buffer.rs @@ -0,0 +1,200 @@ +use miette::IntoDiagnostic; +use parking_lot::Mutex; +use std::io::{self, IsTerminal, Write}; +use std::mem; +use std::sync::atomic::{AtomicBool, Ordering}; +use std::sync::mpsc::{self, Receiver, Sender, TryRecvError}; +use std::sync::Arc; +use std::thread::{sleep, spawn, JoinHandle}; +use std::time::Duration; + +#[derive(Clone, Copy)] +pub enum ConsoleStream { + Stderr, + Stdout, +} + +pub struct 
ConsoleBuffer { + buffer: Arc>>, + channel: Option>, + stream: ConsoleStream, + + pub(crate) handle: Option>, + pub(crate) quiet: Option>, + pub(crate) test_mode: bool, +} + +impl ConsoleBuffer { + fn internal_new(stream: ConsoleStream, with_handle: bool) -> Self { + let buffer = Arc::new(Mutex::new(Vec::new())); + let buffer_clone = Arc::clone(&buffer); + let (tx, rx) = mpsc::channel(); + + // Every 100ms, flush the buffer + let handle = if with_handle { + Some(spawn(move || flush_on_loop(buffer_clone, stream, rx))) + } else { + None + }; + + Self { + buffer, + channel: Some(tx), + handle, + stream, + quiet: None, + test_mode: false, + } + } + + pub fn new(stream: ConsoleStream) -> Self { + Self::internal_new(stream, true) + } + + pub fn new_testing(stream: ConsoleStream) -> Self { + let mut console = Self::internal_new(stream, false); + console.test_mode = true; + console + } + + pub fn empty(stream: ConsoleStream) -> Self { + Self { + buffer: Arc::new(Mutex::new(Vec::new())), + channel: None, + stream, + handle: None, + quiet: None, + test_mode: false, + } + } + + pub fn is_terminal(&self) -> bool { + match self.stream { + ConsoleStream::Stderr => io::stderr().is_terminal(), + ConsoleStream::Stdout => io::stdout().is_terminal(), + } + } + + pub fn is_quiet(&self) -> bool { + self.quiet + .as_ref() + .is_some_and(|quiet| quiet.load(Ordering::Relaxed)) + } + + pub fn close(&self) -> miette::Result<()> { + self.flush()?; + + // Send the closed message + if let Some(channel) = &self.channel { + let _ = channel.send(true); + } + + Ok(()) + } + + pub fn flush(&self) -> miette::Result<()> { + flush(&mut self.buffer.lock(), self.stream).into_diagnostic()?; + + Ok(()) + } + + pub fn write_raw)>(&self, mut op: F) -> miette::Result<()> { + // When testing just flush immediately + if self.test_mode { + let mut buffer = Vec::new(); + + op(&mut buffer); + + flush(&mut buffer, self.stream).into_diagnostic()?; + } + // Otherwise just write to the buffer and flush + // when its 
length grows too large + else { + let mut buffer = self.buffer.lock(); + + op(&mut buffer); + + if buffer.len() >= 1024 { + flush(&mut buffer, self.stream).into_diagnostic()?; + } + } + + Ok(()) + } + + pub fn write>(&self, data: T) -> miette::Result<()> { + let data = data.as_ref(); + + if data.is_empty() { + return Ok(()); + } + + self.write_raw(|buffer| buffer.extend_from_slice(data)) + } + + pub fn write_line>(&self, data: T) -> miette::Result<()> { + let data = data.as_ref(); + + if data.is_empty() { + return Ok(()); + } + + self.write_raw(|buffer| { + buffer.extend_from_slice(data); + buffer.push(b'\n'); + }) + } + + pub fn write_newline(&self) -> miette::Result<()> { + self.write("\n") + } +} + +impl Drop for ConsoleBuffer { + fn drop(&mut self) { + self.close().unwrap(); + } +} + +impl Clone for ConsoleBuffer { + fn clone(&self) -> Self { + Self { + buffer: Arc::clone(&self.buffer), + stream: self.stream, + quiet: self.quiet.clone(), + test_mode: self.test_mode, + // Ignore for clones + channel: None, + handle: None, + } + } +} + +fn flush(buffer: &mut Vec, stream: ConsoleStream) -> io::Result<()> { + if buffer.is_empty() { + return Ok(()); + } + + let data = mem::take(buffer); + + match stream { + ConsoleStream::Stderr => io::stderr().lock().write_all(&data), + ConsoleStream::Stdout => io::stdout().lock().write_all(&data), + } +} + +fn flush_on_loop(buffer: Arc>>, stream: ConsoleStream, receiver: Receiver) { + loop { + sleep(Duration::from_millis(100)); + + let _ = flush(&mut buffer.lock(), stream); + + // Has the thread been closed? 
+ match receiver.try_recv() { + Ok(true) | Err(TryRecvError::Disconnected) => { + break; + } + _ => {} + } + } +} diff --git a/nextgen/console/src/console.rs b/nextgen/console/src/console.rs index da944f11b4f..b2c1221f646 100644 --- a/nextgen/console/src/console.rs +++ b/nextgen/console/src/console.rs @@ -1,193 +1,27 @@ +use crate::buffer::*; use crate::prompts::create_theme; +use crate::reporter::*; use inquire::ui::RenderConfig; -use miette::IntoDiagnostic; use moon_common::is_formatted_output; -use parking_lot::Mutex; use starbase::Resource; -use std::io::{self, IsTerminal, Write}; -use std::mem; use std::sync::atomic::{AtomicBool, Ordering}; -use std::sync::mpsc::{self, Receiver, Sender, TryRecvError}; use std::sync::Arc; -use std::thread::{sleep, spawn, JoinHandle}; -use std::time::Duration; +use std::thread::JoinHandle; -#[derive(Clone, Copy)] -pub enum ConsoleStream { - Stderr, - Stdout, -} - -pub struct ConsoleBuffer { - buffer: Arc>>, - closed: bool, - channel: Option>, - handle: Option>, - stream: ConsoleStream, - - quiet: Option>, - test_mode: bool, -} - -impl ConsoleBuffer { - fn internal_new(stream: ConsoleStream, with_handle: bool) -> Self { - let buffer = Arc::new(Mutex::new(Vec::new())); - let buffer_clone = Arc::clone(&buffer); - let (tx, rx) = mpsc::channel(); - - // Every 100ms, flush the buffer - let handle = if with_handle { - Some(spawn(move || flush_on_loop(buffer_clone, stream, rx))) - } else { - None - }; - - Self { - buffer, - closed: false, - channel: Some(tx), - handle, - stream, - quiet: None, - test_mode: false, - } - } - - pub fn new(stream: ConsoleStream) -> Self { - Self::internal_new(stream, true) - } - - pub fn new_testing(stream: ConsoleStream) -> Self { - let mut console = Self::internal_new(stream, false); - console.test_mode = true; - console - } - - pub fn is_terminal(&self) -> bool { - match self.stream { - ConsoleStream::Stderr => io::stderr().is_terminal(), - ConsoleStream::Stdout => io::stdout().is_terminal(), - } - } - - 
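The `ConsoleBuffer` extracted into `buffer.rs` above writes into an in-memory buffer and flushes it either when it grows past a threshold or when the background thread's 100ms timer fires. A minimal synchronous sketch of the size-triggered half of that strategy (threshold value and function names taken from the diff; the background-thread half is omitted):

```rust
use std::io::{self, Write};
use std::mem;

const FLUSH_THRESHOLD: usize = 1024;

// Append a line to the buffer, flushing only once enough data accumulates.
fn write_line(buffer: &mut Vec<u8>, data: &str) -> io::Result<()> {
    buffer.extend_from_slice(data.as_bytes());
    buffer.push(b'\n');

    if buffer.len() >= FLUSH_THRESHOLD {
        flush(buffer)?;
    }

    Ok(())
}

// Swap the buffer out and write it to the real stream in one syscall.
fn flush(buffer: &mut Vec<u8>) -> io::Result<()> {
    if buffer.is_empty() {
        return Ok(());
    }

    let data = mem::take(buffer);
    io::stdout().lock().write_all(&data)
}

fn main() -> io::Result<()> {
    let mut buffer = Vec::new();

    write_line(&mut buffer, "small write stays buffered")?;
    assert!(!buffer.is_empty());

    write_line(&mut buffer, &"x".repeat(FLUSH_THRESHOLD))?;
    assert!(buffer.is_empty()); // large write triggered a flush

    flush(&mut buffer) // drain whatever remains at shutdown
}
```

Batching writes this way keeps interleaved task output from issuing one syscall per line, while the periodic background flush (not shown) bounds how stale buffered output can get.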
-    pub fn is_quiet(&self) -> bool {
-        self.quiet
-            .as_ref()
-            .is_some_and(|quiet| quiet.load(Ordering::Relaxed))
-    }
-
-    pub fn close(&mut self) -> miette::Result<()> {
-        self.flush()?;
-
-        self.closed = true;
-
-        // Send the closed message
-        if let Some(channel) = self.channel.take() {
-            let _ = channel.send(true);
-        }
-
-        // Attempt to close the thread
-        if let Some(handle) = self.handle.take() {
-            let _ = handle.join();
-        }
-
-        Ok(())
-    }
-
-    pub fn flush(&self) -> miette::Result<()> {
-        if self.closed {
-            return Ok(());
-        }
-
-        flush(&mut self.buffer.lock(), self.stream).into_diagnostic()?;
-
-        Ok(())
-    }
-
-    pub fn write_raw<F: FnMut(&mut Vec<u8>)>(&self, mut op: F) -> miette::Result<()> {
-        if self.closed {
-            return Ok(());
-        }
-
-        // When testing just flush immediately
-        if self.test_mode {
-            let mut buffer = Vec::new();
-
-            op(&mut buffer);
-
-            flush(&mut buffer, self.stream).into_diagnostic()?;
-        }
-        // Otherwise just write to the buffer and flush
-        // when its length grows too large
-        else {
-            let mut buffer = self.buffer.lock();
-
-            op(&mut buffer);
-
-            if buffer.len() >= 1024 {
-                flush(&mut buffer, self.stream).into_diagnostic()?;
-            }
-        }
-
-        Ok(())
-    }
-
-    pub fn write<T: AsRef<[u8]>>(&self, data: T) -> miette::Result<()> {
-        let data = data.as_ref();
+pub type ConsoleTheme = RenderConfig<'static>;
-        if data.is_empty() {
-            return Ok(());
-        }
-
-        self.write_raw(|buffer| buffer.extend_from_slice(data))
-    }
-
-    pub fn write_line<T: AsRef<[u8]>>(&self, data: T) -> miette::Result<()> {
-        let data = data.as_ref();
-
-        if data.is_empty() {
-            return Ok(());
-        }
-
-        self.write_raw(|buffer| {
-            buffer.extend_from_slice(data);
-            buffer.push(b'\n');
-        })
-    }
-
-    pub fn write_newline(&self) -> miette::Result<()> {
-        self.write("\n")
-    }
-}
-
-impl Drop for ConsoleBuffer {
-    fn drop(&mut self) {
-        self.close().unwrap();
-    }
-}
+#[derive(Resource)]
+pub struct Console {
+    pub err: Arc<ConsoleBuffer>,
+    err_handle: Option<JoinHandle<()>>,
-impl Clone for ConsoleBuffer {
-    fn clone(&self) -> Self {
-        Self {
-            buffer: Arc::clone(&self.buffer),
-            closed: self.closed,
-            stream: self.stream,
-            quiet: self.quiet.clone(),
-            test_mode: self.test_mode,
-            // Ignore for clones
-            channel: None,
-            handle: None,
-        }
-    }
-}
+    pub out: Arc<ConsoleBuffer>,
+    out_handle: Option<JoinHandle<()>>,
 
-#[derive(Clone, Resource)]
-pub struct Console {
-    pub err: ConsoleBuffer,
-    pub out: ConsoleBuffer,
+    pub reporter: Arc<BoxedReporter>,
 
     quiet: Arc<AtomicBool>,
 
-    theme: Arc<RenderConfig<'static>>,
+    theme: Arc<ConsoleTheme>,
 }
 
 impl Console {
@@ -201,19 +35,25 @@ impl Console {
         out.quiet = Some(Arc::clone(&quiet));
 
         Self {
-            err,
-            out,
+            err_handle: err.handle.take(),
+            err: Arc::new(err),
+            out_handle: out.handle.take(),
+            out: Arc::new(out),
             quiet,
+            reporter: Arc::new(Box::new(EmptyReporter)),
             theme: Arc::new(create_theme()),
         }
     }
 
     pub fn new_testing() -> Self {
         Self {
-            err: ConsoleBuffer::new_testing(ConsoleStream::Stderr),
-            out: ConsoleBuffer::new_testing(ConsoleStream::Stdout),
+            err: Arc::new(ConsoleBuffer::new_testing(ConsoleStream::Stderr)),
+            err_handle: None,
+            out: Arc::new(ConsoleBuffer::new_testing(ConsoleStream::Stdout)),
+            out_handle: None,
             quiet: Arc::new(AtomicBool::new(false)),
-            theme: Arc::new(RenderConfig::empty()),
+            reporter: Arc::new(Box::new(EmptyReporter)),
+            theme: Arc::new(ConsoleTheme::empty()),
        }
    }
 
@@ -221,6 +61,14 @@ impl Console {
         self.err.close()?;
         self.out.close()?;
 
+        if let Some(handle) = self.err_handle.take() {
+            let _ = handle.join();
+        }
+
+        if let Some(handle) = self.out_handle.take() {
+            let _ = handle.join();
+        }
+
         Ok(())
     }
 
@@ -228,50 +76,42 @@ impl Console {
         self.quiet.store(true, Ordering::Release);
     }
 
-    pub fn stderr(&self) -> &ConsoleBuffer {
-        &self.err
+    pub fn stderr(&self) -> Arc<ConsoleBuffer> {
+        Arc::clone(&self.err)
     }
 
-    pub fn stdout(&self) -> &ConsoleBuffer {
-        &self.out
+    pub fn stdout(&self) -> Arc<ConsoleBuffer> {
+        Arc::clone(&self.out)
     }
 
-    pub fn theme(&self) -> Arc<RenderConfig<'static>> {
+    pub fn theme(&self) -> Arc<ConsoleTheme> {
         Arc::clone(&self.theme)
     }
-}
 
-impl Drop for Console {
-    fn drop(&mut self) {
-        self.close().unwrap();
-    }
-}
+    pub fn set_reporter(&mut self, mut reporter: impl Reporter + 'static) {
+        reporter.inherit_streams(self.stderr(), self.stdout());
+        reporter.inherit_theme(self.theme());
 
-fn flush(buffer: &mut Vec<u8>, stream: ConsoleStream) -> io::Result<()> {
-    if buffer.is_empty() {
-        return Ok(());
+        self.reporter = Arc::new(Box::new(reporter));
     }
+}
 
-    let data = mem::take(buffer);
-
-    match stream {
-        ConsoleStream::Stderr => io::stderr().lock().write_all(&data),
-        ConsoleStream::Stdout => io::stdout().lock().write_all(&data),
+impl Clone for Console {
+    fn clone(&self) -> Self {
+        Self {
+            err: self.err.clone(),
+            err_handle: None,
+            out: self.out.clone(),
+            out_handle: None,
+            quiet: self.quiet.clone(),
+            reporter: self.reporter.clone(),
+            theme: self.theme.clone(),
+        }
     }
 }
 
-fn flush_on_loop(buffer: Arc<Mutex<Vec<u8>>>, stream: ConsoleStream, receiver: Receiver<bool>) {
-    loop {
-        sleep(Duration::from_millis(100));
-
-        let _ = flush(&mut buffer.lock(), stream);
-
-        // Has the thread been closed?
-        match receiver.try_recv() {
-            Ok(true) | Err(TryRecvError::Disconnected) => {
-                break;
-            }
-            _ => {}
-        }
+impl Drop for Console {
+    fn drop(&mut self) {
+        self.close().unwrap();
     }
 }
diff --git a/nextgen/console/src/lib.rs b/nextgen/console/src/lib.rs
index efbea06f94a..6727a80635a 100644
--- a/nextgen/console/src/lib.rs
+++ b/nextgen/console/src/lib.rs
@@ -1,6 +1,10 @@
+mod buffer;
 mod console;
 mod printer;
 pub mod prompts;
+mod reporter;
 
+pub use buffer::*;
 pub use console::*;
 pub use printer::*;
+pub use reporter::*;
diff --git a/nextgen/console/src/printer.rs b/nextgen/console/src/printer.rs
index 2b28f38d5f2..55081bd6d1a 100644
--- a/nextgen/console/src/printer.rs
+++ b/nextgen/console/src/printer.rs
@@ -1,4 +1,4 @@
-use crate::console::ConsoleBuffer;
+use crate::buffer::ConsoleBuffer;
 use starbase_styles::color::owo::{OwoColorize, XtermColors};
 use starbase_styles::color::{self, no_color, Color, OwoStyle};
 
@@ -9,6 +9,7 @@ const MUTED_COLORS: [u8; 4] = [240, 242, 244, 246];
 const SETUP_COLORS: [u8; 4] = [198, 205, 212, 219];
 const ANNOUNCEMENT_COLORS: [u8; 4] = [35, 42, 49,
86];
 
+#[derive(Clone, Copy)]
 pub enum Checkpoint {
     Announcement,
     RunFailed,
diff --git a/nextgen/console/src/reporter.rs b/nextgen/console/src/reporter.rs
new file mode 100644
index 00000000000..bcbd762ea07
--- /dev/null
+++ b/nextgen/console/src/reporter.rs
@@ -0,0 +1,98 @@
+use crate::buffer::ConsoleBuffer;
+use crate::console::ConsoleTheme;
+use miette::Error as Report;
+use moon_action::{Action, ActionNode, Operation, OperationList};
+use moon_config::TaskOutputStyle;
+use moon_target::Target;
+use std::sync::Arc;
+use std::time::Duration;
+
+#[derive(Debug, Default)]
+pub struct PipelineReportItem {
+    pub duration: Option<Duration>,
+    pub summarize: bool,
+}
+
+#[derive(Debug, Default)]
+pub struct TaskReportItem {
+    pub attempt_current: u8,
+    pub attempt_total: u8,
+    pub hash: Option<String>,
+    pub output_streamed: bool,
+    pub output_style: Option<TaskOutputStyle>,
+}
+
+pub trait Reporter: Send + Sync {
+    fn inherit_streams(&mut self, _err: Arc<ConsoleBuffer>, _out: Arc<ConsoleBuffer>) {}
+
+    fn inherit_theme(&mut self, _theme: Arc<ConsoleTheme>) {}
+
+    fn on_pipeline_started(&self, _nodes: &[&ActionNode]) -> miette::Result<()> {
+        Ok(())
+    }
+
+    fn on_pipeline_completed(
+        &self,
+        _actions: &[Action],
+        _item: &PipelineReportItem,
+        _error: Option<&Report>,
+    ) -> miette::Result<()> {
+        Ok(())
+    }
+
+    fn on_pipeline_aborted(
+        &self,
+        _actions: &[Action],
+        _item: &PipelineReportItem,
+        _error: Option<&Report>,
+    ) -> miette::Result<()> {
+        Ok(())
+    }
+
+    fn on_action_started(&self, _action: &Action) -> miette::Result<()> {
+        Ok(())
+    }
+
+    fn on_action_completed(&self, _action: &Action, _error: Option<&Report>) -> miette::Result<()> {
+        Ok(())
+    }
+
+    fn on_task_started(
+        &self,
+        _target: &Target,
+        _attempt: &Operation,
+        _item: &TaskReportItem,
+    ) -> miette::Result<()> {
+        Ok(())
+    }
+
+    fn on_task_running(&self, _target: &Target, _secs: u32) -> miette::Result<()> {
+        Ok(())
+    }
+
+    fn on_task_finished(
+        &self,
+        _target: &Target,
+        _attempt: &Operation,
+        _item: &TaskReportItem,
+        _error: Option<&Report>,
+    ) -> miette::Result<()> {
+        Ok(())
+    }
+
+    fn on_task_completed(
+        &self,
+        _target: &Target,
+        _operations: &OperationList,
+        _item: &TaskReportItem,
+        _error: Option<&Report>,
+    ) -> miette::Result<()> {
+        Ok(())
+    }
+}
+
+pub type BoxedReporter = Box<dyn Reporter>;
+
+pub struct EmptyReporter;
+
+impl Reporter for EmptyReporter {}
diff --git a/nextgen/env/Cargo.toml b/nextgen/env/Cargo.toml
index 0fa205df976..438dbc422ed 100644
--- a/nextgen/env/Cargo.toml
+++ b/nextgen/env/Cargo.toml
@@ -13,3 +13,6 @@ moon_common = { path = "../common" }
 dirs = { workspace = true }
 miette = { workspace = true }
 tracing = { workspace = true }
+
+[lints]
+workspace = true
diff --git a/nextgen/extension-plugin/Cargo.toml b/nextgen/extension-plugin/Cargo.toml
index 05fd90c5ecf..4117ef7bf02 100644
--- a/nextgen/extension-plugin/Cargo.toml
+++ b/nextgen/extension-plugin/Cargo.toml
@@ -12,3 +12,6 @@ publish = false
 moon_pdk_api = { path = "../pdk-api" }
 moon_plugin = { path = "../plugin" }
 miette = { workspace = true }
+
+[lints]
+workspace = true
diff --git a/nextgen/file-group/Cargo.toml b/nextgen/file-group/Cargo.toml
index 035e66b3409..28af1bc6627 100644
--- a/nextgen/file-group/Cargo.toml
+++ b/nextgen/file-group/Cargo.toml
@@ -22,3 +22,6 @@ tracing = { workspace = true }
 
 [dev-dependencies]
 starbase_sandbox = { workspace = true }
+
+[lints]
+workspace = true
diff --git a/nextgen/hash/Cargo.toml b/nextgen/hash/Cargo.toml
index f82517af574..05ac0ef252d 100644
--- a/nextgen/hash/Cargo.toml
+++ b/nextgen/hash/Cargo.toml
@@ -20,3 +20,6 @@ tracing = { workspace = true }
 
 [dev-dependencies]
 starbase_sandbox = { workspace = true }
+
+[lints]
+workspace = true
diff --git a/nextgen/pdk-api/Cargo.toml b/nextgen/pdk-api/Cargo.toml
index 737aac15102..8500ea83285 100644
--- a/nextgen/pdk-api/Cargo.toml
+++ b/nextgen/pdk-api/Cargo.toml
@@ -13,3 +13,6 @@ moon_config = { version = "0.0.7", path = "../config" }
 rustc-hash = { workspace = true }
 serde = { workspace = true }
 warpgate_api = { workspace = true }
+
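[Editor's note] The `Reporter` trait above gives every hook a default no-op body, which is why `EmptyReporter` satisfies it with an empty `impl` while real reporters override only the hooks they need. A minimal std-only sketch of that pattern (`CountingReporter` and `run_pipeline` are illustrative names, not part of the patch):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Simplified mirror of the patch's `Reporter`: all hooks default to no-ops.
pub trait Reporter: Send + Sync {
    fn on_task_started(&self, _target: &str) {}
    fn on_task_finished(&self, _target: &str, _error: Option<&str>) {}
}

// Like the patch's `EmptyReporter`: a valid reporter with zero overrides.
pub struct EmptyReporter;
impl Reporter for EmptyReporter {}

// A reporter that only cares about one hook.
pub struct CountingReporter {
    pub finished: AtomicUsize,
}

impl Reporter for CountingReporter {
    fn on_task_finished(&self, _target: &str, _error: Option<&str>) {
        self.finished.fetch_add(1, Ordering::Relaxed);
    }
}

// A stand-in pipeline that fires the hooks for each target.
pub fn run_pipeline(reporter: &dyn Reporter, targets: &[&str]) -> usize {
    for target in targets {
        reporter.on_task_started(target);
        reporter.on_task_finished(target, None);
    }
    targets.len()
}

fn main() {
    let reporter = CountingReporter { finished: AtomicUsize::new(0) };
    assert_eq!(run_pipeline(&reporter, &["app:build", "app:test"]), 2);
    assert_eq!(reporter.finished.load(Ordering::Relaxed), 2);

    // EmptyReporter plugs in with no overrides at all.
    run_pipeline(&EmptyReporter, &["app:lint"]);
    println!("ok");
}
```

Boxing the trait object (`BoxedReporter` in the patch) lets the console swap reporters at runtime via `set_reporter`.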
+[lints] +workspace = true diff --git a/nextgen/pdk-test-utils/Cargo.toml b/nextgen/pdk-test-utils/Cargo.toml index a0679518fa5..91a6d8371a1 100644 --- a/nextgen/pdk-test-utils/Cargo.toml +++ b/nextgen/pdk-test-utils/Cargo.toml @@ -13,3 +13,6 @@ warpgate = { workspace = true } extism = { workspace = true } serde = { workspace = true } serde_json = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/pdk/Cargo.toml b/nextgen/pdk/Cargo.toml index 95495e35024..bbac14e3264 100644 --- a/nextgen/pdk/Cargo.toml +++ b/nextgen/pdk/Cargo.toml @@ -13,3 +13,6 @@ clap = { workspace = true, features = ["derive"] } extism-pdk = { workspace = true } serde = { workspace = true } warpgate_pdk = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/pipeline/Cargo.toml b/nextgen/pipeline/Cargo.toml index 55a7ef8326e..c2e97554cd0 100644 --- a/nextgen/pipeline/Cargo.toml +++ b/nextgen/pipeline/Cargo.toml @@ -22,3 +22,6 @@ tracing = { workspace = true } [dev-dependencies] rand = "0.8.5" + +[lints] +workspace = true diff --git a/nextgen/platform-detector/Cargo.toml b/nextgen/platform-detector/Cargo.toml index 6566c9ac234..6ff77066ba8 100644 --- a/nextgen/platform-detector/Cargo.toml +++ b/nextgen/platform-detector/Cargo.toml @@ -8,3 +8,6 @@ publish = false moon_config = { path = "../config" } once_cell = { workspace = true } regex = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/platform-plugin/Cargo.toml b/nextgen/platform-plugin/Cargo.toml index 27d64195ffa..0d238b783b0 100644 --- a/nextgen/platform-plugin/Cargo.toml +++ b/nextgen/platform-plugin/Cargo.toml @@ -18,3 +18,6 @@ proto_core = { workspace = true } rustc-hash = { workspace = true } starbase = { workspace = true } tracing = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/platform-runtime/Cargo.toml b/nextgen/platform-runtime/Cargo.toml index dd4f05cba87..4a10c6ccb43 100644 --- a/nextgen/platform-runtime/Cargo.toml +++ 
b/nextgen/platform-runtime/Cargo.toml @@ -7,3 +7,6 @@ publish = false [dependencies] moon_config = { path = "../config" } serde = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/plugin/Cargo.toml b/nextgen/plugin/Cargo.toml index 8b858c67c75..615409311e7 100644 --- a/nextgen/plugin/Cargo.toml +++ b/nextgen/plugin/Cargo.toml @@ -26,3 +26,6 @@ warpgate = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/process/Cargo.toml b/nextgen/process/Cargo.toml index 0a6ab8f3da4..93283de7ad6 100644 --- a/nextgen/process/Cargo.toml +++ b/nextgen/process/Cargo.toml @@ -20,3 +20,6 @@ system_env = { workspace = true } thiserror = { workspace = true } tracing = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/project-builder/Cargo.toml b/nextgen/project-builder/Cargo.toml index 7fcefe91666..511545ee8f5 100644 --- a/nextgen/project-builder/Cargo.toml +++ b/nextgen/project-builder/Cargo.toml @@ -23,3 +23,6 @@ tracing = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/project-constraints/Cargo.toml b/nextgen/project-constraints/Cargo.toml index 008f478846d..5f4d65cd05c 100644 --- a/nextgen/project-constraints/Cargo.toml +++ b/nextgen/project-constraints/Cargo.toml @@ -14,3 +14,6 @@ moon_config = { path = "../config" } moon_project = { path = "../project" } miette = { workspace = true } thiserror = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/project-expander/Cargo.toml b/nextgen/project-expander/Cargo.toml index 31b4d87feab..84aeee1a86d 100644 --- a/nextgen/project-expander/Cargo.toml +++ b/nextgen/project-expander/Cargo.toml @@ -26,3 +26,6 @@ tracing = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } + +[lints] +workspace = true diff --git 
a/nextgen/project-graph/Cargo.toml b/nextgen/project-graph/Cargo.toml index 5ad1c41971b..a3f75fe9a4c 100644 --- a/nextgen/project-graph/Cargo.toml +++ b/nextgen/project-graph/Cargo.toml @@ -35,3 +35,6 @@ tracing = { workspace = true } moon_test_utils2 = { path = "../test-utils" } starbase_sandbox = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/project/Cargo.toml b/nextgen/project/Cargo.toml index 7062a380107..d484b780830 100644 --- a/nextgen/project/Cargo.toml +++ b/nextgen/project/Cargo.toml @@ -18,3 +18,6 @@ miette = { workspace = true } rustc-hash = { workspace = true } serde = { workspace = true } thiserror = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/query/Cargo.toml b/nextgen/query/Cargo.toml index 92bf95a65c3..4a79e17d99d 100644 --- a/nextgen/query/Cargo.toml +++ b/nextgen/query/Cargo.toml @@ -16,3 +16,6 @@ pest = "2.7.9" pest_derive = "2.7.9" starbase_utils = { workspace = true } thiserror = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/target/Cargo.toml b/nextgen/target/Cargo.toml index 51ef48dd424..dfaf781f728 100644 --- a/nextgen/target/Cargo.toml +++ b/nextgen/target/Cargo.toml @@ -15,3 +15,6 @@ regex = { workspace = true } schematic = { workspace = true } serde = { workspace = true } thiserror = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/task-builder/Cargo.toml b/nextgen/task-builder/Cargo.toml index f0959b1dd2d..4d26514021a 100644 --- a/nextgen/task-builder/Cargo.toml +++ b/nextgen/task-builder/Cargo.toml @@ -22,3 +22,6 @@ tracing = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/task-hasher/Cargo.toml b/nextgen/task-hasher/Cargo.toml index 4455a5e9afe..41934e77e9e 100644 --- a/nextgen/task-hasher/Cargo.toml +++ b/nextgen/task-hasher/Cargo.toml @@ -25,3 +25,6 @@ tracing = { workspace = true } moon_test_utils2 
= { path = "../test-utils" } starbase_sandbox = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/task-hasher/src/task_hash.rs b/nextgen/task-hasher/src/task_hash.rs index 0c74528dacc..1bade7cb4a3 100644 --- a/nextgen/task-hasher/src/task_hash.rs +++ b/nextgen/task-hasher/src/task_hash.rs @@ -15,7 +15,7 @@ hash_content!( pub args: Vec<&'task str>, // Task `deps` mapped to their hash - pub deps: BTreeMap<&'task Target, String>, // &'task str>, + pub deps: BTreeMap<&'task Target, String>, // Environment variables pub env: BTreeMap<&'task str, &'task str>, @@ -60,7 +60,9 @@ impl<'task> TaskHash<'task> { platform: &task.platform, project_deps: project.get_dependency_ids(), target: &task.target, - version: "1".into(), + // 1 - Original implementation + // 2 - New task runner crate, tarball structure changed + version: "2".into(), } } } diff --git a/nextgen/task-hasher/tests/__fixtures__/inputs/moon.yml b/nextgen/task-hasher/tests/__fixtures__/inputs/moon.yml index 618ec2f23b3..8e4131ff9d4 100644 --- a/nextgen/task-hasher/tests/__fixtures__/inputs/moon.yml +++ b/nextgen/task-hasher/tests/__fixtures__/inputs/moon.yml @@ -35,3 +35,13 @@ tasks: command: noop inputs: - 'dir/{a,xy}z.txt' + + envFile: + command: noop + options: + envFile: true + + envFileList: + command: noop + options: + envFile: ['.env.prod', '.env.local'] diff --git a/nextgen/task-hasher/tests/task_hasher_test.rs b/nextgen/task-hasher/tests/task_hasher_test.rs index a7dcd588b1b..dbd44c7570e 100644 --- a/nextgen/task-hasher/tests/task_hasher_test.rs +++ b/nextgen/task-hasher/tests/task_hasher_test.rs @@ -263,6 +263,57 @@ mod task_hasher { assert_eq!(result.inputs.keys().collect::>(), ["created.txt"]); } + + #[tokio::test] + async fn includes_env_file() { + let sandbox = create_sandbox("inputs"); + sandbox.enable_git(); + sandbox.create_file(".env", ""); + + let (project_graph, vcs) = generate_project_graph(sandbox.path()).await; + let (vcs_config, 
glob_config) = create_hasher_configs(); + let project = project_graph.get("root").unwrap(); + + let expected = [".env"]; + + // VCS + let result = + generate_hash(&project, "envFile", &vcs, sandbox.path(), &vcs_config).await; + + assert_eq!(result.inputs.keys().collect::>(), expected); + + // Glob + let result = + generate_hash(&project, "envFile", &vcs, sandbox.path(), &glob_config).await; + + assert_eq!(result.inputs.keys().collect::>(), expected); + } + + #[tokio::test] + async fn includes_custom_env_files() { + let sandbox = create_sandbox("inputs"); + sandbox.enable_git(); + sandbox.create_file(".env.prod", ""); + sandbox.create_file(".env.local", ""); + + let (project_graph, vcs) = generate_project_graph(sandbox.path()).await; + let (vcs_config, glob_config) = create_hasher_configs(); + let project = project_graph.get("root").unwrap(); + + let expected = [".env.local", ".env.prod"]; + + // VCS + let result = + generate_hash(&project, "envFileList", &vcs, sandbox.path(), &vcs_config).await; + + assert_eq!(result.inputs.keys().collect::>(), expected); + + // Glob + let result = + generate_hash(&project, "envFileList", &vcs, sandbox.path(), &glob_config).await; + + assert_eq!(result.inputs.keys().collect::>(), expected); + } } mod output_filtering { diff --git a/nextgen/task-runner/Cargo.toml b/nextgen/task-runner/Cargo.toml new file mode 100644 index 00000000000..4a0d8b241d1 --- /dev/null +++ b/nextgen/task-runner/Cargo.toml @@ -0,0 +1,43 @@ +[package] +name = "moon_task_runner" +version = "0.0.1" +edition = "2021" +license = "MIT" +description = "System for running tasks." 
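[Editor's note] The tests above assert hashed inputs in sorted order (`.env.local` before `.env.prod`) regardless of how the files were discovered. That stability falls out of keying the hash content by a sorted map, as `TaskHash` does with its `BTreeMap` fields. A std-only sketch (the `manifest` helper is hypothetical, not moon's actual hasher):

```rust
use std::collections::BTreeMap;

// Task inputs keyed by workspace-relative path, mapped to a content digest.
// A BTreeMap keeps entries sorted by path, so the serialized manifest -- and
// therefore any hash derived from it -- is deterministic.
fn manifest(inputs: &BTreeMap<String, String>) -> String {
    inputs
        .iter()
        .map(|(path, digest)| format!("{path}={digest}"))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let mut a = BTreeMap::new();
    a.insert(".env.prod".to_string(), "abc".to_string());
    a.insert(".env.local".to_string(), "def".to_string());

    let mut b = BTreeMap::new();
    b.insert(".env.local".to_string(), "def".to_string());
    b.insert(".env.prod".to_string(), "abc".to_string());

    // Insertion order does not matter: `.env.local` sorts first either way,
    // matching the `expected = [".env.local", ".env.prod"]` order above.
    assert_eq!(manifest(&a), manifest(&b));
    assert_eq!(manifest(&a), ".env.local=def\n.env.prod=abc");
    println!("ok");
}
```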
+homepage = "https://moonrepo.dev/moon" +repository = "https://github.com/moonrepo/moon" +publish = false + +[dependencies] +moon_api = { path = "../api" } +moon_action = { path = "../action" } +moon_action_context = { path = "../action-context" } +moon_cache = { path = "../cache" } +moon_cache_item = { path = "../cache-item" } +moon_common = { path = "../common" } +moon_config = { path = "../config" } +moon_console = { path = "../console" } +# TODO remove +moon_platform = { path = "../../crates/core/platform" } +moon_platform_runtime = { path = "../platform-runtime" } +moon_process = { path = "../process" } +moon_project = { path = "../project" } +moon_task = { path = "../task" } +moon_task_hasher = { path = "../task-hasher" } +moon_time = { path = "../time" } +moon_workspace = { path = "../workspace" } +miette = { workspace = true } +serde = { workspace = true } +starbase_archive = { workspace = true } +starbase_utils = { workspace = true } +thiserror = { workspace = true } +tokio = { workspace = true } +tracing = { workspace = true } + +[dev-dependencies] +moon_test_utils2 = { path = "../test-utils" } +proto_core = { workspace = true } +starbase_sandbox = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/task-runner/src/command_builder.rs b/nextgen/task-runner/src/command_builder.rs new file mode 100644 index 00000000000..f3f8512de3e --- /dev/null +++ b/nextgen/task-runner/src/command_builder.rs @@ -0,0 +1,262 @@ +use moon_action::ActionNode; +use moon_action_context::ActionContext; +use moon_common::consts::PROTO_CLI_VERSION; +use moon_config::TaskOptionAffectedFiles; +use moon_platform::PlatformManager; +use moon_process::{Command, Shell}; +use moon_project::Project; +use moon_task::Task; +use moon_workspace::Workspace; +use std::path::Path; +use tracing::{debug, trace}; + +pub struct CommandBuilder<'task> { + node: &'task ActionNode, + project: &'task Project, + task: &'task Task, + working_dir: &'task Path, + workspace: &'task Workspace, 
+ platform_manager: &'task PlatformManager, + + // To be built + command: Command, +} + +impl<'task> CommandBuilder<'task> { + pub fn new( + workspace: &'task Workspace, + project: &'task Project, + task: &'task Task, + node: &'task ActionNode, + ) -> Self { + let working_dir = if task.options.run_from_workspace_root { + &workspace.root + } else { + &project.root + }; + + Self { + node, + project, + task, + working_dir, + workspace, + platform_manager: PlatformManager::read(), + command: Command::new("noop"), + } + } + + pub fn set_platform_manager(&mut self, manager: &'task PlatformManager) { + self.platform_manager = manager; + } + + pub async fn build(mut self, context: &ActionContext) -> miette::Result { + self.command = self + .platform_manager + .get(self.task.platform)? + .create_run_target_command( + context, + self.project, + self.task, + self.node.get_runtime(), + self.working_dir, + ) + .await?; + + debug!( + task = self.task.target.as_str(), + command = self.command.bin.to_str(), + working_dir = ?self.working_dir, + "Creating task command to execute", + ); + + // We need to handle non-zero exit code's manually + self.command + .cwd(self.working_dir) + .set_error_on_nonzero(false); + + // Order is important! + self.inject_args(context); + self.inject_env(); + self.inject_shell(); + self.inherit_affected(context)?; + self.inherit_config(); + + Ok(self.command) + } + + fn inject_args(&mut self, context: &ActionContext) { + // Must be first! 
+ if let ActionNode::RunTask(inner) = &self.node { + if !inner.args.is_empty() { + trace!( + task = self.task.target.as_str(), + args = ?inner.args, + "Inheriting args from dependent task" + ); + + self.command.args(&inner.args); + } + } + + if context.should_inherit_args(&self.task.target) && !context.passthrough_args.is_empty() { + trace!( + task = self.task.target.as_str(), + args = ?context.passthrough_args, + "Inheriting args passed through the command line" + ); + + self.command.args(&context.passthrough_args); + } + } + + fn inject_env(&mut self) { + // Must be first! + if let ActionNode::RunTask(inner) = &self.node { + if !inner.env.is_empty() { + trace!( + task = self.task.target.as_str(), + env = ?inner.env, + "Inheriting env from dependent task" + ); + + self.command.envs(&inner.env); + } + } + + self.command.env("PWD", self.working_dir); + + // moon + self.command + .env("MOON_CACHE_DIR", &self.workspace.cache_engine.cache_dir); + self.command + .env("MOON_PROJECT_ID", self.project.id.as_str()); + self.command.env("MOON_PROJECT_ROOT", &self.project.root); + self.command + .env("MOON_PROJECT_SOURCE", self.project.source.as_str()); + self.command.env("MOON_TARGET", &self.task.target.id); + self.command + .env("MOON_WORKSPACE_ROOT", &self.workspace.root); + self.command + .env("MOON_WORKING_DIR", &self.workspace.working_dir); + self.command.env( + "MOON_PROJECT_SNAPSHOT", + self.workspace + .cache_engine + .state + .get_project_snapshot_path(&self.project.id), + ); + + // proto + self.command.env("PROTO_IGNORE_MIGRATE_WARNING", "true"); + self.command.env("PROTO_NO_PROGRESS", "true"); + self.command.env("PROTO_VERSION", PROTO_CLI_VERSION); + + for (key, value) in self.workspace.toolchain_config.get_version_env_vars() { + // Don't overwrite proto version variables inherited from platforms + self.command.env_if_missing(key, value); + } + } + + fn inject_shell(&mut self) { + if self.task.options.shell == Some(true) { + // Process command set's a shell by 
default! + + #[cfg(unix)] + if let Some(shell) = &self.task.options.unix_shell { + use moon_config::TaskUnixShell; + + self.command.with_shell(match shell { + TaskUnixShell::Bash => Shell::new("bash"), + TaskUnixShell::Elvish => Shell::new("elvish"), + TaskUnixShell::Fish => Shell::new("fish"), + TaskUnixShell::Zsh => Shell::new("zsh"), + }); + } + + #[cfg(windows)] + if let Some(shell) = &self.task.options.windows_shell { + use moon_config::TaskWindowsShell; + + self.command.with_shell(match shell { + TaskWindowsShell::Bash => Shell::new("bash"), + TaskWindowsShell::Pwsh => Shell::new("pwsh"), + }); + } + } else { + self.command.without_shell(); + } + } + + fn inherit_affected(&mut self, context: &ActionContext) -> miette::Result<()> { + let Some(check_affected) = &self.task.options.affected_files else { + return Ok(()); + }; + + // Only get files when `--affected` is passed + let mut files = if context.affected_only { + self.task + .get_affected_files(&context.touched_files, &self.project.source)? + } else { + Vec::with_capacity(0) + }; + + // If we have no files, use the task's inputs instead + if files.is_empty() && self.task.options.affected_pass_inputs { + files = self + .task + .get_input_files(&self.workspace.root)? 
+ .into_iter() + .filter_map(|file| { + file.strip_prefix(&self.project.source) + .ok() + .map(|file| file.to_owned()) + }) + .collect(); + } + + files.sort(); + + // Set an environment variable + if matches!( + check_affected, + TaskOptionAffectedFiles::Env | TaskOptionAffectedFiles::Enabled(true) + ) { + self.command.env( + "MOON_AFFECTED_FILES", + if files.is_empty() { + ".".into() + } else { + files + .iter() + .map(|file| file.as_str()) + .collect::>() + .join(",") + }, + ); + } + + // Pass an argument + if matches!( + check_affected, + TaskOptionAffectedFiles::Args | TaskOptionAffectedFiles::Enabled(true) + ) { + if files.is_empty() { + self.command.arg_if_missing("."); + } else { + // Mimic relative from ("./") + self.command + .args(files.iter().map(|file| format!("./{file}"))); + } + } + + Ok(()) + } + + fn inherit_config(&mut self) { + // Terminal colors + if self.workspace.config.runner.inherit_colors_for_piped_tasks { + self.command.inherit_colors(); + } + } +} diff --git a/nextgen/task-runner/src/command_executor.rs b/nextgen/task-runner/src/command_executor.rs new file mode 100644 index 00000000000..228af2548d3 --- /dev/null +++ b/nextgen/task-runner/src/command_executor.rs @@ -0,0 +1,310 @@ +use moon_action::{ActionNode, ActionStatus, Operation, OperationList, OperationType}; +use moon_action_context::{ActionContext, TargetState}; +use moon_common::{color, is_ci, is_test_env}; +use moon_config::TaskOutputStyle; +use moon_console::{Console, TaskReportItem}; +use moon_process::args::join_args; +use moon_process::Command; +use moon_project::Project; +use moon_task::Task; +use moon_workspace::Workspace; +use std::sync::Arc; +use std::time::Duration; +use tokio::task::{self, JoinHandle}; +use tokio::time::sleep; +use tracing::debug; + +fn is_ci_env() -> bool { + is_ci() && !is_test_env() +} + +#[derive(Debug)] +pub struct CommandExecuteResult { + pub attempts: OperationList, + pub error: Option, + pub report_item: TaskReportItem, + pub run_state: 
TargetState,
+}
+
+/// Run the command as a child process and capture its output. If the process fails
+/// and `retry_count` is greater than 0, attempt the process again in case it passes.
+pub struct CommandExecutor<'task> {
+    task: &'task Task,
+    project: &'task Project,
+    workspace: &'task Workspace,
+
+    command: Command,
+    console: Arc<Console>,
+    handle: Option<JoinHandle<()>>,
+
+    attempts: OperationList,
+    attempt_index: u8,
+    attempt_total: u8,
+
+    // States
+    interactive: bool,
+    persistent: bool,
+    stream: bool,
+}
+
+impl<'task> CommandExecutor<'task> {
+    pub fn new(
+        workspace: &'task Workspace,
+        project: &'task Project,
+        task: &'task Task,
+        node: &ActionNode,
+        console: Arc<Console>,
+        mut command: Command,
+    ) -> Self {
+        command.with_console(console.clone());
+
+        Self {
+            attempts: OperationList::default(),
+            attempt_index: 1,
+            attempt_total: task.options.retry_count + 1,
+            interactive: node.is_interactive() || task.is_interactive(),
+            persistent: node.is_persistent() || task.is_persistent(),
+            stream: false,
+            handle: None,
+            workspace,
+            project,
+            task,
+            command,
+            console,
+        }
+    }
+
+    pub async fn execute(
+        mut self,
+        context: &ActionContext,
+        hash: Option<&str>,
+    ) -> miette::Result<CommandExecuteResult> {
+        // Prepare state for the executor, and each attempt
+        let mut report_item = self.prepate_state(context);
+        let mut run_state = TargetState::Failed;
+
+        // Hash is empty if cache is disabled
+        report_item.hash = hash.map(|h| h.to_string());
+
+        // For long-running process, log a message on an interval to indicate it's still running
+        self.start_monitoring();
+
+        // Execute the command on a loop as an attempt for every retry count we have
+        let execution_error: Option<miette::Report> = loop {
+            let mut attempt = Operation::new(OperationType::TaskExecution);
+            report_item.attempt_current = self.attempt_index;
+
+            debug!(
+                task = self.task.target.as_str(),
+                command = self.command.bin.to_str(),
+                "Running task (attempt {} of {})",
+                self.attempt_index,
+                self.attempt_total
+            );
+
+            self.console
.reporter + .on_task_started(&self.task.target, &attempt, &report_item)?; + + self.print_command(context)?; + + // Attempt to execute command + let mut command = self.command.create_async(); + + let attempt_result = match (self.stream, self.interactive) { + (true, true) | (false, true) => command.exec_stream_output().await, + (true, false) => command.exec_stream_and_capture_output().await, + _ => command.exec_capture_output().await, + }; + + // Handle the execution result + match attempt_result { + // Zero and non-zero exit codes + Ok(mut output) => { + debug!( + task = self.task.target.as_str(), + command = self.command.bin.to_str(), + exit_code = output.status.code(), + "Ran task, checking conditions", + ); + + attempt.finish_from_output(&mut output); + + self.console.reporter.on_task_finished( + &self.task.target, + &attempt, + &report_item, + None, + )?; + + self.attempts.push(attempt); + + // Successful execution, so break the loop + if output.status.success() { + debug!( + task = self.task.target.as_str(), + "Task was successful, proceeding to next step", + ); + + run_state = hash.map_or(TargetState::Passthrough, |hash| { + TargetState::Passed(hash.to_string()) + }); + + break None; + } + // Unsuccessful execution (maybe flaky), attempt again + else if self.attempt_index < self.attempt_total { + debug!( + task = self.task.target.as_str(), + "Task was unsuccessful, attempting again", + ); + + self.attempt_index += 1; + continue; + } + // We've hit our max attempts, so break + else { + debug!( + task = self.task.target.as_str(), + "Task was unsuccessful, failing early as we hit our max attempts", + ); + + break None; + } + } + + // Process unexpectedly crashed + Err(error) => { + debug!( + task = self.task.target.as_str(), + command = self.command.bin.to_str(), + "Failed to run task, an unexpected error occurred", + ); + + attempt.finish(ActionStatus::Failed); + + self.console.reporter.on_task_finished( + &self.task.target, + &attempt, + &report_item, + 
Some(&error), + )?; + + self.attempts.push(attempt); + + break Some(error); + } + } + }; + + self.stop_monitoring(); + + Ok(CommandExecuteResult { + attempts: self.attempts.take(), + error: execution_error, + report_item, + run_state, + }) + } + + fn start_monitoring(&mut self) { + if self.persistent || self.interactive { + return; + } + + let console = self.console.clone(); + let target = self.task.target.clone(); + + self.handle = Some(task::spawn(async move { + let mut secs: u32 = 0; + + loop { + sleep(Duration::from_secs(30)).await; + secs += 30; + + let _ = console.reporter.on_task_running(&target, secs); + } + })); + } + + fn stop_monitoring(&mut self) { + if let Some(handle) = self.handle.take() { + handle.abort(); + } + } + + fn prepate_state(&mut self, context: &ActionContext) -> TaskReportItem { + let is_primary = context.is_primary_target(&self.task.target); + + // When a task is configured as local (no caching), or the interactive flag is passed, + // we don't "capture" stdout/stderr (which breaks stdin) and let it stream natively. + if !self.task.options.cache && context.primary_targets.len() == 1 { + self.interactive = true; + } + + // When the primary target, always stream the output for a better developer experience. + // However, transitive targets can opt into streaming as well. + self.stream = if let Some(output_style) = &self.task.options.output_style { + matches!(output_style, TaskOutputStyle::Stream) + } else { + is_primary || is_ci_env() + }; + + // Transitive targets may run concurrently, so differentiate them with a prefix. 
+ if self.stream && (is_ci_env() || !is_primary || context.primary_targets.len() > 1) { + let prefix_max_width = context + .primary_targets + .iter() + .map(|target| target.id.len()) + .max(); + + self.command + .set_prefix(&self.task.target.id, prefix_max_width); + } + + TaskReportItem { + attempt_current: self.attempt_index, + attempt_total: self.attempt_total, + hash: None, + output_streamed: self.stream, + output_style: self.task.options.output_style, + } + } + + fn print_command(&self, context: &ActionContext) -> miette::Result<()> { + if !self.workspace.config.runner.log_running_command { + return Ok(()); + } + + let task = &self.task; + + let mut args = vec![&task.command]; + args.extend(&task.args); + + if context.should_inherit_args(&task.target) { + args.extend(&context.passthrough_args); + } + + let command_line = join_args(args); + + let message = color::muted_light(self.command.inspect().format_command( + &command_line, + &self.workspace.root, + Some(if task.options.run_from_workspace_root { + &self.workspace.root + } else { + &self.project.root + }), + )); + + self.console.out.write_line(message)?; + + Ok(()) + } +} + +impl<'task> Drop for CommandExecutor<'task> { + fn drop(&mut self) { + self.stop_monitoring(); + } +} diff --git a/nextgen/task-runner/src/lib.rs b/nextgen/task-runner/src/lib.rs new file mode 100644 index 00000000000..1659a92cd0d --- /dev/null +++ b/nextgen/task-runner/src/lib.rs @@ -0,0 +1,12 @@ +// Public for tests +pub mod command_builder; +pub mod command_executor; +pub mod output_archiver; +pub mod output_hydrater; +mod run_state; +mod task_runner; +mod task_runner_error; + +pub use run_state::*; +pub use task_runner::*; +pub use task_runner_error::*; diff --git a/nextgen/task-runner/src/output_archiver.rs b/nextgen/task-runner/src/output_archiver.rs new file mode 100644 index 00000000000..fbb5d75eebd --- /dev/null +++ b/nextgen/task-runner/src/output_archiver.rs @@ -0,0 +1,188 @@ +use crate::task_runner_error::TaskRunnerError; 
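[Editor's note] Stripped of process spawning and reporting, the retry policy in `CommandExecutor::execute` above reduces to a bounded loop over `retry_count + 1` attempts, stopping at the first success. A std-only sketch with the child process stubbed out as a closure (`execute_with_retries` is illustrative, not the patch's API):

```rust
// Try up to `retry_count + 1` times, mirroring `attempt_index`/`attempt_total`
// in the executor; returns (succeeded, attempts_used).
fn execute_with_retries<F>(retry_count: u8, mut run: F) -> (bool, u8)
where
    F: FnMut(u8) -> bool, // returns true when the "process" exits 0
{
    let attempt_total = retry_count + 1;
    let mut attempt_index = 1;

    loop {
        if run(attempt_index) {
            return (true, attempt_index); // success: stop retrying
        }
        if attempt_index >= attempt_total {
            return (false, attempt_index); // max attempts hit: fail early
        }
        attempt_index += 1;
    }
}

fn main() {
    // A flaky task that fails twice then passes; retry_count = 3 allows 4 attempts.
    let (ok, attempts) = execute_with_retries(3, |attempt| attempt >= 3);
    assert!(ok);
    assert_eq!(attempts, 3);

    // An always-failing task exhausts every attempt.
    let (ok, attempts) = execute_with_retries(2, |_| false);
    assert!(!ok);
    assert_eq!(attempts, 3);
    println!("ok");
}
```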
+use moon_common::color; +use moon_config::ProjectConfig; +use moon_task::{TargetError, TargetScope, Task}; +use moon_workspace::Workspace; +use starbase_archive::tar::TarPacker; +use starbase_archive::Archiver; +use starbase_utils::glob; +use std::path::{Path, PathBuf}; +use tracing::{debug, warn}; + +/// Cache outputs to the `.moon/cache/outputs` folder and to the cloud, +/// so that subsequent builds are faster, and any local outputs +/// can be hydrated easily. +pub struct OutputArchiver<'task> { + pub project_config: &'task ProjectConfig, + pub task: &'task Task, + pub workspace: &'task Workspace, +} + +impl<'task> OutputArchiver<'task> { + pub async fn archive(&self, hash: &str) -> miette::Result> { + if !self.is_archivable()? { + return Ok(None); + } + + // Check that outputs actually exist + if !self.has_outputs_been_created(false)? { + return Err(TaskRunnerError::MissingOutputs { + target: self.task.target.to_string(), + } + .into()); + } + + // If so, create and pack the archive! 
+        let archive_file = self.workspace.cache_engine.hash.get_archive_path(hash);
+
+        if !archive_file.exists() {
+            if !self.workspace.cache_engine.get_mode().is_writable() {
+                debug!(
+                    task = self.task.target.as_str(),
+                    hash, "Cache is not writable, skipping output archiving"
+                );
+
+                return Ok(None);
+            }
+
+            debug!(
+                task = self.task.target.as_str(),
+                hash, "Archiving task outputs from project"
+            );
+
+            self.create_local_archive(hash, &archive_file)?;
+
+            if archive_file.exists() {
+                self.upload_to_remote_storage(hash, &archive_file).await?;
+            }
+        }
+
+        Ok(Some(archive_file))
+    }
+
+    pub fn is_archivable(&self) -> miette::Result<bool> {
+        let task = self.task;
+
+        if task.is_build_type() {
+            return Ok(true);
+        }
+
+        for target in &self.workspace.config.runner.archivable_targets {
+            let is_matching_task = task.target.task_id == target.task_id;
+
+            match &target.scope {
+                TargetScope::All => {
+                    if is_matching_task {
+                        return Ok(true);
+                    }
+                }
+                TargetScope::Project(project_locator) => {
+                    if let Some(owner_id) = task.target.get_project_id() {
+                        if owner_id == project_locator && is_matching_task {
+                            return Ok(true);
+                        }
+                    }
+                }
+                TargetScope::Tag(tag_id) => {
+                    if self.project_config.tags.contains(tag_id) && is_matching_task {
+                        return Ok(true);
+                    }
+                }
+                TargetScope::Deps => return Err(TargetError::NoDepsInRunContext.into()),
+                TargetScope::OwnSelf => return Err(TargetError::NoSelfInRunContext.into()),
+            };
+        }
+
+        Ok(false)
+    }
+
+    pub fn has_outputs_been_created(&self, bypass_globs: bool) -> miette::Result<bool> {
+        let has_globs = !self.task.output_globs.is_empty();
+        let all_negated_globs = self
+            .task
+            .output_globs
+            .iter()
+            .all(|glob| glob.as_str().starts_with('!'));
+
+        // If using globs, we have no way to truly determine if all outputs
+        // exist on the current file system, so always hydrate...
+ if bypass_globs && has_globs && !all_negated_globs { + return Ok(false); + } + + // Check paths first since they are literal + for output in &self.task.output_files { + if !output.to_path(&self.workspace.root).exists() { + return Ok(false); + } + } + + // Check globs last, as they are costly! + // If all globs are negated, then the empty check will always + // fail, resulting in archives not being created + if has_globs && !all_negated_globs { + let outputs = glob::walk_files(&self.workspace.root, &self.task.output_globs)?; + + return Ok(!outputs.is_empty()); + } + + Ok(true) + } + + pub fn create_local_archive(&self, hash: &str, archive_file: &Path) -> miette::Result<()> { + debug!( + task = self.task.target.as_str(), + hash, + archive_file = ?archive_file, "Creating archive file" + ); + + // Create the archiver instance based on task outputs + let mut archive = Archiver::new(&self.workspace.root, archive_file); + + for output_file in &self.task.output_files { + archive.add_source_file(output_file.as_str(), None); + } + + for output_glob in &self.task.output_globs { + archive.add_source_glob(output_glob.as_str()); + } + + // Also include stdout/stderr logs in the tarball + let state_dir = self + .workspace + .cache_engine + .state + .get_target_dir(&self.task.target); + + archive.add_source_file(state_dir.join("stdout.log"), None); + + archive.add_source_file(state_dir.join("stderr.log"), None); + + // Pack the archive + if let Err(error) = archive.pack(TarPacker::new_gz) { + warn!( + task = self.task.target.as_str(), + hash, + archive_file = ?archive_file, + "Failed to package outputs into archive: {}", + color::muted_light(error.to_string()), + ); + } + + Ok(()) + } + + pub async fn upload_to_remote_storage( + &self, + hash: &str, + archive_file: &Path, + ) -> miette::Result<()> { + if let Some(moonbase) = &self.workspace.session { + moonbase + .upload_artifact_to_remote_storage(hash, archive_file, &self.task.target.id) + .await?; + } + + Ok(()) + } +} diff 
--git a/nextgen/task-runner/src/output_hydrater.rs b/nextgen/task-runner/src/output_hydrater.rs
new file mode 100644
index 00000000000..9c153241e78
--- /dev/null
+++ b/nextgen/task-runner/src/output_hydrater.rs
@@ -0,0 +1,110 @@
+use moon_common::color;
+use moon_task::Task;
+use moon_workspace::Workspace;
+use starbase_archive::tar::TarUnpacker;
+use starbase_archive::Archiver;
+use starbase_utils::fs;
+use std::path::Path;
+use tracing::{debug, warn};
+
+#[derive(Clone, Copy, Debug, PartialEq)]
+pub enum HydrateFrom {
+    LocalCache,
+    PreviousOutput,
+    RemoteCache,
+}
+
+pub struct OutputHydrater<'task> {
+    pub task: &'task Task,
+    pub workspace: &'task Workspace,
+}
+
+impl<'task> OutputHydrater<'task> {
+    pub async fn hydrate(&self, hash: &str, from: HydrateFrom) -> miette::Result<bool> {
+        // Only hydrate when the hash is different from the previous build,
+        // as we can assume the outputs from the previous build still exist?
+        if matches!(from, HydrateFrom::PreviousOutput) {
+            return Ok(true);
+        }
+
+        let archive_file = self.workspace.cache_engine.hash.get_archive_path(hash);
+
+        if self.workspace.cache_engine.get_mode().is_readable() {
+            debug!(
+                task = self.task.target.as_str(),
+                hash, "Hydrating cached outputs into project"
+            );
+
+            // Attempt to download from remote cache to `.moon/outputs/`
+            if !archive_file.exists() && matches!(from, HydrateFrom::RemoteCache) {
+                self.download_from_remote_storage(hash, &archive_file)
+                    .await?;
+            }
+
+            // Otherwise hydrate the cached archive into the task's outputs
+            if archive_file.exists() {
+                self.unpack_local_archive(hash, &archive_file)?;
+
+                return Ok(true);
+            }
+        } else {
+            debug!(
+                task = self.task.target.as_str(),
+                hash, "Cache is not readable, skipping output hydration"
+            );
+        }
+
+        Ok(false)
+    }
+
+    pub fn unpack_local_archive(&self, hash: &str, archive_file: &Path) -> miette::Result<bool> {
+        debug!(
+            task = self.task.target.as_str(),
+            hash,
+            archive_file = ?archive_file, "Unpacking archive into project"
+        );
+
+
// Create the archiver instance based on task outputs + let mut archive = Archiver::new(&self.workspace.root, archive_file); + + for output_file in &self.task.output_files { + archive.add_source_file(output_file.as_str(), None); + } + + for output_glob in &self.task.output_globs { + archive.add_source_glob(output_glob.as_str()); + } + + // Unpack the archive + if let Err(error) = archive.unpack(TarUnpacker::new_gz) { + warn!( + task = self.task.target.as_str(), + hash, + archive_file = ?archive_file, + "Failed to hydrate outputs from archive: {}", + color::muted_light(error.to_string()), + ); + + // Delete target outputs to ensure a clean slate + for output in &self.task.output_files { + fs::remove_file(output.to_logical_path(&self.workspace.root))?; + } + } + + Ok(true) + } + + pub async fn download_from_remote_storage( + &self, + hash: &str, + archive_file: &Path, + ) -> miette::Result<()> { + if let Some(moonbase) = &self.workspace.session { + moonbase + .download_artifact_from_remote_storage(hash, archive_file) + .await?; + } + + Ok(()) + } +} diff --git a/nextgen/task-runner/src/run_state.rs b/nextgen/task-runner/src/run_state.rs new file mode 100644 index 00000000000..bc737cc87aa --- /dev/null +++ b/nextgen/task-runner/src/run_state.rs @@ -0,0 +1,10 @@ +use moon_cache_item::cache_item; + +cache_item!( + pub struct TaskRunState { + pub exit_code: i32, + pub hash: String, + pub last_run_time: u128, + pub target: String, + } +); diff --git a/nextgen/task-runner/src/task_runner.rs b/nextgen/task-runner/src/task_runner.rs new file mode 100644 index 00000000000..72e7067163d --- /dev/null +++ b/nextgen/task-runner/src/task_runner.rs @@ -0,0 +1,651 @@ +use crate::command_builder::CommandBuilder; +use crate::command_executor::CommandExecutor; +use crate::output_archiver::OutputArchiver; +use crate::output_hydrater::{HydrateFrom, OutputHydrater}; +use crate::run_state::TaskRunState; +use crate::task_runner_error::TaskRunnerError; +use moon_action::{ActionNode, 
ActionStatus, Operation, OperationList, OperationType};
+use moon_action_context::{ActionContext, TargetState};
+use moon_cache::CacheItem;
+use moon_console::{Console, TaskReportItem};
+use moon_platform::PlatformManager;
+use moon_process::ProcessError;
+use moon_project::Project;
+use moon_task::Task;
+use moon_task_hasher::TaskHasher;
+use moon_time::now_millis;
+use moon_workspace::Workspace;
+use starbase_utils::fs;
+use std::collections::BTreeMap;
+use std::sync::Arc;
+use tracing::{debug, trace};
+
+#[derive(Debug)]
+pub struct TaskRunResult {
+    pub hash: Option<String>,
+    pub operations: OperationList,
+}
+
+pub struct TaskRunner<'task> {
+    project: &'task Project,
+    pub task: &'task Task,
+    workspace: &'task Workspace,
+    platform_manager: &'task PlatformManager,
+
+    archiver: OutputArchiver<'task>,
+    console: Arc<Console>,
+    hydrater: OutputHydrater<'task>,
+
+    // Public for testing
+    pub cache: CacheItem<TaskRunState>,
+    pub operations: OperationList,
+}
+
+impl<'task> TaskRunner<'task> {
+    pub fn new(
+        workspace: &'task Workspace,
+        project: &'task Project,
+        task: &'task Task,
+        console: Arc<Console>,
+    ) -> miette::Result<Self> {
+        debug!(
+            task = task.target.as_str(),
+            "Creating a task runner for target"
+        );
+
+        let mut cache = workspace
+            .cache_engine
+            .state
+            .load_target_state::<TaskRunState>(&task.target)?;
+
+        if cache.data.target.is_empty() {
+            cache.data.target = task.target.to_string();
+        }
+
+        Ok(Self {
+            cache,
+            console,
+            archiver: OutputArchiver {
+                project_config: &project.config,
+                task,
+                workspace,
+            },
+            hydrater: OutputHydrater { task, workspace },
+            platform_manager: PlatformManager::read(),
+            project,
+            task,
+            workspace,
+            operations: OperationList::default(),
+        })
+    }
+
+    pub fn set_platform_manager(&mut self, manager: &'task PlatformManager) {
+        self.platform_manager = manager;
+    }
+
+    async fn internal_run(
+        &mut self,
+        context: &ActionContext,
+        node: &ActionNode,
+    ) -> miette::Result<Option<String>> {
+        // If a dependency has failed or been skipped, we should skip this task
+        if
!self.is_dependencies_complete(context)? {
+            self.skip(context)?;
+
+            return Ok(None);
+        }
+
+        // If the task is a no-operation, we should exit early
+        if self.task.is_no_op() {
+            self.skip_noop(context)?;
+
+            return Ok(None);
+        }
+
+        // If cache is enabled, then generate a hash and manage outputs
+        if self.is_cache_enabled() {
+            debug!(
+                task = self.task.target.as_str(),
+                "Caching is enabled for task, will generate a hash and manage outputs"
+            );
+
+            let hash = self.generate_hash(context, node).await?;
+
+            // Exit early if this build has already been cached/hashed
+            if self.hydrate(context, &hash).await? {
+                return Ok(Some(hash));
+            }
+
+            // Otherwise build and execute the command as a child process
+            self.execute(context, node, Some(&hash)).await?;
+
+            // If we created outputs, archive them into the cache
+            self.archive(&hash).await?;
+
+            return Ok(Some(hash));
+        }
+
+        debug!(
+            task = self.task.target.as_str(),
+            "Caching is disabled for task, will not generate a hash, and will attempt to run a command as normal"
+        );
+
+        // Otherwise build and execute the command as a child process
+        self.execute(context, node, None).await?;
+
+        Ok(None)
+    }
+
+    pub async fn run(
+        &mut self,
+        context: &ActionContext,
+        node: &ActionNode,
+    ) -> miette::Result<TaskRunResult> {
+        let result = self.internal_run(context, node).await;
+
+        self.cache.data.last_run_time = now_millis();
+        self.cache.save()?;
+
+        // We lose the attempt state here, is that ok?
+        let mut item = TaskReportItem::default();
+
+        match result {
+            Ok(maybe_hash) => {
+                item.hash = maybe_hash.clone();
+
+                self.console.reporter.on_task_completed(
+                    &self.task.target,
+                    &self.operations,
+                    &item,
+                    None,
+                )?;
+
+                Ok(TaskRunResult {
+                    hash: maybe_hash,
+                    operations: self.operations.take(),
+                })
+            }
+            Err(error) => {
+                self.console.reporter.on_task_completed(
+                    &self.task.target,
+                    &self.operations,
+                    &item,
+                    Some(&error),
+                )?;
+
+                Err(error)
+            }
+        }
+    }
+
+    pub async fn is_cached(&mut self, hash: &str) -> miette::Result<Option<HydrateFrom>> {
+        let cache_engine = &self.workspace.cache_engine;
+
+        debug!(
+            task = self.task.target.as_str(),
+            hash, "Checking if task has been cached using hash"
+        );
+
+        // If hash is the same as the previous build, we can simply abort!
+        // However, ensure the outputs also exist, otherwise we should hydrate
+        if self.cache.data.exit_code == 0
+            && self.cache.data.hash == hash
+            && self.archiver.has_outputs_been_created(true)?
+        {
+            debug!(
+                task = self.task.target.as_str(),
+                hash, "Hash matches previous run, reusing existing outputs"
+            );
+
+            return Ok(Some(HydrateFrom::PreviousOutput));
+        }
+
+        if !cache_engine.get_mode().is_readable() {
+            debug!(
+                task = self.task.target.as_str(),
+                hash, "Cache is not readable, continuing run"
+            );
+
+            return Ok(None);
+        }
+
+        // Set this *after* we checked the previous outputs above
+        self.cache.data.hash = hash.to_owned();
+
+        // If the previous run was a failure, avoid hydrating
+        if self.cache.data.exit_code > 0 {
+            debug!(
+                task = self.task.target.as_str(),
+                hash, "Previous run failed, avoiding hydration"
+            );
+
+            return Ok(None);
+        }
+
+        // Check to see if a build with the provided hash has been cached locally.
+        // We only check for the archive, as the manifest is purely for local debugging!
+        let archive_file = cache_engine.hash.get_archive_path(hash);
+
+        if archive_file.exists() {
+            debug!(
+                task = self.task.target.as_str(),
+                hash,
+                archive_file = ?archive_file,
+                "Cache hit in local cache, will reuse existing archive"
+            );
+
+            return Ok(Some(HydrateFrom::LocalCache));
+        }
+
+        // Check if archive exists in moonbase (remote storage) by querying the artifacts
+        // endpoint. This only checks that the database record exists!
+        if let Some(moonbase) = &self.workspace.session {
+            if let Some((artifact, _)) = moonbase.read_artifact(hash).await? {
+                debug!(
+                    task = self.task.target.as_str(),
+                    hash,
+                    artifact_id = artifact.id,
+                    "Cache hit in remote cache, will attempt to download the archive"
+                );
+
+                return Ok(Some(HydrateFrom::RemoteCache));
+            }
+        }
+
+        debug!(
+            task = self.task.target.as_str(),
+            hash, "Cache miss, continuing run"
+        );
+
+        Ok(None)
+    }
+
+    pub fn is_cache_enabled(&self) -> bool {
+        // If the VCS root does not exist (like in a Docker container),
+        // we should avoid failing and simply disable caching
+        self.task.options.cache && self.workspace.vcs.is_enabled()
+    }
+
+    pub fn is_dependencies_complete(&self, context: &ActionContext) -> miette::Result<bool> {
+        if self.task.deps.is_empty() {
+            return Ok(true);
+        }
+
+        for dep in &self.task.deps {
+            if let Some(dep_state) = context.target_states.get(&dep.target) {
+                if dep_state.get().is_complete() {
+                    continue;
+                }
+
+                debug!(
+                    task = self.task.target.as_str(),
+                    dependency = dep.target.as_str(),
+                    "Task dependency has failed or has been skipped, skipping this task",
+                );
+
+                return Ok(false);
+            } else {
+                return Err(TaskRunnerError::MissingDependencyHash {
+                    dep_target: dep.target.id.to_owned(),
+                    target: self.task.target.id.to_owned(),
+                }
+                .into());
+            }
+        }
+
+        Ok(true)
+    }
+
+    pub async fn generate_hash(
+        &mut self,
+        context: &ActionContext,
+        node: &ActionNode,
+    ) -> miette::Result<String> {
+        debug!(
+            task = self.task.target.as_str(),
+            "Generating a unique hash for this task"
+        );
+
+
let mut operation = Operation::new(OperationType::HashGeneration); + let mut hasher = self.workspace.cache_engine.hash.create_hasher(node.label()); + + // Hash common fields + trace!( + task = self.task.target.as_str(), + "Including common task related fields in the hash" + ); + + let mut task_hasher = TaskHasher::new( + self.project, + self.task, + &self.workspace.vcs, + &self.workspace.root, + &self.workspace.config.hasher, + ); + + if context.should_inherit_args(&self.task.target) { + task_hasher.hash_args(&context.passthrough_args); + } + + task_hasher.hash_deps({ + let mut deps = BTreeMap::default(); + + for dep in &self.task.deps { + if let Some(entry) = context.target_states.get(&dep.target) { + match entry.get() { + TargetState::Passed(hash) => { + deps.insert(&dep.target, hash.clone()); + } + TargetState::Passthrough => { + deps.insert(&dep.target, "passthrough".into()); + } + _ => {} + }; + } + } + + deps + }); + + task_hasher.hash_inputs().await?; + + hasher.hash_content(task_hasher.hash())?; + + // Hash platform fields + trace!( + task = self.task.target.as_str(), + platform = ?self.task.platform, + "Including toolchain specific fields in the hash" + ); + + self.platform_manager + .get(self.task.platform)? 
+ .hash_run_target( + self.project, + node.get_runtime(), + &mut hasher, + &self.workspace.config.hasher, + ) + .await?; + + let hash = self.workspace.cache_engine.hash.save_manifest(hasher)?; + + operation.hash = Some(hash.clone()); + operation.finish(ActionStatus::Passed); + + self.operations.push(operation); + + debug!( + task = self.task.target.as_str(), + hash = &hash, + "Generated a unique hash" + ); + + Ok(hash) + } + + pub async fn execute( + &mut self, + context: &ActionContext, + node: &ActionNode, + hash: Option<&str>, + ) -> miette::Result<()> { + debug!( + task = self.task.target.as_str(), + "Building and executing the task command" + ); + + // Build the command from the current task + let mut builder = CommandBuilder::new(self.workspace, self.project, self.task, node); + builder.set_platform_manager(self.platform_manager); + + let command = builder.build(context).await?; + + // Execute the command and gather all attempts made + let executor = CommandExecutor::new( + self.workspace, + self.project, + self.task, + node, + self.console.clone(), + command, + ); + + let result = if let Some(mutex_name) = &self.task.options.mutex { + let mut operation = Operation::new(OperationType::MutexAcquisition); + + debug!( + task = self.task.target.as_str(), + mutex = mutex_name, + "Waiting to acquire task mutex lock" + ); + + let mutex = context.get_or_create_mutex(mutex_name); + let _guard = mutex.lock().await; + + debug!( + task = self.task.target.as_str(), + mutex = mutex_name, + "Acquired task mutex lock" + ); + + operation.finish(ActionStatus::Passed); + + self.operations.push(operation); + + // This execution is required within this block so that the + // guard above isn't immediately dropped! + executor.execute(context, hash).await? + } else { + executor.execute(context, hash).await? 
+        };
+
+        if let Some(last_attempt) = result.attempts.get_last_execution() {
+            self.save_logs(last_attempt)?;
+        }
+
+        // Update the action state based on the result
+        context.set_target_state(&self.task.target, result.run_state);
+
+        // Extract the attempts from the result
+        self.operations.merge(result.attempts);
+
+        // If the execution as a whole failed, return the error.
+        // We do this here instead of in `execute` so that we can
+        // capture the attempts and report them.
+        if let Some(result_error) = result.error {
+            return Err(result_error);
+        }
+
+        // If our last task execution was a failure, return a hard error
+        if let Some(last_attempt) = self.operations.get_last_execution() {
+            if last_attempt.has_failed() {
+                return Err(TaskRunnerError::RunFailed {
+                    target: self.task.target.to_string(),
+                    error: ProcessError::ExitNonZero {
+                        bin: self.task.command.clone(),
+                        code: last_attempt.get_exit_code(),
+                    },
+                }
+                .into());
+            }
+        }
+
+        Ok(())
+    }
+
+    pub fn skip(&mut self, context: &ActionContext) -> miette::Result<()> {
+        debug!(task = self.task.target.as_str(), "Skipping task");
+
+        self.operations.push(Operation::new_finished(
+            OperationType::TaskExecution,
+            ActionStatus::Skipped,
+        ));
+
+        context.set_target_state(&self.task.target, TargetState::Skipped);
+
+        Ok(())
+    }
+
+    pub fn skip_noop(&mut self, context: &ActionContext) -> miette::Result<()> {
+        debug!(
+            task = self.task.target.as_str(),
+            "Skipping task as it's a no-operation"
+        );
+
+        self.operations.push(Operation::new_finished(
+            OperationType::NoOperation,
+            ActionStatus::Passed,
+        ));
+
+        context.set_target_state(&self.task.target, TargetState::Passthrough);
+
+        Ok(())
+    }
+
+    pub async fn archive(&mut self, hash: &str) -> miette::Result<bool> {
+        let mut operation = Operation::new(OperationType::ArchiveCreation);
+
+        debug!(
+            task = self.task.target.as_str(),
+            "Running cache archiving operation"
+        );
+
+        let archived = match self.archiver.archive(hash).await?
{
+            Some(archive_file) => {
+                debug!(
+                    task = self.task.target.as_str(),
+                    archive_file = ?archive_file,
+                    "Ran cache archiving operation"
+                );
+
+                operation.finish(ActionStatus::Passed);
+
+                true
+            }
+            None => {
+                debug!(task = self.task.target.as_str(), "Nothing to archive");
+
+                operation.finish(ActionStatus::Skipped);
+
+                false
+            }
+        };
+
+        self.operations.push(operation);
+
+        Ok(archived)
+    }
+
+    pub async fn hydrate(&mut self, context: &ActionContext, hash: &str) -> miette::Result<bool> {
+        let mut operation = Operation::new(OperationType::OutputHydration);
+
+        debug!(
+            task = self.task.target.as_str(),
+            "Running cache hydration operation"
+        );
+
+        // Not cached
+        let Some(from) = self.is_cached(hash).await? else {
+            debug!(task = self.task.target.as_str(), "Nothing to hydrate");
+
+            operation.finish(ActionStatus::Skipped);
+
+            self.operations.push(operation);
+
+            return Ok(false);
+        };
+
+        // Did not hydrate
+        if !self.hydrater.hydrate(hash, from).await? {
+            debug!(task = self.task.target.as_str(), "Did not hydrate");
+
+            operation.finish(ActionStatus::Invalid);
+
+            self.operations.push(operation);
+
+            return Ok(false);
+        }
+
+        // Did hydrate
+        debug!(
+            task = self.task.target.as_str(),
+            "Ran cache hydration operation"
+        );
+
+        self.load_logs(&mut operation)?;
+
+        operation.finish(match from {
+            HydrateFrom::RemoteCache => ActionStatus::CachedFromRemote,
+            _ => ActionStatus::Cached,
+        });
+
+        context.set_target_state(&self.task.target, TargetState::Passed(hash.to_owned()));
+
+        self.operations.push(operation);
+
+        Ok(true)
+    }
+
+    fn load_logs(&self, operation: &mut Operation) -> miette::Result<()> {
+        let state_dir = self
+            .workspace
+            .cache_engine
+            .state
+            .get_target_dir(&self.task.target);
+        let err_path = state_dir.join("stderr.log");
+        let out_path = state_dir.join("stdout.log");
+
+        let output = operation.output.get_or_insert(Default::default());
+
+        output.exit_code = Some(self.cache.data.exit_code);
+
+        if err_path.exists() {
+
output.set_stderr(fs::read_file(err_path)?); + } + + if out_path.exists() { + output.set_stdout(fs::read_file(out_path)?); + } + + Ok(()) + } + + fn save_logs(&mut self, operation: &Operation) -> miette::Result<()> { + let state_dir = self + .workspace + .cache_engine + .state + .get_target_dir(&self.task.target); + let err_path = state_dir.join("stderr.log"); + let out_path = state_dir.join("stdout.log"); + + if let Some(output) = &operation.output { + self.cache.data.exit_code = operation.get_exit_code(); + + fs::write_file( + err_path, + output + .stderr + .as_ref() + .map(|log| log.as_bytes()) + .unwrap_or_default(), + )?; + + fs::write_file( + out_path, + output + .stdout + .as_ref() + .map(|log| log.as_bytes()) + .unwrap_or_default(), + )?; + } else { + // Ensure logs from a previous run are removed + fs::remove_file(err_path)?; + fs::remove_file(out_path)?; + } + + Ok(()) + } +} diff --git a/crates/core/runner/src/errors.rs b/nextgen/task-runner/src/task_runner_error.rs similarity index 52% rename from crates/core/runner/src/errors.rs rename to nextgen/task-runner/src/task_runner_error.rs index 8845106a403..2a92bfc6e0c 100644 --- a/crates/core/runner/src/errors.rs +++ b/nextgen/task-runner/src/task_runner_error.rs @@ -1,32 +1,30 @@ use miette::Diagnostic; +use moon_common::{Style, Stylize}; use moon_process::ProcessError; -use starbase_styles::{Style, Stylize}; use thiserror::Error; #[derive(Error, Debug, Diagnostic)] -pub enum RunnerError { +pub enum TaskRunnerError { #[diagnostic(code(task_runner::run_failed))] #[error( - "Task {} failed to run. 
View hash details with {}.", + "Task {} failed to run.", .target.style(Style::Label), - .query.style(Style::Shell), )] RunFailed { target: String, - query: String, #[source] error: ProcessError, }, - #[diagnostic(code(task_runner::missing_dep_hash))] + #[diagnostic(code(task_runner::missing_dependency_hash))] #[error( "Encountered a missing hash for target {}, which is a dependency of {}.\nThis either means the dependency hasn't ran, has failed, or there's a misconfiguration.\n\nTry disabling the target's cache, or marking it as local.", - .0.style(Style::Label), - .1.style(Style::Label), + .dep_target.style(Style::Label), + .target.style(Style::Label), )] - MissingDependencyHash(String, String), + MissingDependencyHash { dep_target: String, target: String }, - #[diagnostic(code(task_runner::missing_output))] - #[error("Target {} defines outputs, but none exist after being ran.", .0.style(Style::Label))] - MissingOutput(String), + #[diagnostic(code(task_runner::missing_outputs))] + #[error("Task {} defines outputs, but none exist after being ran.", .target.style(Style::Label))] + MissingOutputs { target: String }, } diff --git a/nextgen/task-runner/tests/__fixtures__/archive/.moon/workspace.yml b/nextgen/task-runner/tests/__fixtures__/archive/.moon/workspace.yml new file mode 100644 index 00000000000..60896f17c4d --- /dev/null +++ b/nextgen/task-runner/tests/__fixtures__/archive/.moon/workspace.yml @@ -0,0 +1,2 @@ +projects: + - 'project' diff --git a/nextgen/task-runner/tests/__fixtures__/archive/project/moon.yml b/nextgen/task-runner/tests/__fixtures__/archive/project/moon.yml new file mode 100644 index 00000000000..8914ac9a2e2 --- /dev/null +++ b/nextgen/task-runner/tests/__fixtures__/archive/project/moon.yml @@ -0,0 +1,89 @@ +tags: ['cache'] + +tasks: + no-outputs: + command: noop + + file-outputs: + command: noop + outputs: + - 'file.txt' + + file-outputs-negated: + command: noop + outputs: + - 'a.txt' + - '!b.txt' + - 'c.txt' + + glob-outputs: + command: noop 
+ outputs: + - '*.txt' + + glob-outputs-negated: + command: noop + outputs: + - '*.txt' + - '!b.txt' + + negated-outputs-only: + command: noop + outputs: + - '!*.txt' + + build-type: + command: noop + type: build + outputs: + - '*.txt' + + run-type: + command: noop + type: run + + test-type: + command: noop + type: test + + output-one-file: + command: noop + outputs: + - 'file.txt' + + output-many-files: + command: noop + outputs: + - 'a.txt' + - 'b.txt' + - 'c.txt' + + output-one-dir: + command: noop + outputs: + - 'dir' + + output-many-dirs: + command: noop + outputs: + - 'a' + - 'b/**/*' + - 'c' + + output-file-and-dir: + command: noop + outputs: + - 'file.txt' + - 'dir' + + output-workspace: + command: noop + outputs: + - '/root.txt' + - '/shared/*.txt' + + output-workspace-and-project: + command: noop + outputs: + - '/root.txt' + - 'file.txt' diff --git a/nextgen/task-runner/tests/__fixtures__/builder/.moon/workspace.yml b/nextgen/task-runner/tests/__fixtures__/builder/.moon/workspace.yml new file mode 100644 index 00000000000..60896f17c4d --- /dev/null +++ b/nextgen/task-runner/tests/__fixtures__/builder/.moon/workspace.yml @@ -0,0 +1,2 @@ +projects: + - 'project' diff --git a/nextgen/task-runner/tests/__fixtures__/builder/project/input.txt b/nextgen/task-runner/tests/__fixtures__/builder/project/input.txt new file mode 100644 index 00000000000..e69de29bb2d diff --git a/nextgen/task-runner/tests/__fixtures__/builder/project/moon.yml b/nextgen/task-runner/tests/__fixtures__/builder/project/moon.yml new file mode 100644 index 00000000000..34558271915 --- /dev/null +++ b/nextgen/task-runner/tests/__fixtures__/builder/project/moon.yml @@ -0,0 +1,9 @@ +tasks: + base: + command: 'build arg --opt' + platform: system + env: + KEY: value + inputs: + - 'input.txt' + - 'file.*' diff --git a/nextgen/task-runner/tests/__fixtures__/runner/.moon/workspace.yml b/nextgen/task-runner/tests/__fixtures__/runner/.moon/workspace.yml new file mode 100644 index 
00000000000..5af4ad746e0 --- /dev/null +++ b/nextgen/task-runner/tests/__fixtures__/runner/.moon/workspace.yml @@ -0,0 +1,4 @@ +projects: + - 'project' + - 'unix' + - 'windows' diff --git a/nextgen/task-runner/tests/__fixtures__/runner/project/moon.yml b/nextgen/task-runner/tests/__fixtures__/runner/project/moon.yml new file mode 100644 index 00000000000..b3cfc3b078f --- /dev/null +++ b/nextgen/task-runner/tests/__fixtures__/runner/project/moon.yml @@ -0,0 +1,24 @@ +tasks: + base: + command: noop + + outputs: + command: noop + outputs: + - 'file.txt' + + dep: + command: noop + + has-deps: + command: noop + deps: + - 'dep' + + no-deps: + command: noop + + no-cache: + command: noop + options: + cache: false diff --git a/nextgen/task-runner/tests/__fixtures__/runner/unix/moon.yml b/nextgen/task-runner/tests/__fixtures__/runner/unix/moon.yml new file mode 100644 index 00000000000..39c19ba8656 --- /dev/null +++ b/nextgen/task-runner/tests/__fixtures__/runner/unix/moon.yml @@ -0,0 +1,52 @@ +tasks: + success: + command: 'echo "test"' + platform: system + options: + shell: true + + failure: + command: 'exit 1' + platform: system + options: + shell: true + + retry: + command: 'exit 1' + platform: system + options: + shell: true + retryCount: 3 + + create-file: + command: 'touch file.txt' + outputs: + - 'file.txt' + platform: system + options: + shell: true + + with-mutex: + extends: success + options: + mutex: lock + + without-cache: + extends: success + options: + cache: false + + hash-inputs: + extends: success + inputs: + - '*.txt' + + missing-output: + extends: success + outputs: + - 'file.txt' + + missing-output-glob: + extends: success + outputs: + - '*.txt' diff --git a/nextgen/task-runner/tests/__fixtures__/runner/windows/moon.yml b/nextgen/task-runner/tests/__fixtures__/runner/windows/moon.yml new file mode 100644 index 00000000000..fb653dbfc11 --- /dev/null +++ b/nextgen/task-runner/tests/__fixtures__/runner/windows/moon.yml @@ -0,0 +1,52 @@ +tasks: + success: + 
command: 'Write-Output "test"' + platform: system + options: + shell: true + + failure: + command: 'Exit 1' + platform: system + options: + shell: true + + retry: + command: 'Exit 1' + platform: system + options: + shell: true + retryCount: 3 + + create-file: + command: 'New-Item file.txt' + outputs: + - 'file.txt' + platform: system + options: + shell: true + + with-mutex: + extends: success + options: + mutex: lock + + without-cache: + extends: success + options: + cache: false + + hash-inputs: + extends: success + inputs: + - '*.txt' + + missing-output: + extends: success + outputs: + - 'file.txt' + + missing-output-glob: + extends: success + outputs: + - '*.txt' diff --git a/nextgen/task-runner/tests/command_builder_test.rs b/nextgen/task-runner/tests/command_builder_test.rs new file mode 100644 index 00000000000..4c19841a38d --- /dev/null +++ b/nextgen/task-runner/tests/command_builder_test.rs @@ -0,0 +1,371 @@ +#![allow(clippy::field_reassign_with_default)] + +mod utils; + +use moon_action::ActionNode; +use moon_action_context::ActionContext; +use moon_config::TaskOptionAffectedFiles; +use moon_process::Command; +use moon_task::{Target, TargetLocator}; +use std::ffi::OsString; +use utils::*; + +fn get_env<'a>(command: &'a Command, key: &str) -> Option<&'a str> { + command + .env + .get(&OsString::from(key)) + .map(|v| v.to_str().unwrap()) +} + +fn get_args(command: &Command) -> Vec<&str> { + command + .args + .iter() + .map(|arg| arg.to_str().unwrap()) + .collect() +} + +mod command_builder { + use super::*; + + #[tokio::test] + async fn sets_cwd_to_project_root() { + let container = TaskRunnerContainer::new("builder").await; + let command = container.create_command(ActionContext::default()).await; + + assert_eq!(command.cwd, Some(container.sandbox.path().join("project"))); + } + + #[tokio::test] + async fn sets_cwd_to_workspace_root() { + let container = TaskRunnerContainer::new("builder").await; + let command = container + 
.create_command_with_config(ActionContext::default(), |task, _| { + task.options.run_from_workspace_root = true; + }) + .await; + + assert_eq!(command.cwd, Some(container.sandbox.path().to_path_buf())); + } + + mod args { + use super::*; + + #[tokio::test] + async fn inherits_task_args() { + let container = TaskRunnerContainer::new("builder").await; + let command = container.create_command(ActionContext::default()).await; + + assert_eq!(get_args(&command), vec!["arg", "--opt"]); + } + + #[tokio::test] + async fn inherits_when_a_task_dep() { + let container = TaskRunnerContainer::new("builder").await; + let command = container + .create_command_with_config(ActionContext::default(), |_, node| { + if let ActionNode::RunTask(inner) = node { + inner.args.push("extra-arg".into()); + } + }) + .await; + + assert_eq!(get_args(&command), vec!["arg", "--opt", "extra-arg"]); + } + + #[tokio::test] + async fn inherits_passthrough_args_when_a_primary_target() { + let container = TaskRunnerContainer::new("builder").await; + + let mut context = ActionContext::default(); + context.passthrough_args.push("--passthrough".into()); + context + .primary_targets + .insert(Target::new("project", "base").unwrap()); + + let command = container.create_command(context).await; + + assert_eq!(get_args(&command), vec!["arg", "--opt", "--passthrough"]); + } + + #[tokio::test] + async fn inherits_passthrough_args_when_an_all_initial_target() { + let container = TaskRunnerContainer::new("builder").await; + + let mut context = ActionContext::default(); + context.passthrough_args.push("--passthrough".into()); + context + .initial_targets + .insert(TargetLocator::Qualified(Target::parse(":base").unwrap())); + + let command = container.create_command(context).await; + + assert_eq!(get_args(&command), vec!["arg", "--opt", "--passthrough"]); + } + + #[tokio::test] + async fn doesnt_inherit_passthrough_args_when_not_a_target() { + let container = TaskRunnerContainer::new("builder").await; + + let mut 
context = ActionContext::default(); + context.passthrough_args.push("--passthrough".into()); + context + .primary_targets + .insert(Target::new("other-project", "base").unwrap()); + + let command = container.create_command(context).await; + + assert_eq!(get_args(&command), vec!["arg", "--opt"]); + } + + #[tokio::test] + async fn passthrough_comes_after_node_deps() { + let container = TaskRunnerContainer::new("builder").await; + + let mut context = ActionContext::default(); + context.passthrough_args.push("--passthrough".into()); + context + .primary_targets + .insert(Target::new("project", "base").unwrap()); + + let command = container + .create_command_with_config(context, |_, node| { + if let ActionNode::RunTask(inner) = node { + inner.args.push("extra-arg".into()); + } + }) + .await; + + assert_eq!( + get_args(&command), + vec!["arg", "--opt", "extra-arg", "--passthrough"] + ); + } + } + + mod env { + use super::*; + + #[tokio::test] + async fn sets_pwd() { + let container = TaskRunnerContainer::new("builder").await; + let command = container.create_command(ActionContext::default()).await; + + assert_eq!( + get_env(&command, "PWD").unwrap(), + container.sandbox.path().join("project").to_str().unwrap() + ); + } + + #[tokio::test] + async fn inherits_task_env() { + let container = TaskRunnerContainer::new("builder").await; + let command = container.create_command(ActionContext::default()).await; + + assert_eq!(get_env(&command, "KEY").unwrap(), "value"); + } + + #[tokio::test] + async fn inherits_when_a_task_dep() { + let container = TaskRunnerContainer::new("builder").await; + let command = container + .create_command_with_config(ActionContext::default(), |_, node| { + if let ActionNode::RunTask(inner) = node { + inner.env.insert("ANOTHER".into(), "value".into()); + } + }) + .await; + + assert_eq!(get_env(&command, "ANOTHER").unwrap(), "value"); + } + + #[tokio::test] + async fn can_overwrite_env_via_task_dep() { + let container = 
TaskRunnerContainer::new("builder").await; + let command = container + .create_command_with_config(ActionContext::default(), |_, node| { + if let ActionNode::RunTask(inner) = node { + inner.env.insert("KEY".into(), "overwritten".into()); + } + }) + .await; + + assert_eq!(get_env(&command, "KEY").unwrap(), "overwritten"); + } + + #[tokio::test] + async fn cannot_overwrite_built_in_env() { + let container = TaskRunnerContainer::new("builder").await; + let command = container + .create_command_with_config(ActionContext::default(), |_, node| { + if let ActionNode::RunTask(inner) = node { + inner.env.insert("PWD".into(), "overwritten".into()); + inner + .env + .insert("MOON_PROJECT_ID".into(), "overwritten".into()); + inner + .env + .insert("PROTO_VERSION".into(), "overwritten".into()); + } + }) + .await; + + assert_ne!(get_env(&command, "PWD").unwrap(), "overwritten"); + assert_ne!(get_env(&command, "MOON_PROJECT_ID").unwrap(), "overwritten"); + assert_ne!(get_env(&command, "PROTO_VERSION").unwrap(), "overwritten"); + } + } + + mod shell { + use super::*; + + #[tokio::test] + async fn uses_a_shell_by_default_for_system_task() { + let container = TaskRunnerContainer::new("builder").await; + let command = container.create_command(ActionContext::default()).await; + + assert!(command.shell.is_some()); + } + + #[tokio::test] + async fn sets_default_shell() { + let container = TaskRunnerContainer::new("builder").await; + let command = container + .create_command_with_config(ActionContext::default(), |task, _| { + task.options.shell = Some(true); + }) + .await; + + assert!(command.shell.is_some()); + } + + #[cfg(unix)] + #[tokio::test] + async fn can_set_unix_shell() { + let container = TaskRunnerContainer::new("builder").await; + let command = container + .create_command_with_config(ActionContext::default(), |task, _| { + task.options.shell = Some(true); + task.options.unix_shell = Some(moon_config::TaskUnixShell::Elvish); + }) + .await; + + 
assert!(command.shell.unwrap().bin.to_string_lossy().contains("elv")); + } + + #[cfg(windows)] + #[tokio::test] + async fn can_set_windows_shell() { + let container = TaskRunnerContainer::new("builder").await; + let command = container + .create_command_with_config(ActionContext::default(), |task, _| { + task.options.shell = Some(true); + task.options.windows_shell = Some(moon_config::TaskWindowsShell::Bash); + }) + .await; + + assert!(command + .shell + .unwrap() + .bin + .to_string_lossy() + .contains("bash")); + } + } + + mod affected { + use super::*; + + #[tokio::test] + async fn does_nothing_if_option_not_set() { + let container = TaskRunnerContainer::new("builder").await; + let command = container.create_command(ActionContext::default()).await; + + assert!(get_env(&command, "MOON_AFFECTED_FILES").is_none()); + } + + #[tokio::test] + async fn includes_touched_in_args() { + let container = TaskRunnerContainer::new("builder").await; + + let mut context = ActionContext::default(); + context.affected_only = true; + context.touched_files.insert("project/file.txt".into()); + + let command = container + .create_command_with_config(context, |task, _| { + task.options.affected_files = Some(TaskOptionAffectedFiles::Args); + }) + .await; + + assert_eq!(get_args(&command), vec!["arg", "--opt", "./file.txt"]); + } + + #[tokio::test] + async fn fallsback_to_dot_in_args_when_no_match() { + let container = TaskRunnerContainer::new("builder").await; + + let mut context = ActionContext::default(); + context.affected_only = true; + context.touched_files.insert("project/other.txt".into()); + + let command = container + .create_command_with_config(context, |task, _| { + task.options.affected_files = Some(TaskOptionAffectedFiles::Args); + }) + .await; + + assert_eq!(get_args(&command), vec!["arg", "--opt", "."]); + } + + #[tokio::test] + async fn includes_touched_in_env() { + let container = TaskRunnerContainer::new("builder").await; + + let mut context = ActionContext::default(); 
+ context.affected_only = true; + context.touched_files.insert("project/file.txt".into()); + + let command = container + .create_command_with_config(context, |task, _| { + task.options.affected_files = Some(TaskOptionAffectedFiles::Env); + }) + .await; + + assert_eq!( + get_env(&command, "MOON_AFFECTED_FILES").unwrap(), + "file.txt" + ); + } + + #[tokio::test] + async fn fallsback_to_dot_in_env_when_no_match() { + let container = TaskRunnerContainer::new("builder").await; + + let mut context = ActionContext::default(); + context.affected_only = true; + context.touched_files.insert("project/other.txt".into()); + + let command = container + .create_command_with_config(context, |task, _| { + task.options.affected_files = Some(TaskOptionAffectedFiles::Env); + }) + .await; + + assert_eq!(get_env(&command, "MOON_AFFECTED_FILES").unwrap(), "."); + } + + #[tokio::test] + async fn can_use_inputs_directly_when_not_affected() { + let container = TaskRunnerContainer::new("builder").await; + let command = container + .create_command_with_config(ActionContext::default(), |task, _| { + task.options.affected_files = Some(TaskOptionAffectedFiles::Args); + task.options.affected_pass_inputs = true; + }) + .await; + + assert_eq!(get_args(&command), vec!["arg", "--opt", "./input.txt"]); + } + } +} diff --git a/nextgen/task-runner/tests/command_executor_test.rs b/nextgen/task-runner/tests/command_executor_test.rs new file mode 100644 index 00000000000..609e1cbc6dc --- /dev/null +++ b/nextgen/task-runner/tests/command_executor_test.rs @@ -0,0 +1,102 @@ +mod utils; + +use moon_action::{ActionStatus, OperationType}; +use moon_action_context::{ActionContext, TargetState}; +use utils::*; + +mod command_executor { + use super::*; + + #[tokio::test] + async fn returns_attempt_on_success() { + let container = TaskRunnerContainer::new_os("runner").await; + let context = ActionContext::default(); + + let result = container + .create_command_executor("success", &context) + .await + 
.execute(&context, Some("hash123")) + .await + .unwrap(); + + // Check state + assert!(result.error.is_none()); + assert_eq!(result.report_item.hash.unwrap(), "hash123"); + assert_eq!(result.report_item.attempt_current, 1); + assert_eq!(result.report_item.attempt_total, 1); + assert_eq!(result.run_state, TargetState::Passed("hash123".into())); + + // Check attempt + assert_eq!(result.attempts.len(), 1); + + let attempt = result.attempts.first().unwrap(); + let output = attempt.output.as_ref().unwrap(); + + assert_eq!(attempt.status, ActionStatus::Passed); + assert_eq!(attempt.type_of, OperationType::TaskExecution); + assert_eq!(output.exit_code.unwrap(), 0); + assert_eq!(output.stdout.as_ref().unwrap().trim(), "test"); + } + + #[tokio::test] + async fn returns_attempt_on_failure() { + let container = TaskRunnerContainer::new_os("runner").await; + let context = ActionContext::default(); + + let result = container + .create_command_executor("failure", &context) + .await + .execute(&context, Some("hash123")) + .await + .unwrap(); + + // Check state + assert!(result.error.is_none()); + assert_eq!(result.report_item.hash.unwrap(), "hash123"); + assert_eq!(result.report_item.attempt_current, 1); + assert_eq!(result.report_item.attempt_total, 1); + assert_eq!(result.run_state, TargetState::Failed); + + // Check attempt + assert_eq!(result.attempts.len(), 1); + + let attempt = result.attempts.first().unwrap(); + let output = attempt.output.as_ref().unwrap(); + + assert_eq!(attempt.status, ActionStatus::Failed); + assert_eq!(attempt.type_of, OperationType::TaskExecution); + assert_eq!(output.exit_code.unwrap(), 1); + } + + #[tokio::test] + async fn returns_attempts_for_each_retry() { + let container = TaskRunnerContainer::new_os("runner").await; + let context = ActionContext::default(); + + let result = container + .create_command_executor("retry", &context) + .await + .execute(&context, None) + .await + .unwrap(); + + // Check state + assert!(result.error.is_none()); + 
assert!(result.report_item.hash.is_none()); + assert_eq!(result.report_item.attempt_current, 4); + assert_eq!(result.report_item.attempt_total, 4); + assert_eq!(result.run_state, TargetState::Failed); + + // Check attempts + assert_eq!(result.attempts.len(), 4); + + for i in 0..4 { + let attempt = &result.attempts[i]; + let output = attempt.output.as_ref().unwrap(); + + assert_eq!(attempt.status, ActionStatus::Failed); + assert_eq!(attempt.type_of, OperationType::TaskExecution); + assert_eq!(output.exit_code.unwrap(), 1); + } + } +} diff --git a/nextgen/task-runner/tests/output_archiver_test.rs b/nextgen/task-runner/tests/output_archiver_test.rs new file mode 100644 index 00000000000..dcc37cbccf1 --- /dev/null +++ b/nextgen/task-runner/tests/output_archiver_test.rs @@ -0,0 +1,534 @@ +mod utils; + +use moon_task::Target; +use starbase_archive::Archiver; +use std::env; +use std::fs; +use std::sync::Arc; +use utils::*; + +mod output_archiver { + use super::*; + + mod pack { + use super::*; + + #[tokio::test] + async fn does_nothing_if_no_outputs_in_task() { + let container = TaskRunnerContainer::new("archive").await; + let archiver = container.create_archiver("no-outputs"); + + assert!(archiver.archive("hash123").await.unwrap().is_none()); + } + + #[tokio::test] + #[should_panic( + expected = "Task project:file-outputs defines outputs, but none exist after being ran."
+ )] + async fn errors_if_outputs_not_created() { + let container = TaskRunnerContainer::new("archive").await; + let archiver = container.create_archiver("file-outputs"); + + archiver.archive("hash123").await.unwrap(); + } + + #[tokio::test] + async fn creates_an_archive() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("file-outputs"); + + assert!(archiver.archive("hash123").await.unwrap().is_some()); + assert!(container + .sandbox + .path() + .join(".moon/cache/outputs/hash123.tar.gz") + .exists()); + } + + #[tokio::test] + async fn doesnt_create_an_archive_if_it_exists() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + container + .sandbox + .create_file(".moon/cache/outputs/hash123.tar.gz", ""); + + let archiver = container.create_archiver("file-outputs"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + + assert_eq!(fs::metadata(file).unwrap().len(), 0); + } + + #[tokio::test] + async fn doesnt_create_an_archive_if_cache_disabled() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("file-outputs"); + + env::set_var("MOON_CACHE", "off"); + + assert!(archiver.archive("hash123").await.unwrap().is_none()); + + env::remove_var("MOON_CACHE"); + } + + #[tokio::test] + async fn doesnt_create_an_archive_if_cache_read_only() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("file-outputs"); + + env::set_var("MOON_CACHE", "read"); + + assert!(archiver.archive("hash123").await.unwrap().is_none()); + + env::remove_var("MOON_CACHE"); + } + + #[tokio::test] + async fn includes_input_files_in_archive() { + let container = 
TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("file-outputs"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("project/file.txt").exists()); + } + + #[tokio::test] + async fn includes_input_globs_in_archive() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/one.txt", ""); + container.sandbox.create_file("project/two.txt", ""); + container.sandbox.create_file("project/three.txt", ""); + + let archiver = container.create_archiver("glob-outputs"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("project/one.txt").exists()); + assert!(dir.join("project/two.txt").exists()); + assert!(dir.join("project/three.txt").exists()); + } + + #[tokio::test] + async fn includes_std_logs_in_archive() { + let container = TaskRunnerContainer::new("archive").await; + container + .sandbox + .create_file(".moon/cache/states/project/file-outputs/stdout.log", "out"); + container + .sandbox + .create_file(".moon/cache/states/project/file-outputs/stderr.log", "err"); + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("file-outputs"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + let err = dir.join(".moon/cache/states/project/file-outputs/stderr.log"); + let out = dir.join(".moon/cache/states/project/file-outputs/stdout.log"); + + assert!(err.exists()); + assert!(out.exists()); + assert_eq!(fs::read_to_string(err).unwrap(), "err"); + 
assert_eq!(fs::read_to_string(out).unwrap(), "out"); + } + + #[tokio::test] + async fn can_ignore_output_files_with_negation() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/a.txt", ""); + container.sandbox.create_file("project/b.txt", ""); + container.sandbox.create_file("project/c.txt", ""); + + let archiver = container.create_archiver("file-outputs-negated"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("project/a.txt").exists()); + assert!(!dir.join("project/b.txt").exists()); + assert!(dir.join("project/c.txt").exists()); + } + + #[tokio::test] + async fn can_ignore_output_globs_with_negation() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/a.txt", ""); + container.sandbox.create_file("project/b.txt", ""); + container.sandbox.create_file("project/c.txt", ""); + + let archiver = container.create_archiver("glob-outputs-negated"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("project/a.txt").exists()); + assert!(!dir.join("project/b.txt").exists()); + assert!(dir.join("project/c.txt").exists()); + } + + #[tokio::test] + async fn caches_one_file() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("output-one-file"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("project/file.txt").exists()); + } + + #[tokio::test] + async fn caches_many_files() { + let container = 
TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/a.txt", ""); + container.sandbox.create_file("project/b.txt", ""); + container.sandbox.create_file("project/c.txt", ""); + + let archiver = container.create_archiver("output-many-files"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("project/a.txt").exists()); + assert!(dir.join("project/b.txt").exists()); + assert!(dir.join("project/c.txt").exists()); + } + + #[tokio::test] + async fn caches_one_directory() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/dir/file.txt", ""); + + let archiver = container.create_archiver("output-one-dir"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("project/dir/file.txt").exists()); + } + + #[tokio::test] + async fn caches_many_directories() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/a/file.txt", ""); + container.sandbox.create_file("project/b/file.txt", ""); + container.sandbox.create_file("project/c/file.txt", ""); + + let archiver = container.create_archiver("output-many-dirs"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("project/a/file.txt").exists()); + assert!(dir.join("project/b/file.txt").exists()); + assert!(dir.join("project/c/file.txt").exists()); + } + + #[tokio::test] + async fn caches_file_and_directory() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + 
container.sandbox.create_file("project/dir/file.txt", ""); + + let archiver = container.create_archiver("output-file-and-dir"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("project/file.txt").exists()); + assert!(dir.join("project/dir/file.txt").exists()); + } + + #[tokio::test] + async fn caches_files_from_workspace() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("root.txt", ""); + container.sandbox.create_file("shared/a.txt", ""); + container.sandbox.create_file("shared/z.txt", ""); + + let archiver = container.create_archiver("output-workspace"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("root.txt").exists()); + assert!(dir.join("shared/a.txt").exists()); + assert!(dir.join("shared/z.txt").exists()); + } + + #[tokio::test] + async fn caches_files_from_workspace_and_project() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("root.txt", ""); + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("output-workspace-and-project"); + let file = archiver.archive("hash123").await.unwrap().unwrap(); + let dir = container.sandbox.path().join("out"); + + Archiver::new(&dir, &file).unpack_from_ext().unwrap(); + + assert!(dir.join("root.txt").exists()); + assert!(dir.join("project/file.txt").exists()); + } + } + + mod is_archivable { + use super::*; + + #[tokio::test] + async fn returns_based_on_type() { + let container = TaskRunnerContainer::new("archive").await; + let archiver = container.create_archiver("build-type"); + + assert!(archiver.is_archivable().unwrap()); + + let archiver = container.create_archiver("run-type"); + + 
assert!(!archiver.is_archivable().unwrap()); + + let archiver = container.create_archiver("test-type"); + + assert!(!archiver.is_archivable().unwrap()); + } + + #[tokio::test] + async fn can_return_true_for_run_type_if_workspace_configured() { + let mut container = TaskRunnerContainer::new("archive").await; + + // Project scope + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::new("project", "run-type").unwrap()); + } + + let archiver = container.create_archiver("run-type"); + + assert!(archiver.is_archivable().unwrap()); + } + + #[tokio::test] + async fn can_return_true_for_test_type_if_workspace_configured() { + let mut container = TaskRunnerContainer::new("archive").await; + + // All scope + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::parse(":test-type").unwrap()); + } + + let archiver = container.create_archiver("test-type"); + + assert!(archiver.is_archivable().unwrap()); + } + + #[tokio::test] + async fn matches_all_config() { + let mut container = TaskRunnerContainer::new("archive").await; + + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::parse(":no-outputs").unwrap()); + } + + let archiver = container.create_archiver("no-outputs"); + + assert!(archiver.is_archivable().unwrap()); + } + + #[tokio::test] + async fn doesnt_match_all_config() { + let mut container = TaskRunnerContainer::new("archive").await; + + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::parse(":unknown-task").unwrap()); + } + + let archiver = container.create_archiver("no-outputs"); + + assert!(!archiver.is_archivable().unwrap()); + } + + #[tokio::test] + async fn matches_project_config() { + let mut container = TaskRunnerContainer::new("archive").await; + + if let 
Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::new("project", "no-outputs").unwrap()); + } + + let archiver = container.create_archiver("no-outputs"); + + assert!(archiver.is_archivable().unwrap()); + } + + #[tokio::test] + async fn doesnt_match_project_config() { + let mut container = TaskRunnerContainer::new("archive").await; + + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::new("other-project", "no-outputs").unwrap()); + } + + let archiver = container.create_archiver("no-outputs"); + + assert!(!archiver.is_archivable().unwrap()); + } + + #[tokio::test] + async fn matches_tag_config() { + let mut container = TaskRunnerContainer::new("archive").await; + + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::parse("#cache:no-outputs").unwrap()); + } + + let archiver = container.create_archiver("no-outputs"); + + assert!(archiver.is_archivable().unwrap()); + } + + #[tokio::test] + async fn doesnt_match_tag_config() { + let mut container = TaskRunnerContainer::new("archive").await; + + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::parse("#other-tag:no-outputs").unwrap()); + } + + let archiver = container.create_archiver("no-outputs"); + + assert!(!archiver.is_archivable().unwrap()); + } + + #[tokio::test] + #[should_panic(expected = "Dependencies scope (^:) is not supported in run contexts.")] + async fn errors_for_deps_config() { + let mut container = TaskRunnerContainer::new("archive").await; + + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::parse("^:no-outputs").unwrap()); + } + + let archiver = container.create_archiver("no-outputs"); + + 
assert!(!archiver.is_archivable().unwrap()); + } + + #[tokio::test] + #[should_panic(expected = "Self scope (~:) is not supported in run contexts.")] + async fn errors_for_self_config() { + let mut container = TaskRunnerContainer::new("archive").await; + + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::parse("~:no-outputs").unwrap()); + } + + let archiver = container.create_archiver("no-outputs"); + + assert!(!archiver.is_archivable().unwrap()); + } + } + + mod has_outputs { + use super::*; + + #[tokio::test] + async fn returns_false_if_no_files() { + let container = TaskRunnerContainer::new("archive").await; + let archiver = container.create_archiver("file-outputs"); + + assert!(!archiver.has_outputs_been_created(false).unwrap()); + } + + #[tokio::test] + async fn returns_true_if_files() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("file-outputs"); + + assert!(archiver.has_outputs_been_created(false).unwrap()); + } + + #[tokio::test] + async fn returns_false_if_no_globs() { + let container = TaskRunnerContainer::new("archive").await; + let archiver = container.create_archiver("glob-outputs"); + + assert!(!archiver.has_outputs_been_created(false).unwrap()); + } + + #[tokio::test] + async fn returns_true_if_globs() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("glob-outputs"); + + assert!(archiver.has_outputs_been_created(false).unwrap()); + } + + #[tokio::test] + async fn returns_true_if_only_negated_globs() { + let container = TaskRunnerContainer::new("archive").await; + container.sandbox.create_file("project/file.txt", ""); + + let archiver = container.create_archiver("negated-outputs-only"); + + 
assert!(archiver.has_outputs_been_created(false).unwrap()); + } + } +} diff --git a/nextgen/task-runner/tests/output_hydrater_test.rs b/nextgen/task-runner/tests/output_hydrater_test.rs new file mode 100644 index 00000000000..c0fb3ae1896 --- /dev/null +++ b/nextgen/task-runner/tests/output_hydrater_test.rs @@ -0,0 +1,110 @@ +mod utils; + +use moon_task_runner::output_hydrater::HydrateFrom; +use std::env; +use utils::*; + +mod output_hydrater { + use super::*; + + mod unpack { + use super::*; + + #[tokio::test] + async fn does_nothing_if_no_hash() { + let container = TaskRunnerContainer::new("archive").await; + let hydrater = container.create_hydrator("file-outputs"); + + assert!(!hydrater.hydrate("", HydrateFrom::LocalCache).await.unwrap()); + } + + #[tokio::test] + async fn does_nothing_if_from_prev_outputs() { + let container = TaskRunnerContainer::new("archive").await; + let hydrater = container.create_hydrator("file-outputs"); + + assert!(hydrater + .hydrate("hash123", HydrateFrom::PreviousOutput) + .await + .unwrap()); + } + + #[tokio::test] + async fn doesnt_unpack_if_cache_disabled() { + let container = TaskRunnerContainer::new("archive").await; + container + .sandbox + .create_file(".moon/cache/outputs/hash123.tar.gz", ""); + + let hydrater = container.create_hydrator("file-outputs"); + + env::set_var("MOON_CACHE", "off"); + + assert!(!hydrater + .hydrate("hash123", HydrateFrom::LocalCache) + .await + .unwrap()); + + env::remove_var("MOON_CACHE"); + } + + #[tokio::test] + async fn doesnt_unpack_if_cache_write_only() { + let container = TaskRunnerContainer::new("archive").await; + container + .sandbox + .create_file(".moon/cache/outputs/hash123.tar.gz", ""); + + let hydrater = container.create_hydrator("file-outputs"); + + env::set_var("MOON_CACHE", "write"); + + assert!(!hydrater + .hydrate("hash123", HydrateFrom::LocalCache) + .await + .unwrap()); + + env::remove_var("MOON_CACHE"); + } + + #[tokio::test] + async fn unpacks_archive_into_project() { + let 
container = TaskRunnerContainer::new("archive").await; + container.pack_archive("file-outputs"); + + assert!(!container.sandbox.path().join("project/file.txt").exists()); + + let hydrater = container.create_hydrator("file-outputs"); + hydrater + .hydrate("hash123", HydrateFrom::LocalCache) + .await + .unwrap(); + + assert!(container.sandbox.path().join("project/file.txt").exists()); + } + + #[tokio::test] + async fn unpacks_logs_from_archive() { + let container = TaskRunnerContainer::new("archive").await; + container.pack_archive("file-outputs"); + + assert!(!container + .sandbox + .path() + .join(".moon/cache/states/project/file-outputs/stdout.log") + .exists()); + + let hydrater = container.create_hydrator("file-outputs"); + hydrater + .hydrate("hash123", HydrateFrom::LocalCache) + .await + .unwrap(); + + assert!(container + .sandbox + .path() + .join(".moon/cache/states/project/file-outputs/stdout.log") + .exists()); + } + } +} diff --git a/nextgen/task-runner/tests/task_runner_test.rs b/nextgen/task-runner/tests/task_runner_test.rs new file mode 100644 index 00000000000..ed9356abf80 --- /dev/null +++ b/nextgen/task-runner/tests/task_runner_test.rs @@ -0,0 +1,1096 @@ +mod utils; + +use moon_action::{ActionStatus, OperationType}; +use moon_action_context::*; +use moon_task::Target; +use moon_task_runner::output_hydrater::HydrateFrom; +use moon_task_runner::TaskRunner; +use std::env; +use utils::*; + +mod task_runner { + use super::*; + + mod run { + use super::*; + + #[tokio::test] + async fn skips_if_noop() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + let node = container.create_action_node("base"); + let context = ActionContext::default(); + + runner.run(&context, &node).await.unwrap(); + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Passthrough + ); + } + + mod has_deps { + use super::*; + + #[tokio::test] + 
#[should_panic(expected = "Encountered a missing hash for target project:dep")] + async fn errors_if_dep_hasnt_ran() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("has-deps"); + let node = container.create_action_node("has-deps"); + let context = ActionContext::default(); + + runner.run(&context, &node).await.unwrap(); + } + + #[tokio::test] + async fn skips_if_dep_skipped() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("has-deps"); + let node = container.create_action_node("has-deps"); + + let context = ActionContext::default(); + context + .target_states + .insert(Target::new("project", "dep").unwrap(), TargetState::Skipped) + .unwrap(); + + runner.run(&context, &node).await.unwrap(); + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Skipped + ); + } + + #[tokio::test] + async fn skips_if_dep_failed() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("has-deps"); + let node = container.create_action_node("has-deps"); + + let context = ActionContext::default(); + context + .target_states + .insert(Target::new("project", "dep").unwrap(), TargetState::Failed) + .unwrap(); + + runner.run(&context, &node).await.unwrap(); + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Skipped + ); + } + } + + mod with_cache { + use super::*; + + #[tokio::test] + async fn creates_cache_state_file() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("create-file"); + let node = container.create_action_node("create-file"); + let context = ActionContext::default(); + + runner.run(&context, &node).await.unwrap(); + + assert!(container + .sandbox + .path() + .join(".moon/cache/states") + 
.join(container.project_id) + .join("create-file/lastRun.json") + .exists()); + } + + #[tokio::test] + async fn generates_a_hash() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("create-file"); + let node = container.create_action_node("create-file"); + let context = ActionContext::default(); + + let result = runner.run(&context, &node).await.unwrap(); + + assert!(result.hash.is_some()); + } + + #[tokio::test] + async fn generates_same_hashes_based_on_input() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("hash-inputs"); + let node = container.create_action_node("hash-inputs"); + let context = ActionContext::default(); + + container + .sandbox + .create_file(format!("{}/file.txt", container.project_id), "same"); + + let a = runner.run(&context, &node).await.unwrap(); + let b = runner.run(&context, &node).await.unwrap(); + + assert_eq!(a.hash, b.hash); + } + + #[tokio::test] + async fn generates_different_hashes_based_on_input() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("hash-inputs"); + let node = container.create_action_node("hash-inputs"); + let context = ActionContext::default(); + + container + .sandbox + .create_file(format!("{}/file.txt", container.project_id), "before"); + + let a = runner.run(&context, &node).await.unwrap(); + + container + .sandbox + .create_file(format!("{}/file.txt", container.project_id), "after"); + + let b = runner.run(&context, &node).await.unwrap(); + + assert_ne!(a.hash, b.hash); + } + + #[tokio::test] + async fn creates_operations_for_each_step() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("create-file"); + let node = 
container.create_action_node("create-file"); + let context = ActionContext::default(); + + let result = runner.run(&context, &node).await.unwrap(); + + assert_eq!(result.operations.len(), 4); + assert_eq!(result.operations[0].type_of, OperationType::HashGeneration); + assert_eq!(result.operations[1].type_of, OperationType::OutputHydration); + assert_eq!(result.operations[2].type_of, OperationType::TaskExecution); + assert_eq!(result.operations[3].type_of, OperationType::ArchiveCreation); + assert_eq!(result.operations[0].status, ActionStatus::Passed); + assert_eq!(result.operations[1].status, ActionStatus::Skipped); + assert_eq!(result.operations[2].status, ActionStatus::Passed); + assert_eq!(result.operations[3].status, ActionStatus::Passed); + } + + #[tokio::test] + async fn running_again_hits_the_output_cache() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("create-file"); + let node = container.create_action_node("create-file"); + let context = ActionContext::default(); + + let before = runner.run(&context, &node).await.unwrap(); + + assert_eq!(before.operations.len(), 4); + + let result = runner.run(&context, &node).await.unwrap(); + + assert_eq!(before.hash, result.hash); + assert_eq!(result.operations.len(), 2); + assert_eq!(result.operations[0].type_of, OperationType::HashGeneration); + assert_eq!(result.operations[1].type_of, OperationType::OutputHydration); + assert_eq!(result.operations[0].status, ActionStatus::Passed); + assert_eq!(result.operations[1].status, ActionStatus::Cached); + } + + #[tokio::test] + #[should_panic(expected = "defines outputs, but none exist")] + async fn errors_if_outputs_missing() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("missing-output"); + let node = container.create_action_node("missing-output"); + let context = 
ActionContext::default(); + + runner.run(&context, &node).await.unwrap(); + } + + #[tokio::test] + #[should_panic(expected = "defines outputs, but none exist")] + async fn errors_if_outputs_missing_via_glob() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("missing-output-glob"); + let node = container.create_action_node("missing-output-glob"); + let context = ActionContext::default(); + + runner.run(&context, &node).await.unwrap(); + } + } + + mod without_cache { + use super::*; + + #[tokio::test] + async fn creates_cache_state_file() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("without-cache"); + let node = container.create_action_node("without-cache"); + let context = ActionContext::default(); + + runner.run(&context, &node).await.unwrap(); + + assert!(container + .sandbox + .path() + .join(".moon/cache/states") + .join(container.project_id) + .join("without-cache/lastRun.json") + .exists()); + } + + #[tokio::test] + async fn doesnt_generate_a_hash() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("without-cache"); + let node = container.create_action_node("without-cache"); + let context = ActionContext::default(); + + let result = runner.run(&context, &node).await.unwrap(); + + assert!(result.hash.is_none()); + } + + #[tokio::test] + async fn doesnt_create_non_task_operations() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("without-cache"); + let node = container.create_action_node("without-cache"); + let context = ActionContext::default(); + + let result = runner.run(&context, &node).await.unwrap(); + + assert!(result + .operations + .iter() + .all(|op| matches!(op.type_of, 
OperationType::TaskExecution))); + } + + #[tokio::test] + async fn running_again_reexecutes_task() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("without-cache"); + let node = container.create_action_node("without-cache"); + let context = ActionContext::default(); + + let result = runner.run(&context, &node).await.unwrap(); + + assert_eq!(result.operations.len(), 1); + assert_eq!(result.operations[0].type_of, OperationType::TaskExecution); + assert_eq!(result.operations[0].status, ActionStatus::Passed); + + let result = runner.run(&context, &node).await.unwrap(); + + assert_eq!(result.operations.len(), 1); + assert_eq!(result.operations[0].type_of, OperationType::TaskExecution); + assert_eq!(result.operations[0].status, ActionStatus::Passed); + } + } + } + + mod is_cached { + use super::*; + + #[tokio::test] + async fn returns_none_by_default() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + assert_eq!(runner.is_cached("hash123").await.unwrap(), None); + } + + #[tokio::test] + async fn sets_the_hash_to_cache() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + runner.is_cached("hash123").await.unwrap(); + + assert_eq!(runner.cache.data.hash, "hash123"); + } + + mod previous_output { + use super::*; + + #[tokio::test] + async fn returns_if_hashes_match() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + runner.cache.data.exit_code = 0; + runner.cache.data.hash = "hash123".into(); + + assert_eq!( + runner.is_cached("hash123").await.unwrap(), + Some(HydrateFrom::PreviousOutput) + ); + } + + #[tokio::test] + async fn skips_if_hashes_dont_match() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + 
runner.cache.data.exit_code = 0; + runner.cache.data.hash = "otherhash456".into(); + + assert_eq!(runner.is_cached("hash123").await.unwrap(), None); + } + + #[tokio::test] + async fn skips_if_codes_dont_match() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + runner.cache.data.exit_code = 2; + runner.cache.data.hash = "hash123".into(); + + assert_eq!(runner.is_cached("hash123").await.unwrap(), None); + } + + #[tokio::test] + async fn skips_if_outputs_dont_exist() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("outputs"); + + runner.cache.data.exit_code = 0; + runner.cache.data.hash = "hash123".into(); + + assert_eq!(runner.is_cached("hash123").await.unwrap(), None); + } + + #[tokio::test] + async fn returns_if_outputs_do_exist() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + runner.cache.data.exit_code = 0; + runner.cache.data.hash = "hash123".into(); + container.sandbox.create_file("project/file.txt", ""); + + assert_eq!( + runner.is_cached("hash123").await.unwrap(), + Some(HydrateFrom::PreviousOutput) + ); + } + + #[tokio::test] + async fn returns_none_if_non_zero_exit() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + runner.cache.data.exit_code = 1; + + assert_eq!(runner.is_cached("hash123").await.unwrap(), None); + } + } + + mod local_cache { + use super::*; + + #[tokio::test] + async fn returns_if_archive_exists() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + container + .sandbox + .create_file(".moon/cache/outputs/hash123.tar.gz", ""); + + assert_eq!( + runner.is_cached("hash123").await.unwrap(), + Some(HydrateFrom::LocalCache) + ); + } + + #[tokio::test] + async fn skips_if_archive_doesnt_exist() { + let container = 
TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + assert_eq!(runner.is_cached("hash123").await.unwrap(), None); + } + + #[tokio::test] + async fn skips_if_cache_isnt_readable() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + container + .sandbox + .create_file(".moon/cache/outputs/hash123.tar.gz", ""); + + env::set_var("MOON_CACHE", "off"); + + assert_eq!(runner.is_cached("hash123").await.unwrap(), None); + + env::remove_var("MOON_CACHE"); + } + + #[tokio::test] + async fn skips_if_cache_is_writeonly() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + + container + .sandbox + .create_file(".moon/cache/outputs/hash123.tar.gz", ""); + + env::set_var("MOON_CACHE", "write"); + + assert_eq!(runner.is_cached("hash123").await.unwrap(), None); + + env::remove_var("MOON_CACHE"); + } + } + } + + mod is_dependencies_complete { + use super::*; + + #[tokio::test] + async fn returns_true_if_no_deps() { + let container = TaskRunnerContainer::new("runner").await; + let runner = container.create_runner("no-deps"); + let context = ActionContext::default(); + + assert!(runner.is_dependencies_complete(&context).unwrap()); + } + + #[tokio::test] + async fn returns_false_if_dep_failed() { + let container = TaskRunnerContainer::new("runner").await; + let runner = container.create_runner("has-deps"); + let context = ActionContext::default(); + + context + .target_states + .insert(Target::new("project", "dep").unwrap(), TargetState::Failed) + .unwrap(); + + assert!(!runner.is_dependencies_complete(&context).unwrap()); + } + + #[tokio::test] + async fn returns_false_if_dep_skipped() { + let container = TaskRunnerContainer::new("runner").await; + let runner = container.create_runner("has-deps"); + let context = ActionContext::default(); + + context + .target_states + .insert(Target::new("project", 
"dep").unwrap(), TargetState::Skipped) + .unwrap(); + + assert!(!runner.is_dependencies_complete(&context).unwrap()); + } + + #[tokio::test] + async fn returns_true_if_dep_passed() { + let container = TaskRunnerContainer::new("runner").await; + let runner = container.create_runner("no-deps"); + let context = ActionContext::default(); + + context + .target_states + .insert( + Target::new("project", "dep").unwrap(), + TargetState::Passed("hash123".into()), + ) + .unwrap(); + + assert!(runner.is_dependencies_complete(&context).unwrap()); + } + + #[tokio::test] + #[should_panic(expected = "Encountered a missing hash for target project:dep")] + async fn errors_if_dep_not_ran() { + let container = TaskRunnerContainer::new("runner").await; + let runner = container.create_runner("has-deps"); + let context = ActionContext::default(); + + runner.is_dependencies_complete(&context).unwrap(); + } + } + + mod generate_hash { + use super::*; + + #[tokio::test] + async fn generates_a_hash() { + let container = TaskRunnerContainer::new("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("base"); + let context = ActionContext::default(); + let node = container.create_action_node("base"); + + let hash = runner.generate_hash(&context, &node).await.unwrap(); + + // 64 bytes + assert_eq!(hash.len(), 64); + } + + #[tokio::test] + async fn generates_a_different_hash_via_passthrough_args() { + let container = TaskRunnerContainer::new("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("base"); + let mut context = ActionContext::default(); + let node = container.create_action_node("base"); + + let before_hash = runner.generate_hash(&context, &node).await.unwrap(); + + context + .primary_targets + .insert(Target::new("project", "base").unwrap()); + context.passthrough_args.push("--extra".into()); + + let after_hash = runner.generate_hash(&context, &node).await.unwrap(); + + assert_ne!(before_hash, 
after_hash); + } + + #[tokio::test] + async fn creates_an_operation() { + let container = TaskRunnerContainer::new("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("base"); + let context = ActionContext::default(); + let node = container.create_action_node("base"); + + runner.generate_hash(&context, &node).await.unwrap(); + + let operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, OperationType::HashGeneration); + assert_eq!(operation.status, ActionStatus::Passed); + } + + #[tokio::test] + async fn creates_a_manifest_file() { + let container = TaskRunnerContainer::new("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("base"); + let context = ActionContext::default(); + let node = container.create_action_node("base"); + + let hash = runner.generate_hash(&context, &node).await.unwrap(); + + assert!(container + .sandbox + .path() + .join(".moon/cache/hashes") + .join(format!("{hash}.json")) + .exists()); + } + } + + mod execute { + use super::*; + + #[tokio::test] + async fn executes_and_sets_success_state() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("success"); + let node = container.create_action_node("success"); + let context = ActionContext::default(); + + runner + .execute(&context, &node, Some("hash123")) + .await + .unwrap(); + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Passed("hash123".into()) + ); + } + + #[tokio::test] + async fn executes_and_sets_success_state_without_hash() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("success"); + let node = container.create_action_node("success"); + let context = ActionContext::default(); + + runner.execute(&context, &node, 
None).await.unwrap(); + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Passthrough + ); + } + + #[tokio::test] + async fn executes_and_sets_failed_state() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("failure"); + let node = container.create_action_node("failure"); + let context = ActionContext::default(); + + // Swallow panic so we can check operations + let _ = runner.execute(&context, &node, Some("hash123")).await; + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Failed + ); + } + + #[tokio::test] + async fn executes_and_creates_operation_on_success() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("success"); + let node = container.create_action_node("success"); + let context = ActionContext::default(); + + runner + .execute(&context, &node, Some("hash123")) + .await + .unwrap(); + + let operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, OperationType::TaskExecution); + assert_eq!(operation.status, ActionStatus::Passed); + + let output = operation.output.as_ref().unwrap(); + + assert_eq!(output.exit_code, Some(0)); + assert_eq!(output.stdout.as_ref().unwrap().trim(), "test"); + } + + #[tokio::test] + async fn executes_and_creates_operation_on_failure() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("failure"); + let node = container.create_action_node("failure"); + let context = ActionContext::default(); + + // Swallow panic so we can check operations + let _ = runner.execute(&context, &node, Some("hash123")).await; + + let operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, 
OperationType::TaskExecution); + assert_eq!(operation.status, ActionStatus::Failed); + + let output = operation.output.as_ref().unwrap(); + + assert_eq!(output.exit_code, Some(1)); + } + + #[tokio::test] + async fn saves_stdlog_file_to_cache() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("success"); + let node = container.create_action_node("success"); + let context = ActionContext::default(); + + runner + .execute(&context, &node, Some("hash123")) + .await + .unwrap(); + + assert!(container + .sandbox + .path() + .join(".moon/cache/states") + .join(container.project_id) + .join("success/stdout.log") + .exists()); + } + + #[tokio::test] + async fn creates_operation_for_mutex_acquire() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("with-mutex"); + let node = container.create_action_node("with-mutex"); + let context = ActionContext::default(); + + // Swallow panic so we can check operations + let _ = runner.execute(&context, &node, Some("hash123")).await; + + let operation = runner + .operations + .iter() + .find(|a| matches!(a.type_of, OperationType::MutexAcquisition)) + .unwrap(); + + assert_eq!(operation.type_of, OperationType::MutexAcquisition); + assert_eq!(operation.status, ActionStatus::Passed); + } + + #[tokio::test] + #[should_panic(expected = "failed to run")] + async fn errors_when_task_exec_fails() { + let container = TaskRunnerContainer::new_os("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("failure"); + let node = container.create_action_node("failure"); + let context = ActionContext::default(); + + runner + .execute(&context, &node, Some("hash123")) + .await + .unwrap(); + } + } + + mod skip { + use super::*; + + #[tokio::test] + async fn creates_an_operation() { + let container = 
TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + let context = ActionContext::default(); + + runner.skip(&context).unwrap(); + + let operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, OperationType::TaskExecution); + assert_eq!(operation.status, ActionStatus::Skipped); + } + + #[tokio::test] + async fn sets_skipped_state() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + let context = ActionContext::default(); + + runner.skip(&context).unwrap(); + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Skipped + ); + } + } + + mod skip_noop { + use super::*; + + #[tokio::test] + async fn creates_an_operation() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + let context = ActionContext::default(); + + runner.skip_noop(&context).unwrap(); + + let operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, OperationType::NoOperation); + assert_eq!(operation.status, ActionStatus::Passed); + } + + #[tokio::test] + async fn sets_passthrough_state() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("base"); + let context = ActionContext::default(); + + runner.skip_noop(&context).unwrap(); + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Passthrough + ); + } + } + + mod archive { + use super::*; + use std::sync::Arc; + + #[tokio::test] + async fn creates_a_passed_operation_if_archived() { + let container = TaskRunnerContainer::new("runner").await; + container.sandbox.enable_git(); + container.sandbox.create_file("project/file.txt", ""); + + let mut runner = container.create_runner("outputs"); + let result = runner.archive("hash123").await.unwrap(); + + assert!(result); + + let 
operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, OperationType::ArchiveCreation); + assert_eq!(operation.status, ActionStatus::Passed); + } + + #[tokio::test] + async fn creates_a_skipped_operation_if_not_archiveable() { + let container = TaskRunnerContainer::new("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("base"); + let result = runner.archive("hash123").await.unwrap(); + + assert!(!result); + + let operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, OperationType::ArchiveCreation); + assert_eq!(operation.status, ActionStatus::Skipped); + } + + #[tokio::test] + async fn can_archive_tasks_without_outputs() { + let mut container = TaskRunnerContainer::new("runner").await; + container.sandbox.enable_git(); + + if let Some(config) = Arc::get_mut(&mut container.workspace.config) { + config + .runner + .archivable_targets + .push(Target::new("project", "base").unwrap()); + } + + let mut runner = container.create_runner("base"); + + assert!(runner.archive("hash123").await.unwrap()); + } + } + + mod hydrate { + use super::*; + + mod not_cached { + use super::*; + + #[tokio::test] + async fn creates_a_skipped_operation_if_no_cache() { + let container = TaskRunnerContainer::new("runner").await; + container.sandbox.enable_git(); + + let mut runner = container.create_runner("outputs"); + + let context = ActionContext::default(); + let result = runner.hydrate(&context, "hash123").await.unwrap(); + + assert!(!result); + + let operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, OperationType::OutputHydration); + assert_eq!(operation.status, ActionStatus::Skipped); + } + } + + mod previous_output { + use super::*; + + fn setup_previous_state(container: &TaskRunnerContainer, runner: &mut TaskRunner) { + container.sandbox.enable_git(); + container.sandbox.create_file("project/file.txt", ""); + + runner.cache.data.exit_code = 0; + 
runner.cache.data.hash = "hash123".into(); + } + + #[tokio::test] + async fn creates_a_cached_operation() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("outputs"); + + setup_previous_state(&container, &mut runner); + + let context = ActionContext::default(); + let result = runner.hydrate(&context, "hash123").await.unwrap(); + + assert!(result); + + let operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, OperationType::OutputHydration); + assert_eq!(operation.status, ActionStatus::Cached); + } + + #[tokio::test] + async fn sets_passed_state() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("outputs"); + + setup_previous_state(&container, &mut runner); + + let context = ActionContext::default(); + runner.hydrate(&context, "hash123").await.unwrap(); + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Passed("hash123".into()) + ); + } + } + + mod local_cache { + use std::fs; + + use super::*; + + fn setup_local_state(container: &TaskRunnerContainer, _runner: &mut TaskRunner) { + container.sandbox.enable_git(); + container.pack_archive("outputs"); + } + + #[tokio::test] + async fn creates_a_cached_operation() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("outputs"); + + setup_local_state(&container, &mut runner); + + let context = ActionContext::default(); + let result = runner.hydrate(&context, "hash123").await.unwrap(); + + assert!(result); + + let operation = runner.operations.last().unwrap(); + + assert_eq!(operation.type_of, OperationType::OutputHydration); + assert_eq!(operation.status, ActionStatus::Cached); + } + + #[tokio::test] + async fn sets_passed_state() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("outputs"); + + 
setup_local_state(&container, &mut runner); + + let context = ActionContext::default(); + runner.hydrate(&context, "hash123").await.unwrap(); + + assert_eq!( + context + .target_states + .get(&runner.task.target) + .unwrap() + .get(), + &TargetState::Passed("hash123".into()) + ); + } + + #[tokio::test] + async fn unpacks_archive_into_project() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("outputs"); + + setup_local_state(&container, &mut runner); + + let context = ActionContext::default(); + runner.hydrate(&context, "hash123").await.unwrap(); + + let output_file = container.sandbox.path().join("project/file.txt"); + + assert!(output_file.exists()); + assert_eq!(fs::read_to_string(output_file).unwrap(), "content"); + } + + #[tokio::test] + async fn loads_stdlogs_in_archive_into_operation() { + let container = TaskRunnerContainer::new("runner").await; + let mut runner = container.create_runner("outputs"); + + setup_local_state(&container, &mut runner); + + let context = ActionContext::default(); + let result = runner.hydrate(&context, "hash123").await.unwrap(); + + assert!(result); + + let operation = runner.operations.last().unwrap(); + let output = operation.output.as_ref().unwrap(); + + assert_eq!(output.exit_code.unwrap(), 0); + assert_eq!(output.stderr.as_deref().unwrap(), "stderr"); + assert_eq!(output.stdout.as_deref().unwrap(), "stdout"); + } + } + } +} diff --git a/nextgen/task-runner/tests/utils.rs b/nextgen/task-runner/tests/utils.rs new file mode 100644 index 00000000000..5224e9057b8 --- /dev/null +++ b/nextgen/task-runner/tests/utils.rs @@ -0,0 +1,187 @@ +#![allow(dead_code)] + +use moon_action::{ActionNode, RunTaskNode}; +use moon_action_context::ActionContext; +use moon_console::Console; +use moon_platform::{PlatformManager, Runtime}; +use moon_process::Command; +use moon_project::Project; +use moon_task::Task; +use moon_task_runner::command_builder::CommandBuilder; +use 
moon_task_runner::command_executor::CommandExecutor; +use moon_task_runner::output_archiver::OutputArchiver; +use moon_task_runner::output_hydrater::OutputHydrater; +use moon_task_runner::TaskRunner; +use moon_test_utils2::{ + generate_platform_manager_from_sandbox, generate_project_graph_from_sandbox, ProjectGraph, +}; +use moon_workspace::Workspace; +use proto_core::ProtoEnvironment; +use starbase_archive::Archiver; +use starbase_sandbox::{create_sandbox, Sandbox}; +use std::fs; +use std::path::{Path, PathBuf}; +use std::sync::Arc; + +pub fn create_workspace(root: &Path) -> Workspace { + Workspace::load_from(root, ProtoEnvironment::new_testing(root)).unwrap() +} + +pub fn create_node(task: &Task) -> ActionNode { + ActionNode::RunTask(Box::new(RunTaskNode::new( + task.target.clone(), + Runtime::system(), + ))) +} + +pub struct TaskRunnerContainer { + pub sandbox: Sandbox, + pub console: Arc<Console>, + pub platform_manager: PlatformManager, + pub project_graph: ProjectGraph, + pub project: Arc<Project>, + pub project_id: String, + pub workspace: Workspace, +} + +impl TaskRunnerContainer { + pub async fn new_for_project(fixture: &str, project_id: &str) -> Self { + let sandbox = create_sandbox(fixture); + let workspace = create_workspace(sandbox.path()); + let project_graph = generate_project_graph_from_sandbox(sandbox.path()).await; + let project = project_graph.get(project_id).unwrap(); + let platform_manager = generate_platform_manager_from_sandbox(sandbox.path()).await; + + Self { + sandbox, + console: Arc::new(Console::new_testing()), + platform_manager, + project_graph, + project, + project_id: project_id.to_owned(), + workspace, + } + } + + pub async fn new_os(fixture: &str) -> Self { + Self::new_for_project(fixture, if cfg!(windows) { "windows" } else { "unix" }).await + } + + pub async fn new(fixture: &str) -> Self { + Self::new_for_project(fixture, "project").await + } + + pub fn create_archiver(&self, task_id: &str) -> OutputArchiver { + let task = 
self.project.get_task(task_id).unwrap(); + + OutputArchiver { + project_config: &self.project.config, + task, + workspace: &self.workspace, + } + } + + pub fn create_hydrator(&self, task_id: &str) -> OutputHydrater { + let task = self.project.get_task(task_id).unwrap(); + + OutputHydrater { + task, + workspace: &self.workspace, + } + } + + pub async fn create_command(&self, context: ActionContext) -> Command { + self.create_command_with_config(context, |_, _| {}).await + } + + pub async fn create_command_with_config( + &self, + context: ActionContext, + mut op: impl FnMut(&mut Task, &mut ActionNode), + ) -> Command { + let mut task = self.project.get_task("base").unwrap().clone(); + let mut node = create_node(&task); + + op(&mut task, &mut node); + + self.internal_create_command(&context, &task, &node).await + } + + pub async fn create_command_executor( + &self, + task_id: &str, + context: &ActionContext, + ) -> CommandExecutor { + let task = self.project.get_task(task_id).unwrap(); + let node = create_node(task); + + CommandExecutor::new( + &self.workspace, + &self.project, + task, + &node, + self.console.clone(), + self.internal_create_command(context, task, &node).await, + ) + } + + pub fn create_runner(&self, task_id: &str) -> TaskRunner { + let task = self.project.get_task(task_id).unwrap(); + + let mut runner = + TaskRunner::new(&self.workspace, &self.project, task, self.console.clone()).unwrap(); + runner.set_platform_manager(&self.platform_manager); + runner + } + + pub fn create_action_node(&self, task_id: &str) -> ActionNode { + let task = self.project.get_task(task_id).unwrap(); + + create_node(task) + } + + pub fn pack_archive(&self, task_id: &str) -> PathBuf { + let sandbox = &self.sandbox; + let file = sandbox.path().join(".moon/cache/outputs/hash123.tar.gz"); + + let out = format!( + ".moon/cache/states/{}/{}/stdout.log", + self.project_id, task_id, + ); + + let err = format!( + ".moon/cache/states/{}/{}/stderr.log", + self.project_id, task_id, + ); 
+ + let txt = format!("{}/file.txt", self.project_id); + + sandbox.create_file(&out, "stdout"); + sandbox.create_file(&err, "stderr"); + sandbox.create_file(&txt, "content"); + + let mut archiver = Archiver::new(sandbox.path(), &file); + archiver.add_source_file(&out, None); + archiver.add_source_file(&err, None); + archiver.add_source_file(&txt, None); + archiver.pack_from_ext().unwrap(); + + // Remove sources so we can test unpacking + fs::remove_file(sandbox.path().join(out)).unwrap(); + fs::remove_file(sandbox.path().join(err)).unwrap(); + fs::remove_file(sandbox.path().join(txt)).unwrap(); + + file + } + + async fn internal_create_command( + &self, + context: &ActionContext, + task: &Task, + node: &ActionNode, + ) -> Command { + let mut builder = CommandBuilder::new(&self.workspace, &self.project, task, node); + builder.set_platform_manager(&self.platform_manager); + builder.build(context).await.unwrap() + } +} diff --git a/nextgen/task/Cargo.toml b/nextgen/task/Cargo.toml index 3d95c89acb7..296b01c748d 100644 --- a/nextgen/task/Cargo.toml +++ b/nextgen/task/Cargo.toml @@ -22,3 +22,6 @@ once_cell = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/task/src/task.rs b/nextgen/task/src/task.rs index b7db278a2fb..f8e8aa23310 100644 --- a/nextgen/task/src/task.rs +++ b/nextgen/task/src/task.rs @@ -123,7 +123,7 @@ impl Task { if let Ok(var) = env::var(var_name) { if !var.is_empty() { debug!( - target = self.target.as_str(), + task = self.target.as_str(), env_key = var_name, env_val = var, "Affected by environment variable", @@ -139,7 +139,7 @@ impl Task { for file in touched_files { if self.input_files.contains(file) { debug!( - target = ?self.target, + task = self.target.as_str(), input = ?file, "Affected by input file", ); @@ -149,7 +149,7 @@ impl Task { if globset.matches(file.as_str()) { debug!( - target = self.target.as_str(), + task = self.target.as_str(), input = ?file, "Affected 
by input glob", ); @@ -158,16 +158,12 @@ impl Task { } } - debug!( - target = self.target.as_str(), - "Not affected by touched files" - ); + debug!(task = self.target.as_str(), "Not affected by touched files"); Ok(false) } - /// Return a list of all workspace-relative input files for - /// the task + /// Return a list of all workspace-relative input files. pub fn get_input_files( &self, workspace_root: &Path, @@ -175,10 +171,8 @@ impl Task { let mut list = vec![]; for path in &self.input_files { - let file = path.to_path(workspace_root); - // Detect if file actually exists - if file.is_file() { + if path.to_path(workspace_root).is_file() { list.push(path.to_owned()); } } @@ -191,16 +185,13 @@ impl Task { // Glob results are absolute paths! for file in walk_paths { - let path = + list.push( WorkspaceRelativePathBuf::from_path(file.strip_prefix(workspace_root).unwrap()) - .unwrap(); - - list.push(path); + .unwrap(), + ); } } - list.sort(); - Ok(list) } diff --git a/nextgen/test-utils/Cargo.toml b/nextgen/test-utils/Cargo.toml index 7d2f999ce34..593f3d56791 100644 --- a/nextgen/test-utils/Cargo.toml +++ b/nextgen/test-utils/Cargo.toml @@ -18,10 +18,13 @@ proto_core = { workspace = true } starbase_events = { workspace = true } starbase_sandbox = { workspace = true } -# TODO +# TODO Remove moon_platform = { path = "../../crates/core/platform" } moon_bun_platform = { path = "../../crates/bun/platform" } moon_deno_platform = { path = "../../crates/deno/platform" } moon_node_platform = { path = "../../crates/node/platform" } moon_rust_platform = { path = "../../crates/rust/platform" } moon_system_platform = { path = "../../crates/system/platform" } + +[lints] +workspace = true diff --git a/nextgen/time/Cargo.toml b/nextgen/time/Cargo.toml index be0ebc8edda..8c7f8b82f4b 100644 --- a/nextgen/time/Cargo.toml +++ b/nextgen/time/Cargo.toml @@ -12,3 +12,6 @@ publish = false chrono = { workspace = true } humantime = "2.1.0" miette = { workspace = true } + +[lints] +workspace = 
true diff --git a/nextgen/time/src/lib.rs b/nextgen/time/src/lib.rs index f871b33ec65..49218881794 100644 --- a/nextgen/time/src/lib.rs +++ b/nextgen/time/src/lib.rs @@ -1,7 +1,7 @@ -use std::time::{Duration, SystemTime}; - pub use chrono; pub use humantime::{format_duration, parse_duration}; +use std::env; +use std::time::{Duration, SystemTime}; pub fn now_timestamp() -> chrono::NaiveDateTime { chrono::Utc::now().naive_utc() @@ -22,16 +22,16 @@ pub fn is_stale(timestamp: u128, duration: Duration) -> bool { timestamp == 0 || now_millis() >= timestamp + duration.as_millis() } -pub fn elapsed(duration: Duration) -> String { - // if is_test_env() { - // return String::from("100ms"); // Snapshots - // } +pub fn elapsed_opt(duration: Duration) -> Option<String> { + if env::var("MOON_TEST").is_ok() { + return Some("100ms".into()); // Snapshots + } let secs = duration.as_secs(); let nanos = duration.subsec_nanos(); if secs == 0 && nanos == 0 { - return String::from("0s"); + return None; } let years = secs / 31_557_600; @@ -75,10 +75,14 @@ pub fn elapsed(duration: Duration) -> String { } if parts.is_empty() { - parts.push(String::from("0s")) + return None; } - parts.join(" ") + Some(parts.join(" ")) +} + +pub fn elapsed(duration: Duration) -> String { + elapsed_opt(duration).unwrap_or_else(|| String::from("0s")) } #[cfg(test)] diff --git a/nextgen/vcs-hooks/Cargo.toml b/nextgen/vcs-hooks/Cargo.toml index 4acc0c651fc..372026ceaf7 100644 --- a/nextgen/vcs-hooks/Cargo.toml +++ b/nextgen/vcs-hooks/Cargo.toml @@ -23,3 +23,6 @@ tracing = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = true diff --git a/nextgen/vcs/Cargo.toml b/nextgen/vcs/Cargo.toml index 3b92683a1b7..2aedada71ee 100644 --- a/nextgen/vcs/Cargo.toml +++ b/nextgen/vcs/Cargo.toml @@ -27,3 +27,6 @@ tracing = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } tokio = { workspace = true } + +[lints] +workspace = 
true diff --git a/nextgen/vcs/src/git.rs b/nextgen/vcs/src/git.rs index b210adf291e..22ef296345b 100644 --- a/nextgen/vcs/src/git.rs +++ b/nextgen/vcs/src/git.rs @@ -5,7 +5,7 @@ use async_trait::async_trait; use ignore::gitignore::{Gitignore, GitignoreBuilder}; use miette::{Diagnostic, IntoDiagnostic}; use moon_common::path::{RelativePathBuf, WorkspaceRelativePathBuf}; -use moon_common::{Style, Stylize}; +use moon_common::{is_test_env, Style, Stylize}; use once_cell::sync::Lazy; use regex::Regex; use rustc_hash::FxHashSet; @@ -342,7 +342,13 @@ impl Vcs for Git { .create_command(["hash-object", "--stdin-paths"]); command.input([slice.join("\n")]); - let output = self.process.run_command(command, true).await?; + let output = if is_test_env() { + self.process + .run_command_without_cache(command, true) + .await? + } else { + self.process.run_command(command, true).await? + }; for (i, hash) in output.split('\n').enumerate() { if !hash.is_empty() { diff --git a/nextgen/vcs/src/process_cache.rs b/nextgen/vcs/src/process_cache.rs index 6fbcbc61871..019f853b7b4 100644 --- a/nextgen/vcs/src/process_cache.rs +++ b/nextgen/vcs/src/process_cache.rs @@ -98,4 +98,16 @@ impl ProcessCache { Ok(cache) } + + pub async fn run_command_without_cache( + &self, + command: Command, + trim: bool, + ) -> miette::Result<Arc<String>> { + let mut executor = command.create_async(); + let output = executor.exec_capture_output().await?; + let value = output_to_string(&output.stdout); + + Ok(Arc::new(if trim { value.trim().to_owned() } else { value })) + } } diff --git a/nextgen/workspace/Cargo.toml b/nextgen/workspace/Cargo.toml index e723b62a832..2a57430ef41 100644 --- a/nextgen/workspace/Cargo.toml +++ b/nextgen/workspace/Cargo.toml @@ -20,3 +20,6 @@ tracing = { workspace = true } [dev-dependencies] starbase_sandbox = { workspace = true } + +[lints] +workspace = true diff --git a/packages/cli/CHANGELOG.md b/packages/cli/CHANGELOG.md index 7fc349f0278..e7545c85483 100644 --- a/packages/cli/CHANGELOG.md +++ 
b/packages/cli/CHANGELOG.md @@ -12,6 +12,31 @@ ## Unreleased +#### 💥 Breaking + +- Removed the following webhook events associated with task outputs: `target-output.archiving`, + `target-output.archived`, `target-output.hydrating`, `target-output.hydrated`, + `target-output.cache-check`. + +#### 🚀 Updates + +- Rewrote the task runner from the ground up: + - Improved handling and reliability of output archiving and hydration. + - Now tracks metrics for individual operations, like hash generation, output hydration, task + execution, and more. Can be inspected in the run report. +- Added a `--summary` flag to `moon run` and `moon check` that will include a summary of all actions + that were processed/failed within the pipeline. This is the same output used in `moon ci`. +- Added a new console reporting layer that handles the rendering of output in the terminal. + - This enables us to support additional reporters in the future, each with unique UIs. + - Slightly tweaked our current UI rendering. You may notice some differences. + +#### 🐞 Fixes + +- Fixed an issue where actions within the run report were not reflecting the correct status of their + last execution attempt. +- Fixed an issue where "have outputs been created" checks would fail if outputs only contained + negated globs, coupled with literal paths. + #### ⚙️ Internal - Greatly reduced the amount of concurrent locks being held during task execution. May see slight diff --git a/packages/report/src/action.ts b/packages/report/src/action.ts index 5c32f186864..d38262476ea 100644 --- a/packages/report/src/action.ts +++ b/packages/report/src/action.ts @@ -33,16 +33,7 @@ export function hasPassed(status: ActionStatus): boolean { } export function isFlaky(action: Action): boolean { - if (action.flaky) { - return true; - } - - // The flaky field above didn't always exist! 
- if (!action.attempts || action.attempts.length === 0) { - return false; - } - - return hasPassed(action.status) && action.attempts.some((attempt) => hasFailed(attempt.status)); + return action.flaky || false; } export function isSlow(action: Action, slowThreshold: number): boolean { diff --git a/packages/report/tests/action.test.ts b/packages/report/tests/action.test.ts index f7968856f44..1a318731cc2 100644 --- a/packages/report/tests/action.test.ts +++ b/packages/report/tests/action.test.ts @@ -1,4 +1,4 @@ -import type { Action, Attempt } from '@moonrepo/types'; +import type { Action } from '@moonrepo/types'; import { isFlaky, isSlow } from '../src'; const action: Action = { @@ -14,44 +14,17 @@ const action: Action = { label: 'RunTask(app:build)', node: { action: 'sync-workspace', - params: {}, }, nodeIndex: 0, + operations: [], status: 'passed', finishedAt: '2022-09-12T22:50:12.932311Z', startedAt: '2022-09-12T22:50:12.932311Z', }; -const attempt: Attempt = { - duration: null, - exitCode: 0, - finishedAt: null, - index: 1, - startedAt: '', - status: 'running', - stdout: null, - stderr: null, -}; - describe('isFlaky()', () => { - it('returns false if no attempts', () => { + it('returns false by default', () => { expect(isFlaky({ ...action })).toBe(false); - expect(isFlaky({ ...action, attempts: [] })).toBe(false); - }); - - it('returns true if status is passed but an attempt failed', () => { - expect( - isFlaky({ - ...action, - attempts: [ - { - ...attempt, - status: 'failed', - }, - ], - status: 'passed', - }), - ).toBe(true); }); it('returns true if flaky field is true', () => { diff --git a/packages/report/tests/report.test.ts b/packages/report/tests/report.test.ts index de0819822c1..c59eeb18aa1 100644 --- a/packages/report/tests/report.test.ts +++ b/packages/report/tests/report.test.ts @@ -17,9 +17,9 @@ function mockReport(): RunReport { label: 'RunTask(types:build)', node: { action: 'sync-workspace', - params: {}, }, nodeIndex: 0, + operations: [], status: 
'cached', finishedAt: '2022-09-12T22:50:12.932311Z', startedAt: '2022-09-12T22:50:12.932311Z', @@ -37,9 +37,9 @@ function mockReport(): RunReport { label: 'RunTask(runtime:typecheck)', node: { action: 'sync-workspace', - params: {}, }, nodeIndex: 1, + operations: [], status: 'passed', finishedAt: '2022-09-12T22:50:12.932311Z', startedAt: '2022-09-12T22:50:12.932311Z', @@ -57,9 +57,9 @@ function mockReport(): RunReport { label: 'RunTask(types:typecheck)', node: { action: 'sync-workspace', - params: {}, }, nodeIndex: 2, + operations: [], status: 'passed', finishedAt: '2022-09-12T22:50:12.932311Z', startedAt: '2022-09-12T22:50:12.932311Z', @@ -77,9 +77,9 @@ function mockReport(): RunReport { label: 'RunTask(website:typecheck)', node: { action: 'sync-workspace', - params: {}, }, nodeIndex: 3, + operations: [], status: 'passed', finishedAt: '2022-09-12T22:50:12.932311Z', startedAt: '2022-09-12T22:50:12.932311Z', diff --git a/packages/types/src/events.ts b/packages/types/src/events.ts index 15667ee51f4..a8833ed8dea 100644 --- a/packages/types/src/events.ts +++ b/packages/types/src/events.ts @@ -1,6 +1,6 @@ import type { Duration, Runtime } from './common'; import type { Action, ActionContext, ActionNode } from './pipeline'; -import type { Project, Task } from './project'; +import type { Project } from './project'; export interface ProviderEnvironment { baseBranch: string | null; @@ -31,11 +31,6 @@ export type EventType = | 'pipeline.started' | 'project.synced' | 'project.syncing' - | 'target-output.archived' - | 'target-output.archiving' - | 'target-output.cache-check' - | 'target-output.hydrated' - | 'target-output.hydrating' | 'target.ran' | 'target.running' | 'tool.installed' @@ -132,66 +127,6 @@ export interface EventTargetRan { export type PayloadTargetRan = WebhookPayload<'target.ran', EventTargetRan>; -export interface EventTargetOutputArchiving { - hash: string; - project: Project; - target: string; - task: Task; -} - -export type PayloadTargetOutputArchiving = 
WebhookPayload< - 'target-output.archiving', - EventTargetOutputArchiving ->; - -export interface EventTargetOutputArchived { - archivePath: string; - hash: string; - project: Project; - target: string; - task: Task; -} - -export type PayloadTargetOutputArchived = WebhookPayload< - 'target-output.archived', - EventTargetOutputArchived ->; - -export interface EventTargetOutputHydrating { - hash: string; - project: Project; - target: string; - task: Task; -} - -export type PayloadTargetOutputHydrating = WebhookPayload< - 'target-output.hydrating', - EventTargetOutputHydrating ->; - -export interface EventTargetOutputHydrated { - archivePath: string; - hash: string; - project: Project; - target: string; - task: Task; -} - -export type PayloadTargetOutputHydrated = WebhookPayload< - 'target-output.hydrated', - EventTargetOutputHydrated ->; - -export interface EventTargetOutputCacheCheck { - hash: string; - target: string; -} - -export type PayloadTargetOutputCacheCheck = WebhookPayload< - 'target-output.cache-check', - EventTargetOutputCacheCheck ->; - export interface EventToolInstalling { runtime: Runtime; } diff --git a/packages/types/src/pipeline.ts b/packages/types/src/pipeline.ts index ed15507e779..fc82d1dbf73 100644 --- a/packages/types/src/pipeline.ts +++ b/packages/types/src/pipeline.ts @@ -10,6 +10,7 @@ export type ActionStatus = | 'running' | 'skipped'; +/** @deprecated */ export interface Attempt { duration: Duration | null; exitCode: number | null; @@ -21,8 +22,33 @@ export interface Attempt { stdout: string | null; } +export type OperationType = + | 'archive-creation' + | 'hash-generation' + | 'mutex-acquisition' + | 'no-operation' + | 'output-hydration' + | 'task-execution'; + +export interface OperationOutput { + exitCode: number | null; + stderr: string | null; + stdout: string | null; +} + +export interface Operation { + duration: Duration | null; + finishedAt: string | null; + hash: string | null; + output: OperationOutput | null; + startedAt: 
string; + status: ActionStatus; + type: OperationType; +} + export interface Action { allowFailure: boolean; + /** @deprecated */ attempts: Attempt[] | null; createdAt: string; duration: Duration | null; @@ -32,12 +58,13 @@ export interface Action { label: string; node: ActionNode; nodeIndex: number; + operations: Operation[]; startedAt: string | null; status: ActionStatus; } export interface TargetState { - state: 'completed' | 'failed' | 'passthrough' | 'skipped'; + state: 'failed' | 'passed' | 'passthrough' | 'skipped'; hash?: string; } @@ -103,7 +130,7 @@ export interface ActionNodeRunTask { action: 'run-task'; params: { args: string[]; - env: [string, string][]; + env: Record<string, string>; interactive: boolean; persistent: boolean; runtime: Runtime; @@ -128,7 +155,6 @@ export interface ActionNodeSyncProject { export interface ActionNodeSyncWorkspace { action: 'sync-workspace'; - params: {}; } // GRAPH diff --git a/rust-toolchain.toml b/rust-toolchain.toml index 7a231182a99..f92c249df98 100644 --- a/rust-toolchain.toml +++ b/rust-toolchain.toml @@ -1,3 +1,3 @@ [toolchain] profile = "default" -channel = "1.77.2" +channel = "1.78.0" diff --git a/website/docs/concepts/cache.mdx b/website/docs/concepts/cache.mdx index bc51d5d4822..e0eff600501 100644 --- a/website/docs/concepts/cache.mdx +++ b/website/docs/concepts/cache.mdx @@ -53,7 +53,7 @@ timeline, where every point in time will have its own hash + archive that moon c Furthermore, if we receive a cache hit on the hash, and the hash is the same as the last run, and outputs exist, we exit early without hydrating and assume the project is already hydrated. In the -terminal, you'll see a message for "cached from previous run". +terminal, you'll see a message for "cached". 
## File structure diff --git a/website/docs/guides/webhooks.mdx b/website/docs/guides/webhooks.mdx index da05e5f3547..564f4c643c2 100644 --- a/website/docs/guides/webhooks.mdx +++ b/website/docs/guides/webhooks.mdx @@ -461,155 +461,3 @@ set with the error message. "uuid": "..." } ``` - -### Targets - -Targets to run. Each event contains a `hash` field, that is a unique SHA-256 identifier for the the -target's hashed contents. - -### `target-output.cache-check` - - - -Triggered when the pipeline is checking for a cached archive of the currently running target. - -```json -{ - "type": "target-output.cache-check", - "createdAt": "...", - "environment": "...", - "event": { - "hash": "1f5205cdb0912e97190e08a6cf98e41804bf6824b0a325d315e8b488a12677b0", - "target": "app:build" - }, - "uuid": "..." -} -``` - -### `target-output.archiving` - - - -Triggered when the [outputs](../config/project#outputs) of a task are being cached into a tarball -archive. This archive will be stored within `.moon/cache/outputs` on the host machine. - -This event _does not_ trigger if the task has no `outputs`. - -```json -{ - "type": "target-output.archiving", - "createdAt": "...", - "environment": "...", - "event": { - "hash": "1f5205cdb0912e97190e08a6cf98e41804bf6824b0a325d315e8b488a12677b0", - "project": { - "id": "app" - // ... - }, - "target": "app:build", - "task": { - "id": "build" - // ... - } - }, - "uuid": "..." -} -``` - -### `target-output.archived` - - - -Triggered when the [outputs](../config/project#outputs) of a task have been archived and stored in -the `.moon/cache/outputs` directory. The `archivePath` field is an absolute path to this archive, -_but_ is unique to the host machine that the target ran on. - -This event _does not_ trigger if [`target-output.archiving`](#target-outputarchiving) did not run or -failed to run. 
- -```json -{ - "type": "target-output.archived", - "createdAt": "...", - "environment": "...", - "event": { - "archivePath": "...", - "hash": "1f5205cdb0912e97190e08a6cf98e41804bf6824b0a325d315e8b488a12677b0", - "project": { - "id": "app" - // ... - }, - "target": "app:build", - "task": { - "id": "build" - // ... - } - }, - "uuid": "..." -} -``` - -### `target-output.hydrating` - - - -Triggered when a target has a cache hit for the generated hash (a cached archive exists on the local -file system) and the archive is being unpacked into the project directory at the defined -[outputs](../config/project#outputs). - -This event _does not_ trigger if the task has no `outputs`, or there was a cache miss (no matching -archive). - -```json -{ - "type": "target-output.hydrating", - "createdAt": "...", - "environment": "...", - "event": { - "hash": "1f5205cdb0912e97190e08a6cf98e41804bf6824b0a325d315e8b488a12677b0", - "project": { - "id": "app" - // ... - }, - "target": "app:build", - "task": { - "id": "build" - // ... - } - }, - "uuid": "..." -} -``` - -### `target-output.hydrated` - - - -Triggered when a target has hydrated a project with the contents of a cached archive. The -`archivePath` field is an absolute path to this archive, _but_ is unique to the host machine that -the target ran on. - -This event _does not_ trigger if [`target-output.hydrating`](#target-outputhydrating) did not run or -failed to run. - -```json -{ - "type": "target-output.hydrated", - "createdAt": "...", - "environment": "...", - "event": { - "archivePath": "...", - "hash": "1f5205cdb0912e97190e08a6cf98e41804bf6824b0a325d315e8b488a12677b0", - "project": { - "id": "app" - // ... - }, - "target": "app:build", - "task": { - "id": "build" - // ... - } - }, - "uuid": "..." -} -```
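
---

The patch replaces the deprecated per-action `attempts` array with a richer `operations` list (see `packages/types/src/pipeline.ts` above), so report consumers now aggregate over typed operations instead of raw attempts. The following is a minimal sketch of how a consumer *might* read that data; the `timeSpentOn` helper and the sample operations are invented for illustration, the `Duration` shape (`{ secs, nanos }`) is an assumption, and the type declarations are only mirrored locally from this patch:

```typescript
// Types mirrored locally from this patch's pipeline.ts (not imported from @moonrepo/types).
type ActionStatus = 'cached' | 'failed' | 'passed' | 'running' | 'skipped';

type OperationType =
	| 'archive-creation'
	| 'hash-generation'
	| 'mutex-acquisition'
	| 'no-operation'
	| 'output-hydration'
	| 'task-execution';

interface Operation {
	// Assumed Duration shape; the real type comes from './common'.
	duration: { secs: number; nanos: number } | null;
	finishedAt: string | null;
	hash: string | null;
	output: { exitCode: number | null; stderr: string | null; stdout: string | null } | null;
	startedAt: string;
	status: ActionStatus;
	type: OperationType;
}

// Hypothetical helper: total seconds an action spent on one operation type.
function timeSpentOn(operations: Operation[], type: OperationType): number {
	return operations
		.filter((op) => op.type === type && op.duration)
		.reduce((total, op) => total + op.duration!.secs + op.duration!.nanos / 1e9, 0);
}

// Invented sample data, not taken from a real run report.
const operations: Operation[] = [
	{
		duration: { secs: 0, nanos: 500_000_000 },
		finishedAt: null,
		hash: 'abc123',
		output: null,
		startedAt: '',
		status: 'passed',
		type: 'hash-generation',
	},
	{
		duration: { secs: 2, nanos: 0 },
		finishedAt: null,
		hash: null,
		output: { exitCode: 0, stderr: null, stdout: null },
		startedAt: '',
		status: 'passed',
		type: 'task-execution',
	},
];

console.log(timeSpentOn(operations, 'task-execution')); // 2
console.log(timeSpentOn(operations, 'hash-generation')); // 0.5
```

This mirrors the simplification to `isFlaky()` above: rather than reconstructing flakiness from attempt statuses, consumers read precomputed fields and iterate `operations` for per-phase metrics.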