Merged
2 changes: 1 addition & 1 deletion src/cargo/core/compiler/fingerprint/mod.rs
@@ -454,7 +454,7 @@ pub fn prepare_target(
if !dirty_reason.is_fresh_build() {
logger.log(LogMessage::Rebuild {
package_id: unit.pkg.package_id().to_spec(),
target: unit.target.clone(),
target: (&unit.target).into(),
mode: unit.mode,
cause: dirty_reason.clone(),
});
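The change above swaps a full `Target` clone for an `Into` conversion, suggesting the log message stores a trimmed-down target description rather than the whole `Target`. A minimal sketch of that pattern with hypothetical stand-in types (cargo's actual log-side type and its fields may differ):

    // `CargoTarget` and `LogTarget` are hypothetical stand-ins for
    // cargo's `Target` and whatever the log schema stores; only the
    // borrow-to-summary conversion pattern is the point here.
    struct CargoTarget {
        name: String,
        kind: String,
    }

    struct LogTarget {
        name: String,
        kind: String,
    }

    impl From<&CargoTarget> for LogTarget {
        fn from(t: &CargoTarget) -> Self {
            LogTarget {
                name: t.name.clone(),
                kind: t.kind.clone(),
            }
        }
    }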
13 changes: 8 additions & 5 deletions src/cargo/core/compiler/job_queue/mod.rs
@@ -606,7 +606,8 @@ impl<'gctx> DrainState<'gctx> {
.gctx
.shell()
.verbose(|c| c.status("Running", &cmd))?;
self.timings.unit_start(id, self.active[&id].clone());
self.timings
.unit_start(build_runner, id, self.active[&id].clone());
}
Message::Stdout(out) => {
writeln!(build_runner.bcx.gctx.shell().out(), "{}", out)?;
@@ -712,7 +713,7 @@ impl<'gctx> DrainState<'gctx> {
self.tokens.push(token);
}
Message::SectionTiming(id, section) => {
self.timings.unit_section_timing(id, &section);
self.timings.unit_section_timing(build_runner, id, &section);
}
}

@@ -1122,10 +1123,12 @@ impl<'gctx> DrainState<'gctx> {
unit.show_warnings(build_runner.bcx.gctx),
)?;
}
let unlocked = self.queue.finish(unit, &artifact);
let unblocked = self.queue.finish(unit, &artifact);
match artifact {
Artifact::All => self.timings.unit_finished(build_runner, id, unlocked),
Artifact::Metadata => self.timings.unit_rmeta_finished(id, unlocked),
Artifact::All => self.timings.unit_finished(build_runner, id, unblocked),
Artifact::Metadata => self
.timings
.unit_rmeta_finished(build_runner, id, unblocked),
}
Ok(())
}
3 changes: 2 additions & 1 deletion src/cargo/core/compiler/mod.rs
@@ -1108,7 +1108,8 @@ fn add_allow_features(build_runner: &BuildRunner<'_, '_>, cmd: &mut ProcessBuild
/// [`--error-format`]: https://doc.rust-lang.org/nightly/rustc/command-line-arguments.html#--error-format-control-how-errors-are-produced
fn add_error_format_and_color(build_runner: &BuildRunner<'_, '_>, cmd: &mut ProcessBuilder) {
let enable_timings = build_runner.bcx.gctx.cli_unstable().section_timings
&& !build_runner.bcx.build_config.timing_outputs.is_empty();
&& (!build_runner.bcx.build_config.timing_outputs.is_empty()
|| build_runner.bcx.logger.is_some());
if enable_timings {
cmd.arg("-Zunstable-options");
}
163 changes: 120 additions & 43 deletions src/cargo/core/compiler/timings/mod.rs
Contributor:

Before I dig into this too much: do we need special messages for these, or are there existing json messages from cargo we could use if we also log those? If not, are there existing rustc messages that we can translate into cargo messages to handle this?

Member Author:

What Cargo --message-format=json emits today:

  • build-finished — for the entire build, not at the per-unit level
  • build-script-executed — emitted after a build script is executed
  • compiler-artifact — emitted for each rustc invocation when a compiler artifact is produced
  • compiler-message — basically rustc diagnostics
  • timing-info — the unstable aggregated timing info. It is the same as the old timing-info log message before this PR.

None of them carries any elapsed-time information.

What rustc has today:

  • diagnostic — just diagnostics
  • artifact — notice when an artifact is written to disk; contains the file name and emit type
  • future_incompat — for future incompat warnings
  • section_timing — unstable section timing. It looks like:
    {
        "event": "start", /* Marks the "start" or "end" of the compilation section */
        "name": "link",  /* The name of the compilation section */
        // Opaque timestamp when the message was emitted, in microseconds
        // The timestamp is currently relative to the beginning of the compilation session
        "time": 12345
    }

I don't feel like there is anything we can use directly. The timing-info Cargo message is half-broken. compiler-artifact carries more information than timing tracking needs (though perhaps we'd like to expand to that in the future). And we might not want to rely on section_timing's time field, as it is unclear what "beginning of the compilation session" refers to.

At least, the current approach gives us room to iterate on our own.
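For reference, the section_timing shape above maps onto a type like the following. This is a rough sketch assuming serde; cargo's actual `SectionTiming` definition in the diff below may differ:

    use serde::Deserialize;

    #[derive(Deserialize)]
    #[serde(rename_all = "snake_case")]
    enum SectionTimingEvent {
        Start,
        End,
    }

    #[derive(Deserialize)]
    struct SectionTiming {
        // Marks the "start" or "end" of the compilation section.
        event: SectionTimingEvent,
        // The name of the compilation section, e.g. "link".
        name: String,
        // Opaque timestamp in microseconds, relative to the start of
        // the compilation session.
        time: u64,
    }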

Contributor:

If we view this as a hack and document that we need to consolidate our messages, I'm fine moving forward with this.

Member Author:

Yeah, it is not the final version. I'd love to hear your thoughts on consolidating our JSON messages and logging in the long term.

Contributor:

I think we need to flesh out our json messages more.

From #7614 (comment)

I think we should work to improve our json messages. The build analysis work is one area that will help shine a light on that. Another is the new cargo-fix architecture, which would ideally have cargo check-like progress while being a wrapper around cargo check, so we'll need json messages to support all of the progress-reporting needs.

If we log all json messages, we'll have redundancy with this.

From #t-cargo > build analysis log format @ 💬

would love for us to record all --message-format json messages

I also wonder if we should go with the full inverse: all logged messages are also json messages. Of course, we'll likely need to keep them distinct while the feature is unstable and we work through what we want the schema to be.
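Pieced together from the diff below, the per-unit log messages introduced here look roughly like the following sketch. The variant and field names come from the diff; the stand-in type aliases and exact field types are assumptions, and other variants such as `Rebuild` are omitted:

    // Stand-ins for cargo-internal types (the real messages use
    // `PackageIdSpec`, a target summary, and `CompileMode`).
    type PackageIdSpec = String;
    type LogTarget = String;
    type CompileMode = String;

    enum LogMessage {
        // Carries the full unit description once; later events refer
        // back to the unit by `index` to keep the log small.
        UnitStarted {
            package_id: PackageIdSpec,
            target: LogTarget,
            mode: CompileMode,
            index: u64,
            elapsed: f64,
        },
        // The `.rmeta` is ready; `unblocked` lists the indices of
        // reverse dependencies that may now start.
        UnitRmetaFinished {
            index: u64,
            elapsed: f64,
            unblocked: Vec<u64>,
        },
        UnitFinished {
            index: u64,
            elapsed: f64,
            unblocked: Vec<u64>,
        },
        UnitSectionStarted {
            index: u64,
            elapsed: f64,
            section: String,
        },
        UnitSectionFinished {
            index: u64,
            elapsed: f64,
            section: String,
        },
    }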

@@ -17,6 +17,7 @@ use crate::util::{CargoResult, GlobalContext};

use cargo_util::paths;
use indexmap::IndexMap;
use itertools::Itertools as _;
use std::collections::HashMap;
use std::io::BufWriter;
use std::time::{Duration, Instant};
@@ -51,6 +52,11 @@ pub struct Timings<'gctx> {
total_fresh: u32,
/// Total number of dirty units.
total_dirty: u32,
/// A map from unit to index.
///
/// This is for saving log size.
/// Only the unit-started event needs to hold the entire unit information.
unit_to_index: HashMap<Unit, u64>,
/// Time tracking for each individual unit.
unit_times: Vec<UnitTime>,
/// Units that are in the process of being built.
@@ -89,10 +95,10 @@ struct UnitTime {
/// The time when the `.rmeta` file was generated, an offset in seconds
/// from `start`.
rmeta_time: Option<f64>,
/// Reverse deps that are freed to run after this unit finished.
unlocked_units: Vec<Unit>,
/// Same as `unlocked_units`, but unlocked by rmeta.
unlocked_rmeta_units: Vec<Unit>,
/// Reverse deps that are unblocked and ready to run after this unit finishes.
unblocked_units: Vec<Unit>,
/// Same as `unblocked_units`, but unblocked by rmeta.
unblocked_rmeta_units: Vec<Unit>,
/// Individual compilation section durations, gathered from `--json=timings`.
///
/// IndexMap is used to keep original insertion order, we want to be able to tell which
@@ -127,18 +133,41 @@ struct UnitData {
start: f64,
duration: f64,
rmeta_time: Option<f64>,
unlocked_units: Vec<usize>,
unlocked_rmeta_units: Vec<usize>,
unblocked_units: Vec<usize>,
unblocked_rmeta_units: Vec<usize>,
sections: Option<Vec<(String, report::SectionData)>>,
}

impl<'gctx> Timings<'gctx> {
pub fn new(bcx: &BuildContext<'_, 'gctx>, root_units: &[Unit]) -> Timings<'gctx> {
let start = bcx.gctx.creation_time();
let has_report = |what| bcx.build_config.timing_outputs.contains(&what);
let report_html = has_report(TimingOutput::Html);
let report_json = has_report(TimingOutput::Json);
let enabled = report_html | report_json | bcx.logger.is_some();

if !enabled {
return Timings {
gctx: bcx.gctx,
enabled,
report_html,
report_json,
start,
start_str: String::new(),
root_targets: Vec::new(),
profile: String::new(),
total_fresh: 0,
total_dirty: 0,
unit_to_index: HashMap::new(),
unit_times: Vec::new(),
active: HashMap::new(),
concurrency: Vec::new(),
last_cpu_state: None,
last_cpu_recording: Instant::now(),
cpu_usage: Vec::new(),
};
}

let mut root_map: HashMap<PackageId, Vec<String>> = HashMap::new();
for unit in root_units {
let target_desc = unit.target.description_named();
@@ -156,29 +185,33 @@ impl<'gctx> Timings<'gctx> {
.collect();
let start_str = jiff::Timestamp::now().to_string();
let profile = bcx.build_config.requested_profile.to_string();
let last_cpu_state = if enabled {
match State::current() {
Ok(state) => Some(state),
Err(e) => {
tracing::info!("failed to get CPU state, CPU tracking disabled: {:?}", e);
None
}
let last_cpu_state = match State::current() {
Ok(state) => Some(state),
Err(e) => {
tracing::info!("failed to get CPU state, CPU tracking disabled: {:?}", e);
None
}
} else {
None
};
let unit_to_index = bcx
.unit_graph
.keys()
.sorted()
.enumerate()
.map(|(i, unit)| (unit.clone(), i as u64))
.collect();

Timings {
gctx: bcx.gctx,
enabled,
report_html,
report_json,
start: bcx.gctx.creation_time(),
start,
start_str,
root_targets,
profile,
total_fresh: 0,
total_dirty: 0,
unit_to_index,
unit_times: Vec::new(),
active: HashMap::new(),
concurrency: Vec::new(),
@@ -189,7 +222,7 @@
}

/// Mark that a unit has started running.
pub fn unit_start(&mut self, id: JobId, unit: Unit) {
pub fn unit_start(&mut self, build_runner: &BuildRunner<'_, '_>, id: JobId, unit: Unit) {
if !self.enabled {
return;
}
@@ -210,21 +243,36 @@
CompileMode::Docscrape => target.push_str(" (doc scrape)"),
CompileMode::RunCustomBuild => target.push_str(" (run)"),
}
let start = self.start.elapsed().as_secs_f64();
let unit_time = UnitTime {
unit,
target,
start: self.start.elapsed().as_secs_f64(),
start,
duration: 0.0,
rmeta_time: None,
unlocked_units: Vec::new(),
unlocked_rmeta_units: Vec::new(),
unblocked_units: Vec::new(),
unblocked_rmeta_units: Vec::new(),
sections: Default::default(),
};
if let Some(logger) = build_runner.bcx.logger {
logger.log(LogMessage::UnitStarted {
package_id: unit_time.unit.pkg.package_id().to_spec(),
target: (&unit_time.unit.target).into(),
mode: unit_time.unit.mode,
index: self.unit_to_index[&unit_time.unit],
elapsed: start,
});
}
assert!(self.active.insert(id, unit_time).is_none());
}

/// Mark that the `.rmeta` file as generated.
pub fn unit_rmeta_finished(&mut self, id: JobId, unlocked: Vec<&Unit>) {
pub fn unit_rmeta_finished(
&mut self,
build_runner: &BuildRunner<'_, '_>,
id: JobId,
unblocked: Vec<&Unit>,
) {
if !self.enabled {
return;
}
@@ -234,20 +282,29 @@
let Some(unit_time) = self.active.get_mut(&id) else {
return;
};
let t = self.start.elapsed().as_secs_f64();
unit_time.rmeta_time = Some(t - unit_time.start);
assert!(unit_time.unlocked_rmeta_units.is_empty());
let elapsed = self.start.elapsed().as_secs_f64();
unit_time.rmeta_time = Some(elapsed - unit_time.start);
assert!(unit_time.unblocked_rmeta_units.is_empty());
unit_time
.unlocked_rmeta_units
.extend(unlocked.iter().cloned().cloned());
.unblocked_rmeta_units
.extend(unblocked.iter().cloned().cloned());

if let Some(logger) = build_runner.bcx.logger {
let unblocked = unblocked.iter().map(|u| self.unit_to_index[u]).collect();
logger.log(LogMessage::UnitRmetaFinished {
index: self.unit_to_index[&unit_time.unit],
elapsed,
unblocked,
});
}
}

/// Mark that a unit has finished running.
pub fn unit_finished(
&mut self,
build_runner: &BuildRunner<'_, '_>,
id: JobId,
unlocked: Vec<&Unit>,
unblocked: Vec<&Unit>,
) {
if !self.enabled {
return;
@@ -256,12 +313,12 @@
let Some(mut unit_time) = self.active.remove(&id) else {
return;
};
let t = self.start.elapsed().as_secs_f64();
unit_time.duration = t - unit_time.start;
assert!(unit_time.unlocked_units.is_empty());
let elapsed = self.start.elapsed().as_secs_f64();
unit_time.duration = elapsed - unit_time.start;
assert!(unit_time.unblocked_units.is_empty());
unit_time
.unlocked_units
.extend(unlocked.iter().cloned().cloned());
.unblocked_units
.extend(unblocked.iter().cloned().cloned());
if self.report_json {
let msg = machine_message::TimingInfo {
package_id: unit_time.unit.pkg.package_id().to_spec(),
Expand All @@ -275,36 +332,56 @@ impl<'gctx> Timings<'gctx> {
crate::drop_println!(self.gctx, "{}", msg);
}
if let Some(logger) = build_runner.bcx.logger {
logger.log(LogMessage::TimingInfo {
package_id: unit_time.unit.pkg.package_id().to_spec(),
target: unit_time.unit.target.clone(),
mode: unit_time.unit.mode,
duration: unit_time.duration,
rmeta_time: unit_time.rmeta_time,
sections: unit_time.sections.clone().into_iter().collect(),
let unblocked = unblocked.iter().map(|u| self.unit_to_index[u]).collect();
logger.log(LogMessage::UnitFinished {
index: self.unit_to_index[&unit_time.unit],
elapsed,
unblocked,
});
}
self.unit_times.push(unit_time);
}

/// Handle the start/end of a compilation section.
pub fn unit_section_timing(&mut self, id: JobId, section_timing: &SectionTiming) {
pub fn unit_section_timing(
&mut self,
build_runner: &BuildRunner<'_, '_>,
id: JobId,
section_timing: &SectionTiming,
) {
if !self.enabled {
return;
}
let Some(unit_time) = self.active.get_mut(&id) else {
return;
};
let now = self.start.elapsed().as_secs_f64();
let elapsed = self.start.elapsed().as_secs_f64();

match section_timing.event {
SectionTimingEvent::Start => {
unit_time.start_section(&section_timing.name, now);
unit_time.start_section(&section_timing.name, elapsed);
}
SectionTimingEvent::End => {
unit_time.end_section(&section_timing.name, now);
unit_time.end_section(&section_timing.name, elapsed);
}
}

if let Some(logger) = build_runner.bcx.logger {
let index = self.unit_to_index[&unit_time.unit];
let section = section_timing.name.clone();
logger.log(match section_timing.event {
SectionTimingEvent::Start => LogMessage::UnitSectionStarted {
index,
elapsed,
section,
},
SectionTimingEvent::End => LogMessage::UnitSectionFinished {
index,
elapsed,
section,
},
})
}
}

/// This is called periodically to mark the concurrency of internal structures.
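With the `unit_to_index` map, only unit-started events carry full unit metadata; every subsequent event for a unit is a handful of small fields. Under an assumed JSON-lines serialization (the framing and field spellings here are hypothetical; only the fields themselves come from the diff above), a single unit's slice of the log might look like:

    {"reason":"unit-started","package_id":"foo@0.1.0","target":"foo (lib)","mode":"build","index":3,"elapsed":0.021}
    {"reason":"unit-section-started","index":3,"elapsed":0.113,"section":"codegen"}
    {"reason":"unit-rmeta-finished","index":3,"elapsed":0.275,"unblocked":[5,9]}
    {"reason":"unit-section-finished","index":3,"elapsed":0.420,"section":"codegen"}
    {"reason":"unit-finished","index":3,"elapsed":0.533,"unblocked":[7]}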
12 changes: 6 additions & 6 deletions src/cargo/core/compiler/timings/report.rs
@@ -373,13 +373,13 @@ fn to_unit_data(unit_times: &[UnitTime]) -> Vec<UnitData> {
// These filter on the unlocked units because not all unlocked
// units are actually "built". For example, Doctest mode units
// don't actually generate artifacts.
let unlocked_units: Vec<usize> = ut
.unlocked_units
let unblocked_units: Vec<usize> = ut
.unblocked_units
.iter()
.filter_map(|unit| unit_map.get(unit).copied())
.collect();
let unlocked_rmeta_units: Vec<usize> = ut
.unlocked_rmeta_units
let unblocked_rmeta_units: Vec<usize> = ut
.unblocked_rmeta_units
.iter()
.filter_map(|unit| unit_map.get(unit).copied())
.collect();
@@ -419,8 +419,8 @@ fn to_unit_data(unit_times: &[UnitTime]) -> Vec<UnitData> {
start: round(ut.start),
duration: round(ut.duration),
rmeta_time: ut.rmeta_time.map(round),
unlocked_units,
unlocked_rmeta_units,
unblocked_units,
unblocked_rmeta_units,
sections,
}
})