…recording modes
- Add --json global flag for structured machine-readable output
- Add --verbose flag for debug logging (logs to stderr)
- Replace stdin-based recording stop with Ctrl+C signal handling
- Add --duration flag for auto-stop after N seconds
- Add --mode flag to choose between studio and instant recording
- Add screenshot command (cap screenshot --screen <id>)
- Add mics listing subcommand (cap record mics)
- Enhance export command with --fps, --resolution, --compression flags
- Add output.rs module for consistent JSON/human output formatting
- Auto-generate timestamped output paths when --path not specified
- All status messages go to stderr, results to stdout
- Designed for AI agent consumption (no interactive prompts)

Co-authored-by: Alex Reibman <areibman@users.noreply.github.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 35ec106ca0
```rust
#[serde(flatten)]
pub data: T,
```
Stop flattening slice payloads in JSON responses
print_list() feeds &[T] into CliOutput, but CliOutput declares that payload with #[serde(flatten)]. Serde can only flatten maps and structs, so cap record screens --json (and the windows, cameras, and mics variants) will fail to serialize at runtime; because both call sites use unwrap_or_default(), the CLI prints a blank line and exits 0 instead of returning JSON. That breaks the machine-readable discovery flow these subcommands were added for.
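One way to address this (a sketch; serde is assumed as the project already uses it, and ListOutput and the items field name are illustrative, not the real CliOutput type) is to give slice payloads a named field instead of flattening them:

```rust
use serde::Serialize;

// Flattening only works for maps/structs; a slice payload needs its own
// named field so serialization cannot fail at runtime.
#[derive(Serialize)]
struct ListOutput<T: Serialize> {
    success: bool,
    items: Vec<T>, // named field serializes fine, unlike #[serde(flatten)] on a slice
}
```

Alternatively, print_list() could wrap the slice in a struct before handing it to CliOutput, keeping the flatten behavior for single-object responses.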
```rust
match self.mode {
    RecordingMode::Studio => self.run_studio(target_info, path, format).await,
    RecordingMode::Instant => self.run_instant(target_info, path, format).await,
}
```
Gate instant mode until the CLI can export it
Selecting RecordingMode::Instant here creates an instant-recording project, but the only export path in this CLI still goes through ExporterBase::builder(...), which rejects any project whose recording_meta.studio_meta() is None (crates/export/src/lib.rs). In practice, cap record --mode instant ... now produces a .cap that cap export cannot turn into an MP4, so this new mode is a dead end for CLI users.
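One way to act on this suggestion (a sketch only; RecordingMode and the string error are simplified stand-ins for the real CLI types) is to reject instant mode up front, before any project directory is created:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum RecordingMode {
    Studio,
    Instant,
}

// Sketch: fail fast so users don't end up with a .cap directory
// that `cap export` cannot process.
fn check_mode_supported(mode: RecordingMode) -> Result<(), String> {
    match mode {
        RecordingMode::Studio => Ok(()),
        RecordingMode::Instant => Err(
            "--mode instant is not yet supported: the export pipeline requires studio metadata"
                .to_string(),
        ),
    }
}

fn main() {
    assert!(check_mode_supported(RecordingMode::Studio).is_ok());
    assert!(check_mode_supported(RecordingMode::Instant).is_err());
    println!("ok");
}
```

The gate can be dropped once the exporter learns to handle projects without studio metadata.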
```rust
match duration {
    Some(secs) => tokio::time::sleep(tokio::time::Duration::from_secs_f64(secs)).await,
    None => std::future::pending().await,
}
```
Reject invalid auto-stop durations before building the timer
tokio::time::Duration::from_secs_f64 panics on negative or NaN inputs, so values like --duration=-1 or --duration=NaN will crash the whole process instead of returning a normal CLI error. Since RecordStart::run has already allocated the output path before this point, the new auto-stop flag can leave behind a partial .cap directory on malformed input.
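A minimal validation sketch (the function name parse_auto_stop is illustrative): std's fallible Duration::try_from_secs_f64 avoids the panic, and an explicit range check gives a clearer CLI error for non-positive values:

```rust
use std::time::Duration;

// Sketch: validate the --duration value before building any timer,
// returning a normal error instead of panicking on -1 or NaN.
fn parse_auto_stop(secs: f64) -> Result<Duration, String> {
    if !secs.is_finite() || secs <= 0.0 {
        return Err(format!(
            "--duration must be a positive number of seconds, got {secs}"
        ));
    }
    Duration::try_from_secs_f64(secs).map_err(|e| e.to_string())
}

fn main() {
    assert!(parse_auto_stop(1.5).is_ok());
    assert!(parse_auto_stop(0.0).is_err());
    assert!(parse_auto_stop(-1.0).is_err());
    assert!(parse_auto_stop(f64::NAN).is_err());
    println!("ok");
}
```

Running this check before the output path is allocated also avoids leaving a partial .cap directory behind.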
- Use cap_recording::DoneFut instead of private cap_recording::output_pipeline::DoneFut
- Extract run_studio/run_instant to free functions to avoid partial move of self

Co-authored-by: Alex Reibman <areibman@users.noreply.github.com>
The recording crate returns CompletedRecording with metadata but does not persist it to disk; the caller is responsible for that. The desktop app does this in handle_recording_finish(). The CLI was missing this step, causing 'Failed to load meta' errors when trying to export. Now both studio and instant recording modes create and save a RecordingMeta with platform info, pretty name, and the recording inner metadata.

Co-authored-by: Alex Reibman <areibman@users.noreply.github.com>
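The persistence step described above can be sketched with std only (the file name recording-meta.json and the JSON fields here are illustrative, not Cap's real schema, and save_recording_meta is a hypothetical helper):

```rust
use std::fs;
use std::path::Path;

// Sketch: write a minimal recording-meta JSON into the project directory,
// mirroring the role of the desktop app's handle_recording_finish().
fn save_recording_meta(project_dir: &Path, pretty_name: &str) -> std::io::Result<()> {
    let meta = format!(
        "{{\"platform\":{:?},\"pretty_name\":{:?}}}",
        std::env::consts::OS,
        pretty_name
    );
    fs::write(project_dir.join("recording-meta.json"), meta)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("cap-cli-meta-demo");
    fs::create_dir_all(&dir)?;
    save_recording_meta(&dir, "Demo Recording")?;
    assert!(dir.join("recording-meta.json").exists());
    println!("ok");
    Ok(())
}
```

In the real CLI the equivalent of this write happens right after the recording completes, so a later cap export finds the metadata it expects.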
The Cap rendering pipeline requires a TimelineConfiguration in the project-config.json to know which segments to render and their durations. Without it, get_segment_time() returns None for every frame, producing a 0-frame export. The desktop app builds this in project_config_from_recording(). The CLI now replicates this: after stopping a studio recording, it reads the segment durations via ProjectRecordingsMeta and creates TimelineSegments mapping each recording segment to its full duration at 1x timescale.

Co-authored-by: Alex Reibman <areibman@users.noreply.github.com>
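The segment-to-timeline mapping described above can be sketched as follows (TimelineSegment and timeline_from_durations are simplified stand-ins for the real project-config types, not Cap's actual API):

```rust
// Sketch: map each recorded segment to a full-duration timeline segment
// at 1x timescale, analogous to what the CLI builds after a studio
// recording stops.
#[derive(Debug, PartialEq)]
struct TimelineSegment {
    recording_segment: usize,
    start: f64,
    end: f64,
    timescale: f64,
}

fn timeline_from_durations(durations: &[f64]) -> Vec<TimelineSegment> {
    durations
        .iter()
        .enumerate()
        .map(|(i, &d)| TimelineSegment {
            recording_segment: i,
            start: 0.0,
            end: d,
            timescale: 1.0,
        })
        .collect()
}

fn main() {
    let timeline = timeline_from_durations(&[4.2, 1.5]);
    assert_eq!(timeline.len(), 2);
    assert_eq!(timeline[1].end, 1.5);
    assert_eq!(timeline[0].timescale, 1.0);
    println!("ok");
}
```

With every segment playing at its full duration and 1x timescale, get_segment_time() resolves a time for each frame and the export is no longer empty.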
This pull request contains changes generated by a Cursor Cloud Agent