Add convert/export modules and subscribe CLI command #1102
Conversation
- Simplify fMP4 import to always use CMAF passthrough (remove Fmp4Config)
- Add moq-mux convert module: Fmp4 (any→CMAF) and Hang (any→Legacy)
- Add moq-mux export module: Fmp4 exporter with init()/frame() methods
- Restructure CLI: server/client × publish/subscribe commands
- Add subscribe command that reads from network and exports to stdout
- Flat publish args with --input/--export instead of subcommands

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the repository settings.
Walkthrough: This pull request adds bidirectional media streaming capabilities to the MoQ CLI and supporting libraries, introducing new convert/export modules in moq-mux and a subscribe CLI command.

🚥 Pre-merge checks: ✅ 3 passed.
Actionable comments posted: 10
🧹 Nitpick comments (1)
rs/moq-mux/src/import/test/mod.rs (1)
5-16: Cover the emitted fragments in the test helper.
5-16: Cover the emitted fragments in the test helper.

`run_fmp4()` only returns `catalog.snapshot()`, so these tests never assert the new passthrough `moof`+`mdat` frames themselves. Adding one round-trip fragment assertion here would catch offset and per-track-fragment regressions before they ship.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rs/moq-mux/src/import/test/mod.rs` around lines 5 - 16, run_fmp4 currently only returns catalog.snapshot() so emitted passthrough fragments (moof+mdat) are never checked; after calling fmp4.decode(&mut buf) drain/collect the messages produced by the moq_lite Broadcast producer (the same broadcast you passed into super::Fmp4::new) and either (a) return them alongside the catalog snapshot or (b) perform a short assertion inside the helper that at least one round-trip fragment (moof+mdat) was emitted and has expected basic structure/offsets; update run_fmp4 (and tests that call it) to reference run_fmp4, super::Fmp4, CatalogProducer, broadcast, decode, and catalog.snapshot when locating where to add the collection/assertion so offset and per-track-fragment regressions are caught.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rs/moq-cli/src/publish.rs`:
- Around line 23-26: Publish::new currently ignores args.export so passing
--export is a no-op; either wire args.export through the publish pipeline to
call moq_mux::convert/export or add a fast-fail for non-identity exports. Update
Publish::new to read args.export (the ExportFormat value) and: if Some(format)
and format != native/identity, route the output through moq_mux::convert (or
moq_mux::export) before publishing; alternatively, if conversion isn't
implemented yet, return an error or exit early when args.export is set to a
non-identity value. Ensure you reference Publish::new, args.export, and
moq_mux::convert/export in your changes so the export flag is honored or
rejected explicitly.
In `@rs/moq-cli/src/subscribe.rs`:
- Around line 71-76: The subscribe flow currently writes a single-track init
(moq_mux::export::Fmp4::init) and then drains tracks sequentially (the loop that
processes tracks around the drain at lines 117-146), which breaks
multi-track/live streams; as a minimal fix, detect catalogs with more than one
track/rendition (inspect the catalog's track/rendition collection) at the start
of the subscribe handler (before creating moq_mux::export::Fmp4 via Fmp4::new
and calling init) and return an explicit error or reject the subscribe request
if count > 1; include a clear error message indicating multi-track muxing is
unsupported until concurrent multiplexing is implemented.
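The fast-fail described above can be sketched as follows. `Catalog` here is a deliberately simplified, hypothetical stand-in; the real catalog type lives in the hang/moq-mux crates and splits tracks across video/audio renditions:

```rust
// Hypothetical, flattened catalog shape for illustration only.
struct Catalog {
    tracks: Vec<String>,
}

// Reject catalogs the single-track fMP4 exporter cannot represent yet.
fn check_single_track(catalog: &Catalog) -> Result<(), String> {
    if catalog.tracks.len() > 1 {
        return Err(format!(
            "multi-track muxing is unsupported: catalog has {} tracks",
            catalog.tracks.len()
        ));
    }
    Ok(())
}

fn main() {
    let ok = Catalog { tracks: vec!["video".into()] };
    let bad = Catalog { tracks: vec!["video".into(), "audio".into()] };
    assert!(check_single_track(&ok).is_ok());
    assert!(check_single_track(&bad).is_err());
    println!("single-track check behaves as expected");
}
```

Running this check before constructing `Fmp4` keeps the error at the subscribe boundary instead of producing a broken init segment later.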
In `@rs/moq-mux/src/convert/fmp4.rs`:
- Line 49: The spawned tasks for passthrough_track and convert_legacy_to_cmaf
are fire-and-forget so their errors are lost; change each
tokio::spawn(passthrough_track(...)) and
tokio::spawn(convert_legacy_to_cmaf(...)) so you capture the JoinHandle (e.g.,
push it into a local Vec<JoinHandle<anyhow::Result<()>>>) and ensure the spawned
closure returns a Result by calling passthrough_track/convert_legacy_to_cmaf
which should be made to return Result if they don't already; after spawning all
tasks await them (use futures::future::try_join_all or iterate JoinHandle.await
and propagate any Err to the caller) so failures are surfaced instead of
dropped.
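The "collect handles, join, propagate the first error" pattern the prompt describes can be illustrated with `std::thread` as a dependency-free stand-in; the real code would keep `tokio::task::JoinHandle<anyhow::Result<()>>` values and await them with `futures::future::try_join_all`:

```rust
use std::thread;

// Fallible per-track worker; stands in for passthrough_track /
// convert_legacy_to_cmaf, which are async in the real code.
fn process_track(id: u32) -> Result<(), String> {
    if id == 2 {
        Err(format!("track {id}: bad fragment"))
    } else {
        Ok(())
    }
}

// Spawn all workers, keep every JoinHandle, and surface the first error
// instead of dropping the handles fire-and-forget.
fn run_all(ids: &[u32]) -> Result<(), String> {
    let handles: Vec<thread::JoinHandle<Result<(), String>>> = ids
        .iter()
        .map(|&id| thread::spawn(move || process_track(id)))
        .collect();
    for h in handles {
        // Outer `?` covers a panicked worker, inner `?` its returned error.
        h.join().map_err(|_| "worker panicked".to_string())??;
    }
    Ok(())
}

fn main() {
    assert!(run_all(&[0, 1]).is_ok());
    assert!(run_all(&[0, 2]).is_err());
    println!("errors are propagated, not dropped");
}
```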
- Around line 390-398: The seq_tier_0 field in the Av1c construction inside the
VideoCodec::AV1(av1) match is incorrectly derived from av1.bitdepth; remove that
bitdepth-based assignment and either omit seq_tier_0 or set it from explicit
tier metadata if present on the av1 value (do not infer tier from bitdepth).
Update the mp4_atom::Av1c construction in the VideoCodec::AV1 arm so that
seq_tier_0 is not driven by av1.bitdepth (use an av1.tier field or leave
seq_tier_0 unset/false) to avoid encoding tier information incorrectly and the
downstream import mapping that converts (seq_tier_0, high_bitdepth) → 12-bit.
- Around line 169-177: The first pass currently constructs a moof via
build_moof(..., None) so trun is encoded without the data_offset field and
yields a smaller moof_size than the second pass; change the first build_moof
call to pass Some(0) for the data_offset so the first encoding includes the
data_offset field and produces the correct moof_size used to compute
data_offset, and make the identical change in the other occurrence where
build_moof is invoked (the matching call in export/fmp4.rs) so both passes
encode identical moof layouts.
In `@rs/moq-mux/src/convert/hang.rs`:
- Around line 56-57: Spawned tasks currently drop errors from fallible async
functions; wrap each tokio::spawn call that runs passthrough_track or
convert_cmaf_to_legacy in an async closure that awaits the fallible call and
surfaces any Err by logging or propagating it (for example: tokio::spawn(async
move { if let Err(e) = passthrough_track(...).await { tracing::error!(%e,
"passthrough_track failed for track id ..."); } })), and do the same pattern for
convert_cmaf_to_legacy at the other spawn sites so errors are not silently
dropped.
- Around line 224-257: The extract_from_moof_mdat loop resets offset per trun
and discards data_offset; change offset handling so it is declared/initialized
outside the for trun in &traf.trun loop (matching import/fmp4.rs) so multiple
trun boxes within a traf accumulate offsets correctly, use the trun.data_offset
value to compute the initial offset relative to mdat (instead of let _ =
data_offset), and replace the silent skip when a sample range is out-of-bounds
with a defensive anyhow::ensure!() that returns an error if offset + size >
mdat.data.len() (referencing extract_from_moof_mdat, traf.trun,
trun.data_offset, offset, mdat.data, and ensure!).
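A minimal sketch of the corrected offset handling, using simplified stand-in types rather than the real `mp4_atom` boxes, and a plain `Err` in place of `anyhow::ensure!`:

```rust
// Simplified stand-in for the parsed trun box; the real type comes from mp4-atom.
struct Trun {
    data_offset: Option<i32>,
    sample_sizes: Vec<usize>,
}

// Extract every sample across all trun boxes of one traf, accumulating the
// offset instead of resetting it per trun, and erroring on out-of-bounds
// ranges instead of silently skipping them.
fn extract_samples(truns: &[Trun], mdat: &[u8]) -> Result<Vec<Vec<u8>>, String> {
    let mut out = Vec::new();
    let mut offset = 0usize; // lives across truns, matching import/fmp4.rs
    for trun in truns {
        if let Some(d) = trun.data_offset {
            // Simplification: treat the mdat payload start as the offset base;
            // the real base is established by the tfhd / moof start.
            offset = d as usize;
        }
        for &size in &trun.sample_sizes {
            if offset + size > mdat.len() {
                return Err(format!(
                    "sample range {}..{} exceeds mdat of {} bytes",
                    offset,
                    offset + size,
                    mdat.len()
                ));
            }
            out.push(mdat[offset..offset + size].to_vec());
            offset += size;
        }
    }
    Ok(out)
}

fn main() {
    let mdat = vec![1u8, 2, 3, 4, 5, 6];
    let truns = vec![
        Trun { data_offset: Some(0), sample_sizes: vec![2, 2] },
        Trun { data_offset: None, sample_sizes: vec![2] }, // continues at offset 4
    ];
    let samples = extract_samples(&truns, &mdat).unwrap();
    assert_eq!(samples, vec![vec![1, 2], vec![3, 4], vec![5, 6]]);
    println!("offset accumulates across truns");
}
```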
In `@rs/moq-mux/src/export/fmp4.rs`:
- Around line 24-58: new() currently builds self.tracks (Fmp4ExportTrack) and
renumbers tracks but init() still returns the raw CMAF init from the Catalog,
causing missing/misnumbered tracks in the moov; fix by making init() construct
its init segment from the same track table used by new()/frame() (self.tracks)
or else refuse multi-track exports: locate the logic that returns the init
segment for Container::Cmaf (uses parse_timescale_from_init/init_data and
Catalog entries), and either (A) merge/rewrite the init atoms to reflect
self.tracks (preserve track_id, timescale, and sequence_number from
Fmp4ExportTrack) so moov matches frame(), or (B) detect multiple tracks in new()
and return an error rejecting multi-track exports until proper init merging is
implemented.
- Around line 107-115: The two-pass moof build is causing an incorrect
data_offset because the first pass omits the trun data-offset field; change the
logic to stop encoding twice and instead call build_moof with Some(0) for
data_offset on the initial/only pass so the trun data-offset-present flag and
4-byte field are included when measuring size — update the call sites where moof
is constructed (the build_moof(...) invocation that currently uses None for
data_offset and the subsequent second-pass with Some(data_offset)) to pass
Some(0) and remove the second pass and manual data_offset calculation, keeping
use of seq, track.track_id, dts, payload.len(), and flags unchanged.
In `@rs/moq-mux/src/import/fmp4.rs`:
- Around line 555-575: The code currently writes the same self.moof_raw +
mdat_raw to every per-track output (using moof_raw, mdat_raw, create_frame,
frame.write, etc.), which duplicates multi-track fragments to single-track CMAF
outputs; instead, demultiplex the input fragment per traf: for each traf in the
parsed moof iterate and build a per-track moof/mdat pair that contains only that
traf (clone and/or rebuild the moof box with a single traf and adjust
offsets/sizes in its trun/aux info), then call
track.track.append_group()/create_frame() and write the rebuilt per-track moof
and its corresponding mdat slice (not the full multi-track mdat_raw) before
finish() and restoring track.group; use the existing track.group logic
(contains_keyframe, track.group.take(), prev.finish(), track.group = Some(g))
but replace use of self.moof_raw/mdat_raw with the per-traf moof and mdat you
construct.
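The per-traf demultiplexing can be sketched with simplified types, assuming each traf's samples occupy one contiguous byte range of the mdat payload; the real fix must additionally rebuild the `moof` box per traf with `mp4_atom` and adjust the trun offsets:

```rust
// Simplified fragment model: each traf claims a contiguous slice of mdat.
struct Traf {
    track_id: u32,
    offset: usize, // byte offset into the mdat payload
    len: usize,
}

// Split one multi-track fragment into per-track (track_id, mdat slice) pairs,
// instead of writing the full interleaved mdat to every track output.
fn demux_per_traf(trafs: &[Traf], mdat: &[u8]) -> Result<Vec<(u32, Vec<u8>)>, String> {
    trafs
        .iter()
        .map(|t| {
            let end = t.offset + t.len;
            if end > mdat.len() {
                return Err(format!("traf for track {} out of bounds", t.track_id));
            }
            Ok((t.track_id, mdat[t.offset..end].to_vec()))
        })
        .collect()
}

fn main() {
    let mdat = vec![10u8, 11, 20, 21, 22];
    let trafs = vec![
        Traf { track_id: 1, offset: 0, len: 2 },
        Traf { track_id: 2, offset: 2, len: 3 },
    ];
    let parts = demux_per_traf(&trafs, &mdat).unwrap();
    assert_eq!(parts, vec![(1, vec![10, 11]), (2, vec![20, 21, 22])]);
    println!("fragment demuxed per traf");
}
```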
---
Nitpick comments:
In `@rs/moq-mux/src/import/test/mod.rs`:
- Around line 5-16: run_fmp4 currently only returns catalog.snapshot() so
emitted passthrough fragments (moof+mdat) are never checked; after calling
fmp4.decode(&mut buf) drain/collect the messages produced by the moq_lite
Broadcast producer (the same broadcast you passed into super::Fmp4::new) and
either (a) return them alongside the catalog snapshot or (b) perform a short
assertion inside the helper that at least one round-trip fragment (moof+mdat)
was emitted and has expected basic structure/offsets; update run_fmp4 (and tests
that call it) to reference run_fmp4, super::Fmp4, CatalogProducer, broadcast,
decode, and catalog.snapshot when locating where to add the collection/assertion
so offset and per-track-fragment regressions are caught.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0c26a763-424b-497f-ae7f-50b014f6bbb9
📒 Files selected for processing (15)
- rs/moq-cli/src/client.rs
- rs/moq-cli/src/main.rs
- rs/moq-cli/src/publish.rs
- rs/moq-cli/src/server.rs
- rs/moq-cli/src/subscribe.rs
- rs/moq-mux/src/convert/fmp4.rs
- rs/moq-mux/src/convert/hang.rs
- rs/moq-mux/src/convert/mod.rs
- rs/moq-mux/src/export/fmp4.rs
- rs/moq-mux/src/export/mod.rs
- rs/moq-mux/src/import/decoder.rs
- rs/moq-mux/src/import/fmp4.rs
- rs/moq-mux/src/import/hls.rs
- rs/moq-mux/src/import/test/mod.rs
- rs/moq-mux/src/lib.rs
rs/moq-mux/src/convert/fmp4.rs
Outdated
```rust
    name: name.clone(),
    priority: 1,
})?;
tokio::spawn(passthrough_track(input_track, output_track));
```
Surface failures from the spawned track tasks.
These tokio::spawn handles are dropped immediately, so any later error from passthrough_track or convert_legacy_to_cmaf disappears. A bad fragment or write failure will just stall that output track with no signal back to the caller.
Also applies to: 66-66, 84-84, 101-101
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@rs/moq-mux/src/convert/fmp4.rs` at line 49, The spawned tasks for
passthrough_track and convert_legacy_to_cmaf are fire-and-forget so their errors
are lost; change each tokio::spawn(passthrough_track(...)) and
tokio::spawn(convert_legacy_to_cmaf(...)) so you capture the JoinHandle (e.g.,
push it into a local Vec<JoinHandle<anyhow::Result<()>>>) and ensure the spawned
closure returns a Result by calling passthrough_track/convert_legacy_to_cmaf
which should be made to return Result if they don't already; after spawning all
tasks await them (use futures::future::try_join_all or iterate JoinHandle.await
and propagate any Err to the caller) so failures are surfaced instead of
dropped.
rs/moq-mux/src/convert/fmp4.rs
Outdated
```rust
// First pass to get moof size
let moof = build_moof(seq, track_id, dts, data.len() as u32, flags, None);
let mut buf = Vec::new();
moof.encode(&mut buf)?;
let moof_size = buf.len();

// Second pass with data_offset
let data_offset = (moof_size + 8) as i32; // 8 = mdat header
let moof = build_moof(seq, track_id, dts, data.len() as u32, flags, Some(data_offset));
```
🧩 Analysis chain

Verification scripts inspected the two-pass region and `fn build_moof` in `rs/moq-mux/src/convert/fmp4.rs`, confirmed `mp4-atom` is an external crate (found in `Cargo.lock`; no local `Trun` definition exists in the repo), traced how `data_offset` is consumed in `rs/moq-mux/src/import/fmp4.rs`, reviewed `rs/moq-mux/src/import/test/mod.rs`, and found the same two-pass "First pass" pattern in `rs/moq-mux/src/export/fmp4.rs`.
Fix the moof size calculation to account for the data_offset field.

The first pass encodes `trun` without the `data_offset` field when `None` is used, resulting in a smaller moof than the second pass, which includes it. Use `Some(0)` instead of `None` in the first pass so both passes encode the same size and the calculated offset is correct.

Recommended fix

```diff
-let moof = build_moof(seq, track_id, dts, data.len() as u32, flags, None);
+let moof = build_moof(seq, track_id, dts, data.len() as u32, flags, Some(0));
 let mut buf = Vec::new();
 moof.encode(&mut buf)?;
 let moof_size = buf.len();
 let data_offset = (moof_size + 8) as i32; // 8 = mdat header
```

This same issue exists in rs/moq-mux/src/export/fmp4.rs at lines 108 and 115.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// First pass to get moof size
let moof = build_moof(seq, track_id, dts, data.len() as u32, flags, Some(0));
let mut buf = Vec::new();
moof.encode(&mut buf)?;
let moof_size = buf.len();

// Second pass with data_offset
let data_offset = (moof_size + 8) as i32; // 8 = mdat header
let moof = build_moof(seq, track_id, dts, data.len() as u32, flags, Some(data_offset));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@rs/moq-mux/src/convert/fmp4.rs` around lines 169 - 177, The first pass
currently constructs a moof via build_moof(..., None) so trun is encoded without
the data_offset field and yields a smaller moof_size than the second pass;
change the first build_moof call to pass Some(0) for the data_offset so the
first encoding includes the data_offset field and produces the correct moof_size
used to compute data_offset, and make the identical change in the other
occurrence where build_moof is invoked (the matching call in export/fmp4.rs) so
both passes encode identical moof layouts.
rs/moq-mux/src/export/fmp4.rs
Outdated
```rust
// First pass to get moof size
let moof = build_moof(seq, track.track_id, dts, payload.len() as u32, flags, None);
let mut buf = Vec::new();
moof.encode(&mut buf)?;
let moof_size = buf.len();

// Second pass with data_offset
let data_offset = (moof_size + 8) as i32;
let moof = build_moof(seq, track.track_id, dts, payload.len() as u32, flags, Some(data_offset));
```
🧩 Analysis chain

Verification scripts read the two-pass region and `fn build_moof` in `rs/moq-mux/src/export/fmp4.rs` and searched the repo for a local `Trun` definition; none exists, since `mp4_atom` is an external dependency, so its `data_offset` encoding behavior was confirmed with the web queries below.
🌐 Web query:
mp4-atom crate Trun encoding data_offset
💡 Result:
In the mp4-atom crate, `Trun { data_offset: Option<i32>, .. }` is encoded like this:

- The `trun` "data-offset-present" flag is set iff `self.data_offset.is_some()`. (docs.rs)
- The field is then written via `Option<T>: Encode`, which writes the `i32` only when it's `Some` (and writes nothing for `None`). (docs.rs)

So the crate does not compute data_offset for you; you must supply the correct signed offset.

Per ISO/IEC 14496-12, `trun.data_offset` is added to the implicit/explicit data offset established by the `tfhd` in the same `traf`.

Practical implication when muxing fMP4: compute `data_offset` as the byte distance (signed 32-bit) from the tfhd-established base (e.g., `tfhd.base_data_offset` if present; otherwise the implicit base defined by the spec) to the first byte of the run's media data (typically inside the following mdat).

Citations:
- 1, 2: https://docs.rs/mp4-atom/latest/src/mp4_atom/moof/traf/trun.rs.html
- 3, 4: https://hezhaojiang.github.io/slave/MP4/ISO-IEC-14496-12-Base-Format-2015.pdf
🌐 Web query:
MP4 trun box data_offset field optional encoding size
💡 Result:
In an MP4/ISO-BMFF Track Fragment Run Box (trun), the data_offset field is optional and is present only if the trun flags include 0x000001 (data-offset-present). If that flag is not set, the field is omitted. [1]

When present, data_offset is a signed 32-bit integer (4 bytes) and is relative to the base-data-offset established in the Track Fragment Header Box (tfhd). Semantically, it is added to the implicit/explicit offset from the track fragment header to locate the first media data byte for that run. [1]

If data_offset is not present, the run's data starts immediately after the previous run's data, or at the tfhd base-data-offset if it's the first run in that track fragment. [1]

Size impact (bytes on the wire):

- `trun` up through `sample_count` is 16 bytes (size+type 8, version+flags 4, sample_count 4).
- If `data_offset` is present, add 4 bytes (so it starts at byte offset 16 from the start of the `trun`). [2]

Sources:
- [1] ISO/IEC 14496-12:2015, §8.8.8 Track Fragment Run Box (trun) (syntax, flags, and semantics). https://b.goeswhere.com/ISO_IEC_14496-12_2015.pdf
- [2] Example trun layout/offset calculation showing data_offset is the optional next 4 bytes when flags & 0x000001. https://dev.railbird.ai/railbird/react-native-vision-camera/compare/loewy/fragmented-mp4-android..loewy/fragmented-mp4-android-with-rotation-fix
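The 4-byte difference can be reproduced by hand-encoding a minimal `trun` following the layout above. This is a from-scratch sketch for illustration, not the mp4-atom crate's encoder, and it omits the per-sample entries a real trun would carry:

```rust
// Hand-encode a minimal ISO-BMFF trun box: size+type (8 bytes),
// version+flags (4), sample_count (4), then the optional 4-byte
// data_offset iff flags & 0x000001 is set.
fn encode_trun(sample_count: u32, data_offset: Option<i32>) -> Vec<u8> {
    let flags: u32 = if data_offset.is_some() { 0x000001 } else { 0 };
    let body_len = 4 + 4 + if data_offset.is_some() { 4 } else { 0 };
    let size = (8 + body_len) as u32;
    let mut buf = Vec::new();
    buf.extend_from_slice(&size.to_be_bytes());
    buf.extend_from_slice(b"trun");
    buf.extend_from_slice(&flags.to_be_bytes()); // version 0 in high byte, 24-bit flags
    buf.extend_from_slice(&sample_count.to_be_bytes());
    if let Some(d) = data_offset {
        buf.extend_from_slice(&d.to_be_bytes());
    }
    buf
}

fn main() {
    let without = encode_trun(1, None);
    let with = encode_trun(1, Some(0));
    assert_eq!(without.len(), 16);
    assert_eq!(with.len(), 20); // 4 bytes larger: the data_offset field
    println!("data_offset adds {} bytes", with.len() - without.len());
}
```

This is exactly why measuring the moof with `None` and then re-encoding with `Some(data_offset)` undercounts by 4 bytes per trun.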
Remove the two-pass approach and supply Some(0) for data_offset in the initial build, allowing the size to be computed correctly.

When `data_offset` is `None`, the trun box omits the 4-byte field and does not set the data-offset-present flag. The second pass with `Some(data_offset)` then includes these 4 bytes, making the first-pass measurement incorrect. Computing the offset from a moof that is 4 bytes smaller than what you actually encode causes the offset to point into the moof rather than the mdat payload.

Suggested fix

```diff
-let moof = build_moof(seq, track.track_id, dts, payload.len() as u32, flags, None);
+let moof = build_moof(seq, track.track_id, dts, payload.len() as u32, flags, Some(0));
 let mut buf = Vec::new();
 moof.encode(&mut buf)?;
 let moof_size = buf.len();
 let data_offset = (moof_size + 8) as i32;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@rs/moq-mux/src/export/fmp4.rs` around lines 107 - 115, The two-pass moof
build is causing an incorrect data_offset because the first pass omits the trun
data-offset field; change the logic to stop encoding twice and instead call
build_moof with Some(0) for data_offset on the initial/only pass so the trun
data-offset-present flag and 4-byte field are included when measuring size —
update the call sites where moof is constructed (the build_moof(...) invocation
that currently uses None for data_offset and the subsequent second-pass with
Some(data_offset)) to pass Some(0) and remove the second pass and manual
data_offset calculation, keeping use of seq, track.track_id, dts, payload.len(),
and flags unchanged.
- Reject --export in publish (not yet wired)
- Reject multi-track catalogs in subscribe and export::Fmp4::new
- Log errors from spawned converter tasks instead of dropping them
- Fix AV1 seq_tier_0: derive from av1.tier field, not bitdepth
- Fix two-pass moof build: use Some(0) in first pass so data_offset field is included and moof_size is measured correctly
- Fix convert/hang offset handling: declare offset outside trun loop, use ensure!() for bounds checking
- Fix import/fmp4 multi-track passthrough: build per-traf moof+mdat instead of duplicating full fragment to every track

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 4
🧹 Nitpick comments (3)
rs/moq-mux/src/export/fmp4.rs (2)
160-180: LGTM with a note on code duplication. These helper functions work correctly, but `guess_video_timescale` is also duplicated in `convert/fmp4.rs`. Consider consolidating shared utilities.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rs/moq-mux/src/export/fmp4.rs` around lines 160 - 180, The helper guess_video_timescale is duplicated; extract it (and related parse_timescale_from_init if also reusable) into a shared util module (e.g., a new module utils/video_timescale or common/video) with a single exported function guess_video_timescale that accepts the same input (either &VideoConfig or Option<f64> to avoid cyclic deps), update callers in both fmp4.rs files to import and call that single function, remove the duplicate definitions, and ensure the function signature and visibility match all call sites and compile (adjust imports or convert to a small wrapper in callers if necessary).
136-158: Code duplication with `convert/fmp4.rs::build_moof`.

This function is nearly identical to the one in `rs/moq-mux/src/convert/fmp4.rs` (lines 207-229). Consider extracting to a shared utility module to avoid drift between implementations.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@rs/moq-mux/src/export/fmp4.rs` around lines 136 - 158, The build_moof function here duplicates the logic in convert::build_moof; extract the common construction into a shared helper (e.g., a new fmp4::build_moof utility) and have both rs/moq-mux/src/export/fmp4.rs and rs/moq-mux/src/convert/fmp4.rs call that shared function; move the body that creates mp4_atom::Moof (including use of Mfhd, Traf, Tfhd, Tfdt, Trun and TrunEntry and Default usage) into the utility with the same signature (seq: u32, track_id: u32, dts: u64, size: u32, flags: u32, data_offset: Option<i32>) and replace the duplicated implementations with a single call to the new helper.

rs/moq-mux/src/convert/fmp4.rs (1)
485-494: Hardcoded Opus `pre_skip` and `output_gain` may cause audio desync or volume issues.
The `pre_skip` field is typically 312 samples for Opus and affects decoder priming. Using `0` may cause the first few milliseconds of audio to be incorrect. The `output_gain` field affects playback volume. If the source Opus stream has non-zero values for these, they should ideally be preserved from `config.description` if available.
Consider extracting from the Opus header if available
If `config.description` contains the Opus identification header, you could parse out the actual `pre_skip` and `output_gain` values:

```rust
// Example approach if description contains Opus ID header
let (pre_skip, output_gain) = if let Some(desc) = &config.description {
    // Opus ID header: pre_skip at bytes 10-11 (LE), output_gain at bytes 16-17 (LE)
    if desc.len() >= 18 {
        let pre_skip = u16::from_le_bytes([desc[10], desc[11]]);
        let output_gain = i16::from_le_bytes([desc[16], desc[17]]);
        (pre_skip, output_gain)
    } else {
        (312, 0) // sensible defaults
    }
} else {
    (312, 0)
};
```
Verify each finding against the current code and only fix it if needed. In `@rs/moq-mux/src/convert/fmp4.rs` around lines 485 - 494, The Opus branch currently hardcodes mp4_atom::Dops.pre_skip and output_gain to 0; change the AudioCodec::Opus handling in the mapper so it attempts to extract pre_skip (u16 LE) and output_gain (i16 LE) from config.description (the Opus ID header bytes: pre_skip at bytes 10–11, output_gain at bytes 16–17) when present and of sufficient length, falling back to sensible defaults (pre_skip = 312, output_gain = 0) otherwise, and then populate mp4_atom::Dops { pre_skip, output_gain, output_channel_count: config.channel_count as u8, input_sample_rate: config.sample_rate } so the Opus decoder priming and gain are preserved.
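That extraction can be made concrete with a small standalone parser. A sketch under the RFC 7845 OpusHead layout (magic at bytes 0-7, pre_skip at 10-11 LE, output_gain at 16-17 LE); the function name and the hand-built header below are illustrative, not from the PR:

```rust
/// Parse pre_skip (u16 LE, bytes 10-11) and output_gain (i16 LE, bytes 16-17)
/// out of an OpusHead identification header, falling back to (312, 0).
fn opus_priming(desc: Option<&[u8]>) -> (u16, i16) {
    match desc {
        Some(d) if d.len() >= 18 && d.starts_with(b"OpusHead") => {
            let pre_skip = u16::from_le_bytes([d[10], d[11]]);
            let output_gain = i16::from_le_bytes([d[16], d[17]]);
            (pre_skip, output_gain)
        }
        _ => (312, 0), // sensible defaults when no header is present
    }
}

fn main() {
    // Hand-built 19-byte OpusHead: version 1, 2 channels, pre_skip 312,
    // 48 kHz input rate, output_gain 0, mapping family 0.
    let mut hdr = Vec::new();
    hdr.extend_from_slice(b"OpusHead"); // bytes 0-7: magic
    hdr.push(1); // byte 8: version
    hdr.push(2); // byte 9: channel count
    hdr.extend_from_slice(&312u16.to_le_bytes()); // bytes 10-11: pre_skip
    hdr.extend_from_slice(&48_000u32.to_le_bytes()); // bytes 12-15: input sample rate
    hdr.extend_from_slice(&0i16.to_le_bytes()); // bytes 16-17: output_gain
    hdr.push(0); // byte 18: mapping family
    assert_eq!(opus_priming(Some(hdr.as_slice())), (312, 0));
    assert_eq!(opus_priming(None), (312, 0));
    println!("ok");
}
```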
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rs/moq-cli/src/publish.rs`:
- Around line 23-26: The --export flag's help text (the doc comment on the pub
export: Option<ExportFormat> field) claims conversion is supported but the
publish path immediately rejects any Some(_) value; fix by either updating the
doc comment to state conversion is not yet implemented or hide the flag until
wired up (add #[arg(long, hide = true)] to the export field). Also ensure the
publish-side check that rejects Some(_) is kept in sync with the field (adjust
or remove the rejection logic where the code examines export) so help text and
runtime behavior match.
In `@rs/moq-cli/src/subscribe.rs`:
- Around line 56-76: The max_latency flag is never applied when needs_convert is
false because the CMAF-passthrough branch returns the original
broadcast/catalog, making Container::Legacy arms unreachable; update the false
branch in subscribe.rs to enforce the same late-group skipping behavior as the
conversion path by wrapping or adapting self.broadcast so consumers honor
max_latency (i.e., ensure the broadcast returned goes through the same
OrderedConsumer/late-group-skip logic that leads to OrderedConsumer::new(track,
max_latency) being used), or if you choose the alternative, remove/hide the
max_latency flag from the API and related code paths; specifically modify the
branch that currently returns (self.broadcast, catalog) so that it returns a
broadcast wrapper or pipeline equivalent to the converter.run().await? ->
(broadcast.consume(), catalog) behavior that allows OrderedConsumer::new(track,
max_latency) to execute.
- Around line 107-111: The code re-subscribes to a track by name using
broadcast.subscribe_track (changing priority) even though subscribe_track looks
up by name only, so remove the redundant second subscription and reuse the
existing track handle obtained earlier (the local variable track from the
initial for (name, config) ... subscribe_track(...) call); in the consumers loop
(for (name, track, is_cmaf) in consumers) drop the extra subscribe_track call
and operate on the provided track (e.g., call track.next_group().await? directly
for CMAF passthrough), and remove any hardcoded priority changes since priority
isn't used for lookup.
In `@rs/moq-mux/src/import/fmp4.rs`:
- Around line 562-578: The track data slice uses
traf.tfhd.base_data_offset.unwrap_or_default() for track_data_start while
track_data_end uses the adjusted offset variable, causing wrong ranges when
trun.data_offset is present; fix by computing and saving the adjusted per-traf
data-start (apply the same adjustment logic you use for offset when
trun.data_offset exists—subtract moof_size and header_size as done earlier for
trun.data_offset) before the sample-iteration loop and then use that saved value
as track_data_start (instead of traf.tfhd.base_data_offset) when slicing
mdat.data for the per-track fragment (referencing traf.tfhd.base_data_offset,
trun.data_offset, offset, moof_size, header_size, and single_traf_moof).
---
Nitpick comments:
In `@rs/moq-mux/src/convert/fmp4.rs`:
- Around line 485-494: The Opus branch currently hardcodes
mp4_atom::Dops.pre_skip and output_gain to 0; change the AudioCodec::Opus
handling in the mapper so it attempts to extract pre_skip (u16 LE) and
output_gain (i16 LE) from config.description (the Opus ID header bytes: pre_skip
at bytes 10–11, output_gain at bytes 16–17) when present and of sufficient
length, falling back to sensible defaults (pre_skip = 312, output_gain = 0)
otherwise, and then populate mp4_atom::Dops { pre_skip, output_gain,
output_channel_count: config.channel_count as u8, input_sample_rate:
config.sample_rate } so the Opus decoder priming and gain are preserved.
In `@rs/moq-mux/src/export/fmp4.rs`:
- Around line 160-180: The helper guess_video_timescale is duplicated; extract
it (and related parse_timescale_from_init if also reusable) into a shared util
module (e.g., a new module utils/video_timescale or common/video) with a single
exported function guess_video_timescale that accepts the same input (either
&VideoConfig or Option<f64> to avoid cyclic deps), update callers in both
fmp4.rs files to import and call that single function, remove the duplicate
definitions, and ensure the function signature and visibility match all call
sites and compile (adjust imports or convert to a small wrapper in callers if
necessary).
- Around line 136-158: The build_moof function here duplicates the logic in
convert::build_moof; extract the common construction into a shared helper (e.g.,
a new fmp4::build_moof utility) and have both rs/moq-mux/src/export/fmp4.rs and
rs/moq-mux/src/convert/fmp4.rs call that shared function; move the body that
creates mp4_atom::Moof (including use of Mfhd, Traf, Tfhd, Tfdt, Trun and
TrunEntry and Default usage) into the utility with the same signature (seq: u32,
track_id: u32, dts: u64, size: u32, flags: u32, data_offset: Option<i32>) and
replace the duplicated implementations with a single call to the new helper.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f451644e-b4f5-4378-9f94-0132d62a48d2
📒 Files selected for processing (6)
- rs/moq-cli/src/publish.rs
- rs/moq-cli/src/subscribe.rs
- rs/moq-mux/src/convert/fmp4.rs
- rs/moq-mux/src/convert/hang.rs
- rs/moq-mux/src/export/fmp4.rs
- rs/moq-mux/src/import/fmp4.rs
Added: Unit tests for convert module (WIP)
Commit 939e952 adds test scaffolding for the convert module.
Also makes `build_video_init`, `build_audio_init`, and `build_moof_mdat` `pub(crate)` so tests can reuse them.
Known issue: Tests are flaky due to …
Add 5 tests covering the convert module's Fmp4 and Hang converters: - legacy_to_cmaf_video: verifies Legacy→CMAF conversion with correct init_data, timestamps, payloads, and keyframe flags - cmaf_to_legacy_video: verifies CMAF→Legacy conversion - roundtrip_legacy_cmaf_legacy: verifies full roundtrip preserves data - cmaf_passthrough: CMAF tracks pass through Fmp4 byte-identically - legacy_passthrough: Legacy tracks pass through Hang unchanged Uses a concurrent test pattern matching real usage: catalog is written first so the converter can start, then media data is written after the converter has subscribed to tracks. Make build_video_init, build_audio_init, and build_moof_mdat pub(crate) so tests can reuse them. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rs/moq-mux/src/convert/fmp4.rs`:
- Around line 499-504: The function guess_video_timescale currently trusts
config.framerate and will return 0 or invalid values for Some(0.0) or non-finite
numbers; update guess_video_timescale to validate the optional framerate by
checking that fps.is_finite() and fps > 0.0 before using (fps * 1000.0) as u64,
and otherwise return the fallback 90000; reference the VideoConfig.framerate
field and the guess_video_timescale function when making this change.
- Around line 434-459: In build_audio_codec, validate config.sample_rate before
casting to u16 and ensure it is one of the standard AAC frequencies (the table
used to compute freq_index); if sample_rate > u16::MAX or not in the standard
table, return an Err explaining unsupported sample rate instead of silently
truncating or using the reserved freq_index 0xF. Specifically, check in
build_audio_codec (and where mp4_atom::FixedPoint is created) that sample_rate
fits in u16 and that the freq_index lookup does not fall through to 0xF, and
return an error when the explicit 24-bit frequency form would be required (since
mp4-atom v0.10.0 does not support it).
In `@rs/moq-mux/src/convert/test.rs`:
- Around line 200-235: Add tokio::time::pause() at the very start of each async
test function (legacy_to_cmaf_video, cmaf_to_legacy_video,
roundtrip_legacy_cmaf_legacy, cmaf_passthrough, legacy_passthrough) so the
timeout-based helpers (e.g., read_cmaf_raw_frames and read_legacy_frames which
call tokio::time::timeout) run deterministically under the Tokio test scheduler;
insert the call as the first statement in each async test before creating
broadcasts/consumers and, if the test needs to progress simulated time, use
tokio::time::advance(...) appropriately later in the test.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 40c3d86c-5367-4eea-a5b9-13ca8fe3c3b0
📒 Files selected for processing (3)
- rs/moq-mux/src/convert/fmp4.rs
- rs/moq-mux/src/convert/mod.rs
- rs/moq-mux/src/convert/test.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- rs/moq-mux/src/convert/mod.rs
```rust
fn build_audio_codec(config: &AudioConfig) -> anyhow::Result<mp4_atom::Codec> {
    let audio = mp4_atom::Audio {
        data_reference_index: 1,
        channel_count: config.channel_count as u16,
        sample_size: 16,
        sample_rate: mp4_atom::FixedPoint::new(config.sample_rate as u16, 0),
    };

    match &config.codec {
        AudioCodec::AAC(aac) => {
            let freq_index: u8 = match config.sample_rate {
                96000 => 0,
                88200 => 1,
                64000 => 2,
                48000 => 3,
                44100 => 4,
                32000 => 5,
                24000 => 6,
                22050 => 7,
                16000 => 8,
                12000 => 9,
                11025 => 10,
                8000 => 11,
                7350 => 12,
                _ => 0xF,
            };
```
🧩 Analysis chain
🌐 Web query:
For mp4-atom v0.10.0, how should `mp4_atom::Audio.sample_rate` and AAC `freq_index=0xF` be encoded in `esds`? Specifically, can sample rates above 65535 (e.g. 96000) be represented with `FixedPoint::new`, and if not, what is the correct encoding strategy?
💡 Result:
- `mp4_atom::Audio.sample_rate` (in the `mp4a` sample entry) is `FixedPoint<u16>` — i.e., it stores the integer and fractional halves as two `u16`s (`FixedPoint { int: T, dec: T }`). So the integer part is limited to 0..=65535, and 96000 cannot be represented with `FixedPoint::new` in this type. [1][2]
- AAC in `esds` (`DecoderSpecific`, i.e., AudioSpecificConfig): `freq_index = 0xF` means "explicit sampling frequency follows" and requires writing the 24-bit samplingFrequency immediately after the index (per MPEG-4 Audio). [3]
  - But mp4-atom's `DecoderSpecific::encode` currently does not implement the `freq_index == 15` (0xF) "explicit 24-bit sample rate" case (there's even a TODO noting this). So with v0.10.0 you effectively cannot correctly encode `freq_index=0xF` using the provided `DecoderSpecific` struct. [4]
- Correct encoding strategy for 96 kHz (96000 Hz):
  - Do not use `freq_index=0xF` for 96000, because 96000 is already in the standard table: `samplingFrequencyIndex = 0x0` corresponds to 96000 Hz. Encode `freq_index = 0` (and no explicit 24-bit frequency field). [5]
  - For the `Audio.sample_rate` field, you're still constrained by the 16-bit integer range; mp4-atom can only emit what it models (the version-0 style 16.16 rate with a `u16` integer). If you truly need a container-level rate > 65535, you'll need to extend/fork the crate to support an alternate sample-entry form that can carry it (and separately implement the `freq_index=0xF` + 24-bit frequency emission in `esds` if you need non-table rates). [1][4]
Sample rate validation required to prevent silent truncation and malformed AAC metadata.
config.sample_rate as u16 can silently truncate sample rates above 65535. More critically, unsupported sample rates fall back to freq_index=0xF, but mp4-atom v0.10.0 does not implement the required explicit 24-bit frequency encoding for this case. Validate or reject sample rates that exceed the u16 limit or are not in the standard frequency table.
Suggested hardening
```diff
 fn build_audio_codec(config: &AudioConfig) -> anyhow::Result<mp4_atom::Codec> {
+    let sample_rate_u16 = u16::try_from(config.sample_rate)
+        .context("unsupported audio sample rate for mp4a sample entry (>65535)")?;
+
     let audio = mp4_atom::Audio {
         data_reference_index: 1,
         channel_count: config.channel_count as u16,
         sample_size: 16,
-        sample_rate: mp4_atom::FixedPoint::new(config.sample_rate as u16, 0),
+        sample_rate: mp4_atom::FixedPoint::new(sample_rate_u16, 0),
     };

     match &config.codec {
         AudioCodec::AAC(aac) => {
             let freq_index: u8 = match config.sample_rate {
                 96000 => 0,
                 88200 => 1,
                 64000 => 2,
                 48000 => 3,
                 44100 => 4,
                 32000 => 5,
                 24000 => 6,
                 22050 => 7,
                 16000 => 8,
                 12000 => 9,
                 11025 => 10,
                 8000 => 11,
                 7350 => 12,
-                _ => 0xF,
+                _ => anyhow::bail!("unsupported AAC sample rate: {}", config.sample_rate),
             };
```
Verify each finding against the current code and only fix it if needed.
In `@rs/moq-mux/src/convert/fmp4.rs` around lines 434 - 459, In build_audio_codec,
validate config.sample_rate before casting to u16 and ensure it is one of the
standard AAC frequencies (the table used to compute freq_index); if sample_rate
> u16::MAX or not in the standard table, return an Err explaining unsupported
sample rate instead of silently truncating or using the reserved freq_index 0xF.
Specifically, check in build_audio_codec (and where mp4_atom::FixedPoint is
created) that sample_rate fits in u16 and that the freq_index lookup does not
fall through to 0xF, and return an error when the explicit 24-bit frequency form
would be required (since mp4-atom v0.10.0 does not support it).
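The hardening above amounts to a total lookup into the standard MPEG-4 Audio sampling-frequency table, rejecting anything that would need the explicit 24-bit form. A standalone sketch (error type simplified to `String` for illustration; the real code would use `anyhow`):

```rust
/// Map a sample rate to its MPEG-4 Audio samplingFrequencyIndex,
/// rejecting rates that would require the explicit 24-bit form (index 0xF).
fn aac_freq_index(sample_rate: u32) -> Result<u8, String> {
    // Standard table: index i corresponds to TABLE[i] Hz.
    const TABLE: [u32; 13] = [
        96000, 88200, 64000, 48000, 44100, 32000, 24000, 22050, 16000, 12000, 11025, 8000, 7350,
    ];
    TABLE
        .iter()
        .position(|&r| r == sample_rate)
        .map(|i| i as u8)
        .ok_or_else(|| format!("unsupported AAC sample rate: {}", sample_rate))
}

fn main() {
    assert_eq!(aac_freq_index(96000), Ok(0));
    assert_eq!(aac_freq_index(48000), Ok(3));
    assert_eq!(aac_freq_index(7350), Ok(12));
    assert!(aac_freq_index(96001).is_err());
    println!("ok");
}
```

Because 96000 is already in the table (index 0), high-rate content never needs the 0xF escape that mp4-atom v0.10.0 cannot encode.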
```rust
fn guess_video_timescale(config: &VideoConfig) -> u64 {
    if let Some(fps) = config.framerate {
        (fps * 1000.0) as u64
    } else {
        90000
    }
```
Guard invalid framerate before deriving timescale.
Some(0.0)/non-finite framerate yields a 0 or nonsensical timescale. Add a validity check and fall back to 90000.
Suggested fix
```diff
 fn guess_video_timescale(config: &VideoConfig) -> u64 {
-    if let Some(fps) = config.framerate {
-        (fps * 1000.0) as u64
+    if let Some(fps) = config.framerate.filter(|f| f.is_finite() && *f > 0.0) {
+        (fps * 1000.0).round() as u64
     } else {
         90000
     }
 }
```
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@rs/moq-mux/src/convert/fmp4.rs` around lines 499 - 504, The function
guess_video_timescale currently trusts config.framerate and will return 0 or
invalid values for Some(0.0) or non-finite numbers; update guess_video_timescale
to validate the optional framerate by checking that fps.is_finite() and fps >
0.0 before using (fps * 1000.0) as u64, and otherwise return the fallback 90000;
reference the VideoConfig.framerate field and the guess_video_timescale function
when making this change.
```rust
    while let Some(frame) = tokio::time::timeout(Duration::from_millis(500), ordered.read())
        .await
        .expect("read_legacy_frames timed out")
        .expect("read_legacy_frames error")
    {
        let is_keyframe = frame.is_keyframe();
        let timestamp = frame.timestamp;
        let payload: Vec<u8> = frame.payload.into_iter().flat_map(|c| c.into_iter()).collect();
        result.push((timestamp, payload, is_keyframe));
    }
    result
}

/// Read all raw CMAF frames (moof+mdat bytes) from a broadcast's video track.
async fn read_cmaf_raw_frames(broadcast: &moq_lite::BroadcastProducer) -> Vec<Bytes> {
    let consumer = broadcast.consume();
    let mut track = consumer
        .subscribe_track(&moq_lite::Track {
            name: "video".to_string(),
            priority: 1,
        })
        .unwrap();

    let mut result = Vec::new();
    while let Some(group) = tokio::time::timeout(Duration::from_millis(500), track.next_group())
        .await
        .expect("read_cmaf_raw_frames timed out")
        .expect("read_cmaf_raw_frames error")
    {
        let mut reader = group;
        while let Some(data) =
            tokio::time::timeout(Duration::from_millis(500), reader.read_frame())
                .await
                .expect("read_cmaf_raw_frames timed out on frame")
                .expect("read_cmaf_raw_frames frame error")
        {
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify async tests with timeout usage and missing tokio::time::pause in this file.
rg -nP --type=rust '^\s*#\[tokio::test\]|tokio::time::pause\(|tokio::time::timeout\(' rs/moq-mux/src/convert/test.rs
Repository: moq-dev/moq
Length of output: 418
🏁 Script executed:
#!/bin/bash
# Get all test functions and check for tokio::time::pause() usage
cat -n rs/moq-mux/src/convert/test.rs | sed -n '240,440p'
Repository: moq-dev/moq
Length of output: 8311
🏁 Script executed:
#!/bin/bash
# Confirm no tokio::time::pause() exists anywhere in the file
rg -n 'tokio::time::pause' rs/moq-mux/src/convert/test.rs
Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
#!/bin/bash
# Show the helper functions that contain timeout calls
sed -n '170,240p' rs/moq-mux/src/convert/test.rs | cat -n
Repository: moq-dev/moq
Length of output: 2863
Add tokio::time::pause() at the start of all async tests to make timeout-based control flow deterministic.
All five async test functions (legacy_to_cmaf_video, cmaf_to_legacy_video, roundtrip_legacy_cmaf_legacy, cmaf_passthrough, legacy_passthrough) call helpers that use tokio::time::timeout(Duration::from_millis(500), ...). Without paused time, these timeouts are real-time dependent and create flakiness under scheduler variance.
Suggested pattern (apply to all async tests)
```diff
 #[tokio::test]
 async fn legacy_to_cmaf_video() {
+    tokio::time::pause();
     let config = test_video_config();
     ...
 }
 #[tokio::test]
 async fn cmaf_to_legacy_video() {
+    tokio::time::pause();
     ...
 }
 #[tokio::test]
 async fn roundtrip_legacy_cmaf_legacy() {
+    tokio::time::pause();
     ...
 }
 #[tokio::test]
 async fn cmaf_passthrough() {
+    tokio::time::pause();
     ...
 }
 #[tokio::test]
 async fn legacy_passthrough() {
+    tokio::time::pause();
     ...
 }
```
Verify each finding against the current code and only fix it if needed.
In `@rs/moq-mux/src/convert/test.rs` around lines 200 - 235, Add
tokio::time::pause() at the very start of each async test function
(legacy_to_cmaf_video, cmaf_to_legacy_video, roundtrip_legacy_cmaf_legacy,
cmaf_passthrough, legacy_passthrough) so the timeout-based helpers (e.g.,
read_cmaf_raw_frames and read_legacy_frames which call tokio::time::timeout) run
deterministically under the Tokio test scheduler; insert the call as the first
statement in each async test before creating broadcasts/consumers and, if the
test needs to progress simulated time, use tokio::time::advance(...)
appropriately later in the test.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rs/moq-mux/src/convert/fmp4.rs`:
- Around line 153-177: Replace the incorrect reference to
crate::consumer::OrderedConsumer and its wrong constructor usage: instantiate
hang::container::OrderedConsumer instead of crate::consumer::OrderedConsumer and
call its new() with only (input, std::time::Duration::MAX) matching the
signature (TrackConsumer, Duration); update the call site that currently passes
a third arg (crate::consumer::Legacy) to remove that argument; apply the same
change in the parallel test file where OrderedConsumer is constructed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 381f7895-5b15-4d3e-9f31-0dca9957d345
📒 Files selected for processing (3)
- rs/moq-mux/src/convert/fmp4.rs
- rs/moq-mux/src/convert/mod.rs
- rs/moq-mux/src/convert/test.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- rs/moq-mux/src/convert/test.rs
```rust
    let mut consumer = crate::consumer::OrderedConsumer::new(input, crate::consumer::Legacy, std::time::Duration::MAX);
    let mut seq: u32 = 1;
    let mut current_group: Option<moq_lite::GroupProducer> = None;

    while let Some(frame) = consumer.read().await? {
        let keyframe = frame.is_keyframe();

        if is_video && keyframe {
            if let Some(mut prev) = current_group.take() {
                prev.finish()?;
            }
            current_group = Some(output.append_group()?);
        } else if current_group.is_none() {
            current_group = Some(output.append_group()?);
        }

        let group = current_group.as_mut().unwrap();

        let payload: Vec<u8> = frame.payload.into_iter().flat_map(|chunk| chunk.into_iter()).collect();
        let dts = frame.timestamp.as_micros() as u64 * timescale / 1_000_000;
        let moof_mdat = build_moof_mdat(seq, 1, dts, &payload, keyframe)?;
        seq += 1;

        group.write_frame(moof_mdat)?;
    }
```
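One note on the `dts` rescale above (`frame.timestamp.as_micros() as u64 * timescale / 1_000_000`): multiplying first in `u64` can overflow for very long streams at large timescales. A widening intermediate avoids that; the helper name below is illustrative, not from the PR:

```rust
/// Rescale a timestamp in microseconds to a media timescale.
/// The u128 intermediate keeps `us * timescale` from overflowing u64
/// before the division back down.
fn rescale_us(us: u64, timescale: u64) -> u64 {
    ((us as u128 * timescale as u128) / 1_000_000) as u64
}

fn main() {
    // 1 second at a 90 kHz timescale -> 90000 ticks.
    assert_eq!(rescale_us(1_000_000, 90_000), 90_000);
    // 33.333 ms at a 30000 (fps * 1000) timescale -> 999 ticks (truncating).
    assert_eq!(rescale_us(33_333, 30_000), 999);
    println!("ok");
}
```

Truncating division also accumulates up to one tick of error per frame; rescaling from the original timestamp each time (as the loop above does) rather than summing deltas keeps that bounded.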
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if consumer module exists and is exported
echo "=== Checking lib.rs for consumer module ==="
rg -n "mod consumer|pub mod consumer" rs/moq-mux/src/lib.rs
echo -e "\n=== Searching for consumer module definition ==="
fd -t f "consumer" rs/moq-mux/src/
echo -e "\n=== Checking for OrderedConsumer definition ==="
rg -n "struct OrderedConsumer|pub struct OrderedConsumer" rs/moq-mux/src/
Repository: moq-dev/moq
Length of output: 197
🏁 Script executed:
# Verify the file exists and check lib.rs structure
echo "=== Checking if fmp4.rs exists ==="
ls -la rs/moq-mux/src/convert/fmp4.rs 2>/dev/null || echo "File not found"
echo -e "\n=== Contents of lib.rs ==="
cat -n rs/moq-mux/src/lib.rs
echo -e "\n=== Directory structure of rs/moq-mux/src ==="
find rs/moq-mux/src -type f -name "*.rs" | head -20
Repository: moq-dev/moq
Length of output: 1094
🏁 Script executed:
# Read the actual code at lines 153-177
echo "=== Code at lines 153-177 of fmp4.rs ==="
sed -n '153,177p' rs/moq-mux/src/convert/fmp4.rs
# Search for consumer module or OrderedConsumer across the entire codebase
echo -e "\n=== Searching for 'consumer' across codebase ==="
rg -r "consumer" --type rs | head -30
echo -e "\n=== Searching for 'OrderedConsumer' ==="
rg "OrderedConsumer" --type rs
Repository: moq-dev/moq
Length of output: 1501
🏁 Script executed:
# Fix the rg search - search for "consumer" pattern without -r flag
echo "=== Searching for 'consumer' in .rs files ==="
rg "consumer" rs/ | head -50
echo -e "\n=== Searching for 'OrderedConsumer' ==="
rg "OrderedConsumer" rs/
echo -e "\n=== Checking import module in moq-mux ==="
rg "mod import|pub mod import" rs/moq-mux/src/ -A 2
Repository: moq-dev/moq
Length of output: 10281
🏁 Script executed:
# Search for Legacy definition
echo "=== Searching for 'Legacy' constant/enum ==="
rg "Legacy" rs/ -B 2 -A 2 | head -80
Repository: moq-dev/moq
Length of output: 3822
🏁 Script executed:
# Check the actual OrderedConsumer signature in hang crate
echo "=== OrderedConsumer::new signature ==="
rg -A 5 "pub fn new" rs/hang/src/container/consumer.rs | head -20
# Check if there's a consumer module definition anywhere that might be expected
echo -e "\n=== Check for any consumer module pattern in moq-mux ==="
grep -r "consumer" rs/moq-mux/src/convert/ --include="*.rs"
Repository: moq-dev/moq
Length of output: 2056
Fix compilation error: incorrect module path and function signature for OrderedConsumer.
The code references crate::consumer::OrderedConsumer, which doesn't exist in the moq-mux crate. The correct type is hang::container::OrderedConsumer. Additionally, OrderedConsumer::new() takes two parameters—a TrackConsumer and a Duration—but the code incorrectly passes three arguments including a non-existent crate::consumer::Legacy.
Change line 153 from:
let mut consumer = crate::consumer::OrderedConsumer::new(input, crate::consumer::Legacy, std::time::Duration::MAX);
to:
let mut consumer = hang::container::OrderedConsumer::new(input, std::time::Duration::MAX);
This same error exists in rs/moq-mux/src/convert/test.rs and should be fixed there as well.
🧰 Tools
🪛 GitHub Actions: Check
[error] 153-153: failed to resolve: could not find consumer in the crate root
[error] 158-158: failed to resolve: could not find consumer in the crate root
[error] 158-158: E0282: type annotations needed for is_keyframe() return value
[error] 171-171: E0282: type annotations needed for closure parameter or return type in payload collection
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@rs/moq-mux/src/convert/fmp4.rs` around lines 153 - 177, Replace the incorrect
reference to crate::consumer::OrderedConsumer and its wrong constructor usage:
instantiate hang::container::OrderedConsumer instead of
crate::consumer::OrderedConsumer and call its new() with only (input,
std::time::Duration::MAX) matching the signature (TrackConsumer, Duration);
update the call site that currently passes a third arg (crate::consumer::Legacy)
to remove that argument; apply the same change in the parallel test file where
OrderedConsumer is constructed.
- Rename moq-mux import/ → producer/, export/ → consumer/ for clarity - Add consumer::OrderedConsumer with timestamp reordering and latency management, moved from hang crate into moq-mux - Add consumer::ContainerFormat trait (Legacy, Cmaf) for format-aware frame parsing - Add consumer::Fmp4 muxer for multi-track fMP4 export - Move conducer dependency from hang to moq-mux - Update CLI subscribe command to use new consumer module - Add dev test media files (bbb/tos in mp4/fmp4 formats) - Update hang, libmoq, moq-ffi to use new module paths Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (1)
rs/moq-mux/src/producer/fmp4.rs (1)
562-588: ⚠️ Potential issue | 🔴 Critical
Per-track fragment rewriting still corrupts shifted or multi-run `traf`s.
Line 563 still takes `track_data_start` from the raw `base_data_offset`, while `track_data_end` comes from the adjusted `offset` path above. That already mis-slices any fragment that relies on `trun.data_offset`. On top of that, lines 582-585 assign the same rewritten `data_offset` to every `trun`, which is only correct when a `traf` has exactly one run. Valid multi-track or multi-`trun` fragments will emit broken `mdat` ranges here.
Please capture the adjusted start offset before consuming samples, and recompute each rewritten `trun.data_offset` relative to the new per-track `mdat` instead of forcing one shared value.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rs/moq-mux/src/producer/fmp4.rs` around lines 562 - 588, The current slicing and offset rewriting miscomputes per-traf ranges because track_data_start uses the original traf.tfhd.base_data_offset and you set a single data_offset for all truns; capture the actual start position for this traf before you consume samples (i.e., record the adjusted start offset computed from offset/path that accounts for trun.data_offset shifts), then slice track_mdat_data using that captured start and the computed track_data_end, and finally iterate adjusted_moof.traf[..].trun to recompute each trun.data_offset individually based on the cumulative byte position of that run within the new per-track mdat (use new_moof_size + mdat_header_size_new + per-run-relative-start) and clear traf.tfhd.base_data_offset only after per-trun offsets are set; update symbols: track_data_start, track_data_end, offset, traf.tfhd.base_data_offset, trun.data_offset, adjusted_moof, single_traf_moof, mdat.data.
🧹 Nitpick comments (3)
rs/moq-mux/src/consumer/mod.rs (1)
7-11: Consider explicit re-exports for a clearer public API surface.
Using `pub use X::*` for all submodules can obscure which items are actually part of the public API and may cause conflicts if any two submodules export identically-named items. Explicit re-exports (e.g., `pub use container::ContainerFormat;`) would make the API surface more intentional and easier to maintain.
That said, this is a common pattern when the internal modules form a cohesive unit, so this is a stylistic preference rather than a defect.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rs/moq-mux/src/consumer/mod.rs` around lines 7 - 11, Replace the blanket re-exports (pub use container::*; pub use fmp4::*; pub use frame::*; pub use muxer::*; pub use ordered::*;) with explicit public exports of the specific types/functions you intend to expose (for example, pub use container::ContainerFormat; pub use fmp4::Fmp4Segment; pub use frame::Frame; pub use muxer::Muxer; pub use ordered::OrderedFrames;), by auditing each submodule (container, fmp4, frame, muxer, ordered) for the public items that should form the crate's public API and exporting only those symbols to avoid name collisions and make the surface intentional.
rs/moq-mux/src/consumer/muxer.rs (1)
77-85: Consider a more idiomatic min-finding pattern.
The current loop manually tracks `min_idx` and `min_ts` with an `unwrap()` that's safe but less idiomatic. Rust's iterator methods could simplify this:
♻️ Suggested refactor using iterator min_by_key

```diff
-    // Find minimum timestamp across pending frames
-    let mut min_idx = None;
-    let mut min_ts = None;
-
-    for (i, track) in self.tracks.iter().enumerate() {
-        if let Some(frame) = &track.pending {
-            let ts: std::time::Duration = frame.timestamp.into();
-            if min_ts.is_none() || ts < min_ts.unwrap() {
-                min_ts = Some(ts);
-                min_idx = Some(i);
-            }
-        }
-    }
+    // Find minimum timestamp across pending frames
+    let min_idx = self
+        .tracks
+        .iter()
+        .enumerate()
+        .filter_map(|(i, track)| {
+            track.pending.as_ref().map(|frame| {
+                let ts: std::time::Duration = frame.timestamp.into();
+                (i, ts)
+            })
+        })
+        .min_by_key(|(_, ts)| *ts)
+        .map(|(i, _)| i);
```
Verify each finding against the current code and only fix it if needed. In `@rs/moq-mux/src/consumer/muxer.rs` around lines 77 - 85, The loop that computes min_idx/min_ts by iterating self.tracks and unwrapping min_ts is non-idiomatic; replace it with an iterator-based min search: iterate self.tracks.iter().enumerate(), filter or map to (index, timestamp) only for tracks with track.pending, convert the timestamp via frame.timestamp.into(), and use min_by_key (or min_by with Duration) to find the smallest timestamp and its index, then set min_idx/min_ts from that result; update any uses of min_ts/min_idx accordingly (functions/variables: self.tracks, track.pending, frame.timestamp, min_ts, min_idx).
rs/hang/src/container/consumer.rs (1)
1-2: Consider removing this empty module file.

Since `rs/hang/src/container/mod.rs` doesn't declare or re-export the `consumer` module (it only exports `frame::*` and `producer::*`), this file is never compiled. The relocation comment is helpful for code archaeology, but the file could be deleted entirely—git history preserves the context. Alternatively, if kept, consider adding a compile-time error to prevent accidental direct imports.
Verify each finding against the current code and only fix it if needed. In `@rs/hang/src/container/consumer.rs` around lines 1 - 2, This empty consumer module (referencing OrderedConsumer and moq_mux::consumer) is unused and should be removed or made explicit: either delete the consumer module file entirely, or replace its contents with a compile-time guard such as compile_error!("consumer module moved to moq_mux::consumer; import from moq_mux::consumer instead") to prevent accidental direct imports; locate the consumer module file and apply one of these two changes so callers must use moq_mux::consumer (note container::mod does not re-export consumer).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@rs/moq-cli/src/subscribe.rs`:
- Around line 71-75: The init written by exporter.init() renumbers
tkhd/trex.track_id but the code writes source moof+mdat fragments unchanged
(stdout.write_all), causing mismatched track IDs; fix by choosing one: (A)
produce a non-renumbering init from moq_mux::consumer::Fmp4 (add an option or
new method that preserves original track IDs and use that instead of
exporter.init()), or (B) remap outgoing fragments to match the init’s IDs before
writing (after creating exporter via Fmp4::new, retrieve the init→fragment
track_id mapping and, for each source moof, parse and rewrite tfhd.track_id
values to the exporter’s assigned IDs prior to stdout.write_all). Reference
Fmp4::new, Fmp4::init, exporter.init(), stdout.write_all, and the tfhd/moof+mdat
boxes when implementing the mapping or adding the non-renumbering init path.
In `@rs/moq-mux/src/consumer/ordered.rs`:
- Around line 110-149: The skip logic currently uses min_timestamp from the
first ready newer group which yields zero span when only one newer group exists;
change it to compare the newest pending timestamp against the oldest reference
timestamp instead: obtain an oldest_timestamp by polling the appropriate
timestamp for the oldest relevant sequence (e.g. call poll_max_timestamp or
poll_min_timestamp on self.pending.front() or the group representing
self.current) and use that as the lower bound when computing
max_timestamp.saturating_sub(oldest_timestamp) >= self.max_latency; keep using
self.pending.drain(0..new_idx) and update self.current as before, and update
references to poll_min_timestamp, poll_max_timestamp, self.pending,
self.current, and self.max_latency; also add a two-group regression test that
covers the single-stalled-group and single-missing-sequence-near-EOF cases.
---
Duplicate comments:
In `@rs/moq-mux/src/producer/fmp4.rs`:
- Around line 562-588: The current slicing and offset rewriting miscomputes
per-traf ranges because track_data_start uses the original
traf.tfhd.base_data_offset and you set a single data_offset for all truns;
capture the actual start position for this traf before you consume samples
(i.e., record the adjusted start offset computed from offset/path that accounts
for trun.data_offset shifts), then slice track_mdat_data using that captured
start and the computed track_data_end, and finally iterate
adjusted_moof.traf[..].trun to recompute each trun.data_offset individually
based on the cumulative byte position of that run within the new per-track mdat
(use new_moof_size + mdat_header_size_new + per-run-relative-start) and clear
traf.tfhd.base_data_offset only after per-trun offsets are set; update symbols:
track_data_start, track_data_end, offset, traf.tfhd.base_data_offset,
trun.data_offset, adjusted_moof, single_traf_moof, mdat.data.
---
Nitpick comments:
In `@rs/hang/src/container/consumer.rs`:
- Around line 1-2: This empty consumer module (referencing OrderedConsumer and
moq_mux::consumer) is unused and should be removed or made explicit: either
delete the consumer module file entirely, or replace its contents with a
compile-time guard such as compile_error!("consumer module moved to
moq_mux::consumer; import from moq_mux::consumer instead") to prevent accidental
direct imports; locate the consumer module file and apply one of these two
changes so callers must use moq_mux::consumer (note container::mod does not
re-export consumer).
In `@rs/moq-mux/src/consumer/mod.rs`:
- Around line 7-11: Replace the blanket re-exports (pub use container::*; pub
use fmp4::*; pub use frame::*; pub use muxer::*; pub use ordered::*;) with
explicit public exports of the specific types/functions you intend to expose
(for example, pub use container::ContainerFormat; pub use fmp4::Fmp4Segment; pub
use frame::Frame; pub use muxer::Muxer; pub use ordered::OrderedFrames;), by
auditing each submodule (container, fmp4, frame, muxer, ordered) for the public
items that should form the crate's public API and exporting only those symbols
to avoid name collisions and make the surface intentional.
In `@rs/moq-mux/src/consumer/muxer.rs`:
- Around line 77-85: The loop that computes min_idx/min_ts by iterating
self.tracks and unwrapping min_ts is non-idiomatic; replace it with an
iterator-based min search: iterate self.tracks.iter().enumerate(), filter or map
to (index, timestamp) only for tracks with track.pending, convert the timestamp
via frame.timestamp.into(), and use min_by_key (or min_by with Duration) to find
the smallest timestamp and its index, then set min_idx/min_ts from that result;
update any uses of min_ts/min_idx accordingly (functions/variables: self.tracks,
track.pending, frame.timestamp, min_ts, min_idx).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 74a0ea50-a905-4f62-87c0-11eb0b46ced6
⛔ Files ignored due to path filters (4)
- `Cargo.lock` is excluded by `!**/*.lock`
- `rs/moq-mux/src/producer/test/av1.mp4` is excluded by `!**/*.mp4`
- `rs/moq-mux/src/producer/test/bbb.mp4` is excluded by `!**/*.mp4`
- `rs/moq-mux/src/producer/test/vp9.mp4` is excluded by `!**/*.mp4`
📒 Files selected for processing (33)
- `rs/hang/Cargo.toml`
- `rs/hang/examples/subscribe.rs`
- `rs/hang/src/container/consumer.rs`
- `rs/hang/src/container/mod.rs`
- `rs/hang/src/container/producer.rs`
- `rs/libmoq/src/consume.rs`
- `rs/libmoq/src/error.rs`
- `rs/libmoq/src/publish.rs`
- `rs/moq-cli/Cargo.toml`
- `rs/moq-cli/src/publish.rs`
- `rs/moq-cli/src/subscribe.rs`
- `rs/moq-ffi/src/consumer.rs`
- `rs/moq-ffi/src/producer.rs`
- `rs/moq-mux/Cargo.toml`
- `rs/moq-mux/src/consumer/container.rs`
- `rs/moq-mux/src/consumer/fmp4.rs`
- `rs/moq-mux/src/consumer/frame.rs`
- `rs/moq-mux/src/consumer/mod.rs`
- `rs/moq-mux/src/consumer/muxer.rs`
- `rs/moq-mux/src/consumer/ordered.rs`
- `rs/moq-mux/src/convert/test.rs`
- `rs/moq-mux/src/lib.rs`
- `rs/moq-mux/src/producer/aac.rs`
- `rs/moq-mux/src/producer/annexb.rs`
- `rs/moq-mux/src/producer/av01.rs`
- `rs/moq-mux/src/producer/avc3.rs`
- `rs/moq-mux/src/producer/decoder.rs`
- `rs/moq-mux/src/producer/fmp4.rs`
- `rs/moq-mux/src/producer/hev1.rs`
- `rs/moq-mux/src/producer/hls.rs`
- `rs/moq-mux/src/producer/mod.rs`
- `rs/moq-mux/src/producer/opus.rs`
- `rs/moq-mux/src/producer/test/mod.rs`
💤 Files with no reviewable changes (1)
- rs/hang/src/container/mod.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- rs/moq-mux/src/convert/test.rs
- rs/moq-mux/src/lib.rs
rs/moq-cli/src/subscribe.rs
Outdated
```rust
let exporter = moq_mux::consumer::Fmp4::new(&catalog)?;

// Write init segment (merged multi-track moov)
let init = exporter.init(&catalog)?;
stdout.write_all(&init).await?;
```
The init segment can advertise different track_ids than the fragments you stream.
moq_mux::consumer::Fmp4::init() renumbers tkhd/trex.track_ids to its own sequential IDs, but this path writes the source CMAF moof+mdat bytes unchanged. That only works by accident when the source tfhd.track_ids already match the exporter’s numbering. As soon as the input IDs or ordering differ, the merged init and emitted fragments disagree and the stdout MP4 becomes invalid.
Either build a non-renumbering init for passthrough output, or rewrite each outgoing fragment to the same IDs used in the merged init.
Also applies to: 120-125
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@rs/moq-cli/src/subscribe.rs` around lines 71 - 75, The init written by
exporter.init() renumbers tkhd/trex.track_id but the code writes source
moof+mdat fragments unchanged (stdout.write_all), causing mismatched track IDs;
fix by choosing one: (A) produce a non-renumbering init from
moq_mux::consumer::Fmp4 (add an option or new method that preserves original
track IDs and use that instead of exporter.init()), or (B) remap outgoing
fragments to match the init’s IDs before writing (after creating exporter via
Fmp4::new, retrieve the init→fragment track_id mapping and, for each source
moof, parse and rewrite tfhd.track_id values to the exporter’s assigned IDs
prior to stdout.write_all). Reference Fmp4::new, Fmp4::init, exporter.init(),
stdout.write_all, and the tfhd/moof+mdat boxes when implementing the mapping or
adding the non-renumbering init path.
```rust
let dts = frame.timestamp.as_micros() as u64 * track.timescale / 1_000_000;
let payload: Vec<u8> = frame.payload.clone().into_iter().flat_map(|c| c.into_iter()).collect();
let keyframe = frame.is_keyframe();

let flags = if keyframe { 0x0200_0000 } else { 0x0001_0000 };

let seq = track.sequence_number;
track.sequence_number += 1;

// First pass to get moof size (use Some(0) so trun includes the data_offset field)
let moof = build_moof(seq, track.track_id, dts, payload.len() as u32, flags, Some(0));
let mut buf = Vec::new();
moof.encode(&mut buf)?;
let moof_size = buf.len();

// Second pass with data_offset
let data_offset = (moof_size + 8) as i32;
let moof = build_moof(seq, track.track_id, dts, payload.len() as u32, flags, Some(data_offset));
```
B-frames lose their composition timing here.
frame.timestamp is presentation time in the current pipeline (rs/moq-mux/src/producer/fmp4.rs derives it from dts + cts), but frame() writes it straight into tfdt.base_media_decode_time and never emits a composition offset in the trun. The ordered-consumer tests already cover non-monotonic timestamps within a group, so this path will serialize valid B-frame streams with incorrect MP4 timing.
Please carry DTS/CTS separately, or reject non-monotonic timestamps here until the exporter can encode composition offsets.
- consumer/fmp4: Preserve original track IDs in merged init segment instead of renumbering, so they match CMAF passthrough fragments - consumer/ordered: Compare newer group timestamps against the current group (not just across newer groups) so a single pending group can trigger a latency skip; also handle sequence gaps when track is done - producer/fmp4: Capture actual traf data start offset instead of using base_data_offset; compute per-trun data_offset based on cumulative sample sizes instead of a single offset for all truns - Delete orphan hang/src/container/consumer.rs (not referenced by mod.rs) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Rename producer::Decoder → Framed, StreamDecoder → Stream (with type aliases) - Converters take (BroadcastConsumer, BroadcastProducer) pair, run() returns () - Merge client.rs/server.rs into publish.rs/subscribe.rs - Add InputFormat::Hls (reads playlist URL from stdin) - Add --export flag for format conversion on publish - Subscribe always uses convert::Fmp4 (no-op for CMAF tracks) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Combine ServerAction/ClientAction into single Action enum - Move connection setup from Publish/Subscribe into main.rs - Extract run_client(), wait_broadcast(), run_publish_server(), run_accept() helpers - Fix subscribe pipeline: run converter concurrently with muxer via tokio::select! - Deprecate old type aliases (DecoderFormat, StreamDecoder, Decoder) in moq-mux - Add deprecated hang::container::OrderedConsumer stub for migration warnings - Update libmoq/moq-ffi to use new type names (Framed, FramedFormat) - Update doc example to use moq_mux::consumer::OrderedConsumer Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- Remove `Fmp4Config`, `passthrough` flag, and `Fmp4Producer::Ordered` variant. fMP4 import now always produces `Container::Cmaf` with moof+mdat frames.
- Add `moq-mux::convert` module: `convert::Fmp4` (any→CMAF) and `convert::Hang` (any→Legacy) converters that operate broadcast-to-broadcast with no latency or HOL blocking.
- Add `moq-mux::export` module: `export::Fmp4` exporter with `init()`/`frame()` methods for encoding ordered frames as fMP4 fragments.
- Restructure CLI: `server`/`client` × `publish`/`subscribe` instead of flat `Serve`/`Publish`. Publish uses `--input`/`--export` flat args.

Usage
Test plan
- `just check` passes
- `ffmpeg ... | moq-cli server publish --input fmp4 --name test` publishes as CMAF
- `moq-cli client subscribe --output fmp4 --url ... --name test > output.mp4` subscribes and exports

🤖 Generated with Claude Code