fix: streaming decode + chunked offline audio rendering for export #262

webadderall merged 5 commits into main from fix/export-pipeline-streaming-chunked
Conversation
…ffline rendering

Major rewrite of the audio export pipeline for stability and memory efficiency:

## Streaming Decode
- Primary decode path uses WebDemuxer + AudioDecoder (WebCodecs streaming)
- Falls back to bulk decodeAudioData if streaming fails
- Avoids holding the full compressed file + decoded PCM simultaneously

## Chunked Offline Rendering
- Processes the timeline in 30-second OfflineAudioContext chunks
- Memory bounded to ~30s of PCM per chunk regardless of recording length

## Chunked WAV Writing
- Writes PCM in ~256KB chunks instead of a single massive ArrayBuffer
- Eliminates OOM for long recordings on the native/FFmpeg export path

## Chunked Encoding
- Single AudioEncoder kept alive across all chunks (clean AAC stream)
- Proper backpressure and error propagation per chunk

## scheduleBufferThroughTimeline chunk windowing
- Clips source nodes to chunk boundaries for correct cross-chunk audio
- Backwards-compatible defaults when no chunk params are provided

## Other export fixes
- Progress cap: 99% -> 100% in both exporters and UI
- Muxing errors: propagated instead of silently swallowed
- Mac recording: enhanced warning when no audio files are available for muxing
- Export timeout: eliminated by removing real-time rendering
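The 30-second chunking described above can be sketched as a small planning helper. This is an illustration only — `planChunks` and `ChunkWindow` are hypothetical names, not the PR's actual code; only the 30-second constant comes from the description.

```typescript
// Hypothetical sketch of the chunk windowing described in the PR summary.
interface ChunkWindow {
  startSec: number;    // chunk start on the output timeline
  durationSec: number; // at most CHUNK_SEC; the last chunk may be shorter
}

const CHUNK_SEC = 30; // matches the ~30-second OfflineAudioContext chunks

function planChunks(totalDurationSec: number): ChunkWindow[] {
  const chunks: ChunkWindow[] = [];
  for (let start = 0; start < totalDurationSec; start += CHUNK_SEC) {
    chunks.push({
      startSec: start,
      durationSec: Math.min(CHUNK_SEC, totalDurationSec - start),
    });
  }
  return chunks;
}
```

Each window would then get its own short-lived `OfflineAudioContext`, which is what keeps peak PCM memory bounded regardless of recording length.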
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
| Cohort / File(s) | Summary |
|---|---|
| **Audio offline renderer** — `src/lib/exporter/audioEncoder.ts` | Replaces the HTMLMediaElement+MediaRecorder real-time flow with a chunked OfflineAudioContext pipeline: prepares timeline slices, renders fixed-duration chunks, encodes PCM→AAC incrementally, muxes chunks via VideoMuxer, and returns assembled WAV blobs. Removes media-sync helpers and adds offline-specific helpers and decoding paths. |
| **Exporter finalization & muxer error propagation** — `src/lib/exporter/modernVideoExporter.ts`, `src/lib/exporter/videoExporter.ts` | Allows "finalizing" progress to reach 100% (clamp/default 99→100). Captures muxer add-chunk errors into an encoder error, sets cancelled, and throws the stored error before audio processing/finalization to abort finalization. |
| **UI progress / editor message** — `src/components/video-editor/VideoEditor.tsx` | Finalizing percent fallback/clamp changed to use 100 as default/upper bound; replaces the hardcoded audio-processing message with localized `t("editor.export.processingAudioEdits", ...)`. |
| **Recording logging** — `electron/ipc/recording/mac.ts` | Improves the warning when no valid audio inputs exist: logs that the video will be produced without audio and includes resolved system/microphone paths (or none) in the warning text. |
| **I18n** — `src/i18n/locales/en/editor.json`, `src/i18n/locales/es/editor.json`, `src/i18n/locales/ko/editor.json`, `src/i18n/locales/nl/editor.json`, `src/i18n/locales/zh-CN/editor.json` | Adds the `export.processingAudioEdits` translation key in multiple locales for the new localized editor message. |
| **Config change** — `.coderabbit.yaml` | Changes the `base_branches` selector from `*` to a regex-style `".*"` pattern. |
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant VideoExporter as VideoExporter
    participant AudioProcessor as OfflineAudioProcessor
    participant OfflineCtx as OfflineAudioContext
    participant AudioEncoder as AudioEncoder
    participant Muxer as VideoMuxer
    VideoExporter->>AudioProcessor: start render (timeline, edits)
    loop per chunk
        AudioProcessor->>OfflineCtx: create context for chunk (sampleRate, duration)
        AudioProcessor->>OfflineCtx: schedule sources, speed, overlays
        OfflineCtx->>OfflineCtx: render -> PCM chunk
        OfflineCtx-->>AudioProcessor: return PCM chunk
        AudioProcessor->>AudioEncoder: encode PCM frames (with timestamp offset)
        AudioEncoder-->>AudioProcessor: encoded AAC chunk
        AudioProcessor->>Muxer: addAudioChunk(encoded AAC chunk)
        Muxer-->>AudioProcessor: ack / throw error (captured as encoderError)
    end
    AudioProcessor-->>VideoExporter: final WAV Blob or throw encoderError
```
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
- webadderall/Recordly#259: Modifies `electron/ipc/recording/mac.ts` muxing behavior — overlaps with the no-audio warning changes.
- webadderall/Recordly#122: Changes exporter/audio mixing and progress behavior — likely overlaps with `audioEncoder.ts` and exporter progress changes.
- webadderall/Recordly#254: Touches `electron/ipc/recording/mac.ts` and audio muxing — related to updated mac recording logging/control flow.
Suggested labels
Checked
Poem
🐇 I hopped through chunks both wide and small,
Offline I mixed them, encoded each call.
Progress climbs true—one-hundred at last!
Muxer alarms caught, so finalization won't blast.
A crunchy WAV for your bright video cast.
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title accurately summarizes the main technical change: replacing real-time audio export with a streaming decode + chunked offline rendering approach. |
| Description check | ✅ Passed | The description comprehensively documents the problem statement, detailed changes, and testing results, exceeding the repository template requirements with substantive context. |
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Comment @coderabbitai help to get the list of available commands and usage tips.
@coderabbitai review

✅ Actions performed: review triggered.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/lib/exporter/modernVideoExporter.ts (1)
**431-435: ⚠️ Potential issue | 🔴 Critical — Propagate muxing failures before finalizing.**

Line 1400 catches `addVideoChunk` failures, but the promise chain still resolves, so lines 431-435 can complete and lines 465-469 may finalize/return a corrupt export despite `encoderError` being set. Re-throw the normalized muxing error so the existing outer error path handles it.

🐛 Proposed fix

```diff
 } catch (error) {
   console.error("Muxing error:", error);
-  if (!this.encoderError) {
-    this.encoderError = error instanceof Error ? error : new Error(String(error));
-  }
+  const muxingError =
+    error instanceof Error ? error : new Error(String(error));
+  if (!this.encoderError) {
+    this.encoderError = muxingError;
+  }
   this.cancelled = true;
+  throw muxingError;
 }
});
```

Also applies to: 1400-1407, 465-469
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/exporter/modernVideoExporter.ts` around lines 431 - 435, The pending muxing promise chain can swallow addVideoChunk errors and let finalization continue; modify the code around reportFinalizingProgress/awaitWithFinalizationTimeout to re-throw a normalized muxing error when pendingMuxing rejects so the outer error path handles it: detect rejection of this.pendingMuxing (same logic used where addVideoChunk failures set encoderError), normalize/wrap the error and throw it rather than allowing execution to continue; apply the same change to the other spots mentioned (the addVideoChunk catch at addVideoChunk -> encoderError handling and the finalization block around lines 465-469) so any muxing failure surfaces instead of producing a corrupt export.
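The capture-store-rethrow pattern this comment recommends can be sketched in isolation. This is a minimal illustration, not the exporter's real API: `MuxingGate` and `guardedAddChunk` are hypothetical names; only the `encoderError`/`cancelled` fields are taken from the review.

```typescript
// Hypothetical sketch of "capture the muxing error, mark cancelled, re-throw"
// so a failed addChunk cannot silently let finalization continue.
class MuxingGate {
  encoderError: Error | null = null;
  cancelled = false;

  async guardedAddChunk(add: () => Promise<void>): Promise<void> {
    try {
      await add();
    } catch (error) {
      // Normalize non-Error throws (strings, objects) into a real Error.
      const muxingError =
        error instanceof Error ? error : new Error(String(error));
      if (!this.encoderError) this.encoderError = muxingError;
      this.cancelled = true;
      throw muxingError; // surface the failure to the outer error path
    }
  }
}
```

The key point is the final `throw`: storing the error alone is not enough, because an awaited-but-resolved promise chain would still let the export finalize a corrupt file.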
🧹 Nitpick comments (2)
src/components/video-editor/VideoEditor.tsx (1)
**4455-4458: Localize the new export status copy.**

Line 4457 adds user-facing text outside `t(...)`, unlike the surrounding export status strings.

🌐 Proposed fix

```diff
 {isRenderingAudio ? (
   <p className="mt-1 text-[11px] text-muted-foreground/70">
-    Processing audio with speed/overlay edits
+    {t(
+      "editor.exportStatus.processingAudioEdits",
+      "Processing audio with speed/overlay edits",
+    )}
   </p>
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/components/video-editor/VideoEditor.tsx` around lines 4455 - 4458, The new user-facing string "Processing audio with speed/overlay edits" is not localized; update the JSX where isRenderingAudio is used in VideoEditor.tsx to wrap that text with the existing translation helper (t(...)) like the surrounding export status strings, add a new i18n key/entry for the message in the translation files, and use the same interpolation/format style used elsewhere in this component to keep consistency with other export status texts.

src/lib/exporter/audioEncoder.ts (1)

**1151-1162: Synchronous fast-path is effectively dead code; `preload` is set after `src`.**

After `media.src = source.src;` the element's `readyState` is `HAVE_NOTHING` and `media.duration` is `NaN`, so the `Number.isFinite(media.duration) && readyState >= HAVE_METADATA` check cannot succeed without first awaiting `loadedmetadata`. The branch is unreachable in practice, and the `media.src = ""` on that path never runs. Also, `media.preload = "metadata"` is assigned after `media.src`, so it has no effect on the initial fetch (the browser has already started loading before the hint is applied).

Suggest dropping the synchronous branch and setting `preload` before `src`:

♻️ Proposed tidy-up

```diff
 const source = await resolveMediaElementSource(url);
 try {
   const media = document.createElement("video");
+  media.preload = "metadata";
   media.src = source.src;
-  media.preload = "metadata";
-
-  if (
-    Number.isFinite(media.duration) &&
-    media.readyState >= HTMLMediaElement.HAVE_METADATA
-  ) {
-    const duration = media.duration;
-    media.src = "";
-    return duration;
-  }

   return await new Promise<number>((resolve, reject) => {
```
Verify each finding against the current code and only fix it if needed. In `@src/lib/exporter/audioEncoder.ts` around lines 1151 - 1162, The synchronous fast-path that checks Number.isFinite(media.duration) && media.readyState >= HTMLMediaElement.HAVE_METADATA is effectively unreachable because preload is set after assigning media.src; fix by removing that synchronous branch, set media.preload = "metadata" before assigning media.src (the element created in this snippet), and replace the removed branch with awaiting the "loadedmetadata" event to read media.duration and then clear media.src; update references to media and source.src accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 593-617: The current branching sets sourceDurationSec to
primaryBuffer?.duration ?? 0 when mainBuffer is null, causing
buildTimelineSlices(...) to get 0 and produce empty slices for silent videos
with audioRegions; change the fallback so that whenever mainBuffer is null you
call await this.getMediaDurationSec(videoUrl) (same approach used when
hasExternalSources) instead of falling through to primaryBuffer, ensuring
sourceDurationSec uses the container duration; update the block that sets
sourceDurationSec (references: mainBuffer, hasExternalSources,
getMediaDurationSec, primaryBuffer, videoUrl) so buildTimelineSlices and
sourceTimeToOutputTime can compute correct output times for
regionEntries/audioRegions.
- Around line 985-1003: The deinterleaving logic in the audioEncoder (inside the
branch handling interleaved formats) uses dataChannels when computing the number
of samples/stride for rawToFloat32 and when indexing interleaved samples, which
is wrong for sources with >2 channels; change the stride/length to use
data.numberOfChannels (from the MediaBuffer-like `data`) so rawToFloat32 is
called with frames * data.numberOfChannels and the per-sample indexing uses
data.numberOfChannels instead of dataChannels (update references around
allocationSize, rawToFloat32, the interleaved variable and the inner loop that
assigns chData into channelChunks).
---
Outside diff comments:
In `@src/lib/exporter/modernVideoExporter.ts`:
- Around line 431-435: The pending muxing promise chain can swallow
addVideoChunk errors and let finalization continue; modify the code around
reportFinalizingProgress/awaitWithFinalizationTimeout to re-throw a normalized
muxing error when pendingMuxing rejects so the outer error path handles it:
detect rejection of this.pendingMuxing (same logic used where addVideoChunk
failures set encoderError), normalize/wrap the error and throw it rather than
allowing execution to continue; apply the same change to the other spots
mentioned (the addVideoChunk catch at addVideoChunk -> encoderError handling and
the finalization block around lines 465-469) so any muxing failure surfaces
instead of producing a corrupt export.
---
Nitpick comments:
In `@src/components/video-editor/VideoEditor.tsx`:
- Around line 4455-4458: The new user-facing string "Processing audio with
speed/overlay edits" is not localized; update the JSX where isRenderingAudio is
used in VideoEditor.tsx to wrap that text with the existing translation helper
(t(...)) like the surrounding export status strings, add a new i18n key/entry
for the message in the translation files, and use the same interpolation/format
style used elsewhere in this component to keep consistency with other export
status texts.
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 1151-1162: The synchronous fast-path that checks
Number.isFinite(media.duration) && media.readyState >=
HTMLMediaElement.HAVE_METADATA is effectively unreachable because preload is set
after assigning media.src; fix by removing that synchronous branch, set
media.preload = "metadata" before assigning media.src (the element created in
this snippet), and replace the removed branch with awaiting the "loadedmetadata"
event to read media.duration and then clear media.src; update references to
media and source.src accordingly.
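The deinterleaving item above (use the source's true channel count as the stride, not the capped output count) can be illustrated with a standalone helper. The names here are hypothetical — the point is only the indexing arithmetic.

```typescript
// Illustrative deinterleave: interleaved samples [L0 R0 C0 ... L1 R1 C1 ...]
// are split into per-channel planes. The read stride must be the SOURCE
// channel count (e.g. 6 for 5.1), even when only `outputChannels` planes
// (e.g. 2) are actually kept — using the capped count as the stride is the
// bug the review describes, and it garbles audio for >2-channel sources.
function deinterleave(
  interleaved: Float32Array,
  sourceChannels: number, // data.numberOfChannels in the review's terms
  outputChannels: number, // capped channel count actually kept
): Float32Array[] {
  const frames = Math.floor(interleaved.length / sourceChannels);
  const planes = Array.from(
    { length: outputChannels },
    () => new Float32Array(frames),
  );
  for (let frame = 0; frame < frames; frame++) {
    for (let ch = 0; ch < outputChannels; ch++) {
      // Full source stride per frame; extra source channels are skipped.
      planes[ch][frame] = interleaved[frame * sourceChannels + ch];
    }
  }
  return planes;
}
```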
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: ff606dab-89a7-4b7f-9c23-56f57995af9d
📒 Files selected for processing (5)
- electron/ipc/recording/mac.ts
- src/components/video-editor/VideoEditor.tsx
- src/lib/exporter/audioEncoder.ts
- src/lib/exporter/modernVideoExporter.ts
- src/lib/exporter/videoExporter.ts
…l, i18n

- Add encoderError check after awaiting pendingMuxing to prevent finalization of corrupt exports (both exporters)
- Remove unreachable synchronous fast-path in getMediaDurationSec
- Set preload='metadata' before src for correct browser behavior
- Localize audio processing status text with t()

…it config

- Fall back to video container duration when mainBuffer is null but audioRegions are present (prevents empty output)
- Use data.numberOfChannels as deinterleave stride instead of capped dataChannels (fixes garbled audio for 5.1+ sources)
- Fix .coderabbit.yaml: '*' is not valid regex, use '.*'
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/components/video-editor/VideoEditor.tsx (2)
**4176-4238: ⚠️ Potential issue | 🟡 Minor — Stale `?? 99` fallback contradicts the new 100% finalizing cap.**

At line 4237, `Math.round(exportFinalizingProgress ?? 99)` is used inside the `isExportFinalizing` branch. But `exportFinalizingProgress` (lines 4176–4183) is always a number when `isExportFinalizing` is true (the ternary returns `Math.min(...)`, which always returns a number). So the `?? 99` fallback is dead code and, more importantly, contradicts the stated intent of this PR that finalizing progress can now reach 100%. If the fallback ever did trigger, it would clamp the displayed percent back to 99, re-introducing the exact UX bug this PR is meant to fix.

🩹 Proposed fix

```diff
-  : isExportFinalizing
-    ? t("editor.exportStatus.finalizingPercent", "Finalizing {{percent}}%", {
-        percent: Math.round(exportFinalizingProgress ?? 99),
-      })
+  : isExportFinalizing
+    ? t("editor.exportStatus.finalizingPercent", "Finalizing {{percent}}%", {
+        percent: Math.round(exportFinalizingProgress ?? 100),
+      })
```
Verify each finding against the current code and only fix it if needed. In `@src/components/video-editor/VideoEditor.tsx` around lines 4176 - 4238, The finalizing-percent fallback of "?? 99" is dead and reintroduces the 99% clamp; update the isExportFinalizing branch in exportPercentLabel to use Math.round(exportFinalizingProgress) (or Math.round(exportFinalizingProgress ?? 100) if you want a safe default) because exportFinalizingProgress (computed above) is always a number when isExportFinalizing is true; change the expression where Math.round(exportFinalizingProgress ?? 99) is used so it no longer forces 99 and instead reflects the actual exportFinalizingProgress value.
**4455-4463: ⚠️ Potential issue | 🟡 Minor — Add the missing i18n key to all locale bundles.**

The code references `t("editor.export.processingAudioEdits", "Processing audio with speed/overlay edits")`, but this key is not defined in any locale file across the five supported languages (en, es, ko, nl, zh-CN). The fallback English string will display for all users, but translations are missing. Add `processingAudioEdits` to the `export` section in all `src/i18n/locales/*/dialogs.json` files (where similar export-related keys like `exportingFormat` and `exportComplete` already exist), or to a new `export` section in `editor.json` if preferred.
Verify each finding against the current code and only fix it if needed. In `@src/components/video-editor/VideoEditor.tsx` around lines 4455 - 4463, Add the missing i18n key referenced by the component: define "processingAudioEdits" under the "export" section for the "editor" namespace (key used is "editor.export.processingAudioEdits") in every locale bundle for en, es, ko, nl, and zh-CN; update each locale's dialogs.json (or editor.json if you prefer to centralize editor strings) to include "processingAudioEdits": "Processing audio with speed/overlay edits" (and the appropriate translations) so the t("editor.export.processingAudioEdits", ...) call resolves to localized text.
🧹 Nitpick comments (1)
src/lib/exporter/audioEncoder.ts (1)
**1237-1258: `sourceTimeToOutputTime` never returns `null`; tighten the return type.**

Every path in `sourceTimeToOutputTime` returns a number (0, a computed value, or the accumulated `outputMs`), so the declared `number | null` return type is misleading. The `?? 0` / `?? totalOutputDurationMs` fallbacks at lines 821 and 823 are therefore unreachable and obscure what a "missing mapping" actually means. Consider either returning a sentinel in a genuinely unmapped case (e.g. when `slices` is empty) or simplifying the type to `number`.

♻️ Proposed refactor

```diff
-  private sourceTimeToOutputTime(
-    sourceMs: number,
-    slices: TimelineSlice[],
-  ): number | null {
+  private sourceTimeToOutputTime(
+    sourceMs: number,
+    slices: TimelineSlice[],
+  ): number {
```

And at the call sites (lines ~821-823), drop the now-unreachable `??` fallbacks.
Verify each finding against the current code and only fix it if needed. In `@src/lib/exporter/audioEncoder.ts` around lines 1237 - 1258, The function sourceTimeToOutputTime currently always returns a numeric outputMs so update its signature to return number (not number | null), ensure it still returns 0 for empty or unmapped inputs (current logic suffices), and remove any null-coalescing fallbacks (e.g. "?? 0" or "?? totalOutputDurationMs") at its call sites that expect a nullable result; search for callers referencing sourceTimeToOutputTime and delete the unnecessary ?? expressions so they use the returned numeric value directly.
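The source-to-output time mapping discussed here can be sketched as a pure function. The `TimelineSlice` fields below are illustrative, inferred from the discussion rather than copied from the PR; the sketch shows why every path yields a number (a point inside a kept slice, inside a removed gap, or past the end).

```typescript
// Hypothetical slice shape: a kept source range and the output duration it
// occupies (which differs from the source span when speed edits apply).
interface TimelineSlice {
  sourceStartMs: number;
  sourceEndMs: number;
  outputDurationMs: number;
}

function sourceTimeToOutputTime(
  sourceMs: number,
  slices: TimelineSlice[],
): number {
  let outputMs = 0;
  for (const slice of slices) {
    if (sourceMs < slice.sourceStartMs) return outputMs; // in a removed gap
    const span = slice.sourceEndMs - slice.sourceStartMs;
    if (sourceMs < slice.sourceEndMs) {
      // Inside this slice: interpolate, honoring any speed change.
      const frac = span > 0 ? (sourceMs - slice.sourceStartMs) / span : 0;
      return outputMs + frac * slice.outputDurationMs;
    }
    outputMs += slice.outputDurationMs; // past this slice: accumulate
  }
  return outputMs; // past the last slice: total output duration
}
```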
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/components/video-editor/VideoEditor.tsx`:
- Around line 4176-4238: The finalizing-percent fallback of "?? 99" is dead and
reintroduces the 99% clamp; update the isExportFinalizing branch in
exportPercentLabel to use Math.round(exportFinalizingProgress) (or
Math.round(exportFinalizingProgress ?? 100) if you want a safe default) because
exportFinalizingProgress (computed above) is always a number when
isExportFinalizing is true; change the expression where
Math.round(exportFinalizingProgress ?? 99) is used so it no longer forces 99 and
instead reflects the actual exportFinalizingProgress value.
- Around line 4455-4463: Add the missing i18n key referenced by the component:
define "processingAudioEdits" under the "export" section for the "editor"
namespace (key used is "editor.export.processingAudioEdits") in every locale
bundle for en, es, ko, nl, and zh-CN; update each locale's dialogs.json (or
editor.json if you prefer to centralize editor strings) to include
"processingAudioEdits": "Processing audio with speed/overlay edits" (and the
appropriate translations) so the t("editor.export.processingAudioEdits", ...)
call resolves to localized text.
---
Nitpick comments:
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 1237-1258: The function sourceTimeToOutputTime currently always
returns a numeric outputMs so update its signature to return number (not number
| null), ensure it still returns 0 for empty or unmapped inputs (current logic
suffices), and remove any null-coalescing fallbacks (e.g. "?? 0" or "??
totalOutputDurationMs") at its call sites that expect a nullable result; search
for callers referencing sourceTimeToOutputTime and delete the unnecessary ??
expressions so they use the returned numeric value directly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: b68e4179-7699-4774-adb1-87256d623086
📒 Files selected for processing (4)
- src/components/video-editor/VideoEditor.tsx
- src/lib/exporter/audioEncoder.ts
- src/lib/exporter/modernVideoExporter.ts
- src/lib/exporter/videoExporter.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/lib/exporter/modernVideoExporter.ts
- src/lib/exporter/videoExporter.ts
- Change ?? 99 fallback to ?? 100 in finalizing progress label
- Add editor.export.processingAudioEdits i18n key to all 5 locales
- Tighten sourceTimeToOutputTime return type to number (was number|null)
- Remove now-unused totalOutputDurationMs param from scheduleRegionForChunk
- Remove ?? fallbacks at call sites (function always returns a number)
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/lib/exporter/audioEncoder.ts (1)
**901-916: Nit — streaming vs bulk decode produce AudioBuffers at different sample rates.**

`streamDecodeFromUrl` returns an `AudioBuffer` at `audioConfig.sampleRate` (the source rate), while `bulkDecodeFromUrl` forces `OFFLINE_AUDIO_SAMPLE_RATE` (48 kHz) via the temp `OfflineAudioContext`. Both work because the render-time `OfflineAudioContext` auto-resamples via `AudioBufferSourceNode`, but resampling at render time is more expensive than doing it once at decode for sources with many re-scheduling passes (chunked rendering + multi-chunk overlays).

Optional: pass an `OfflineAudioContext` resampling target through `streamDecodeFromUrl` so both paths land on 48 kHz before render. Safe to defer.
Verify each finding against the current code and only fix it if needed. In `@src/lib/exporter/audioEncoder.ts` around lines 901 - 916, decodeAudioFromUrl currently mixes sample rates because streamDecodeFromUrl returns an AudioBuffer at audioConfig.sampleRate while bulkDecodeFromUrl forces OFFLINE_AUDIO_SAMPLE_RATE; make them consistent by updating streamDecodeFromUrl to accept a target sample rate or an OfflineAudioContext and resample its decoded buffer to OFFLINE_AUDIO_SAMPLE_RATE before returning, or alternatively resample the buffer returned by streamDecodeFromUrl inside decodeAudioFromUrl to OFFLINE_AUDIO_SAMPLE_RATE; reference streamDecodeFromUrl, bulkDecodeFromUrl, decodeAudioFromUrl, OFFLINE_AUDIO_SAMPLE_RATE and audioConfig.sampleRate when implementing the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 1345-1377: createWavHeader currently writes 32-bit chunk sizes
which overflow for very long exports; modify createWavHeader to detect when
dataSize or (headerSize - 8 + dataSize) exceeds 0xFFFFFFFF and, in that case,
emit an RF64 header instead of standard RIFF: write "RF64" at byte 0, set the
RIFF chunk size and data chunk size fields to 0xFFFFFFFF, add a "ds64" chunk
immediately after the "fmt " chunk containing 64-bit sizes for RIFF-size and
dataSize (and sample-count if desired), and then continue with the "WAVE"/"fmt
"/"data" layout; keep the original 32-bit RIFF/WAVE header behavior when sizes
fit in uint32. Ensure you update createWavHeader's logic to branch on the
overflow condition and produce the proper RF64/ds64 fields so players that
support RF64 can read very large files.
- Around line 1153-1180: The timeout path leaves the video element's src set, so
update the Promise in audioEncoder.ts (the block using timeout, onLoaded,
onError, cleanup and media) to also clear the media.src and call media.load()
when the timeout fires; the simplest fix is to move the media.src = "" and
media.load() logic into the cleanup() function (which is used by
timeout/onLoaded/onError) so the src is always cleared regardless of which path
completes, ensuring safe idempotent calls.
---
Nitpick comments:
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 901-916: decodeAudioFromUrl currently mixes sample rates because
streamDecodeFromUrl returns an AudioBuffer at audioConfig.sampleRate while
bulkDecodeFromUrl forces OFFLINE_AUDIO_SAMPLE_RATE; make them consistent by
updating streamDecodeFromUrl to accept a target sample rate or an
OfflineAudioContext and resample its decoded buffer to OFFLINE_AUDIO_SAMPLE_RATE
before returning, or alternatively resample the buffer returned by
streamDecodeFromUrl inside decodeAudioFromUrl to OFFLINE_AUDIO_SAMPLE_RATE;
reference streamDecodeFromUrl, bulkDecodeFromUrl, decodeAudioFromUrl,
OFFLINE_AUDIO_SAMPLE_RATE and audioConfig.sampleRate when implementing the
change.
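The RIFF-vs-RF64 branch point described in the createWavHeader item above can be sketched as a size check. This is a hedged illustration: `needsRf64` is a hypothetical helper, and the 16-bit-PCM and 44-byte-header constants mirror the quoted `createWavHeader` code.

```typescript
// Plain RIFF/WAVE stores chunk sizes as uint32, so any export whose data
// chunk (or total file size) exceeds 0xFFFFFFFF bytes needs RF64: the RIFF
// sizes are pinned to 0xFFFFFFFF and real 64-bit sizes go in a "ds64" chunk.
const UINT32_MAX = 0xffffffff;
const HEADER_SIZE = 44; // standard 44-byte PCM WAV header

function needsRf64(totalFrames: number, numChannels: number): boolean {
  const bytesPerSample = 2; // 16-bit PCM, as in createWavHeader
  const dataSize = totalFrames * numChannels * bytesPerSample;
  // Overflow of either the data chunk size or the RIFF chunk size field.
  return dataSize > UINT32_MAX || HEADER_SIZE - 8 + dataSize > UINT32_MAX;
}
```

For 16-bit stereo at 48 kHz this threshold lands at roughly 6.2 hours of audio, which is why only very long recordings hit it.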
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 2a489ec9-3d1c-4d1a-89c0-f837d2e2dac8
📒 Files selected for processing (7)
- src/components/video-editor/VideoEditor.tsx
- src/i18n/locales/en/editor.json
- src/i18n/locales/es/editor.json
- src/i18n/locales/ko/editor.json
- src/i18n/locales/nl/editor.json
- src/i18n/locales/zh-CN/editor.json
- src/lib/exporter/audioEncoder.ts
✅ Files skipped from review due to trivial changes (5)
- src/i18n/locales/es/editor.json
- src/i18n/locales/ko/editor.json
- src/i18n/locales/en/editor.json
- src/i18n/locales/zh-CN/editor.json
- src/i18n/locales/nl/editor.json
```typescript
return await new Promise<number>((resolve, reject) => {
  const timeout = setTimeout(() => {
    cleanup();
    reject(new Error("Timed out getting media duration (30s)"));
  }, 30_000);

  const recordedBlobPromise = new Promise<Blob>((resolve, reject) => {
    recorder.ondataavailable = (event: BlobEvent) => {
      if (event.data && event.data.size > 0) {
        chunks.push(event.data);
      }
    };
    recorder.onerror = () => {
      reject(new Error("MediaRecorder failed while capturing speed-adjusted audio"));
    };
    recorder.onstop = () => {
      const type = mimeType || chunks[0]?.type || "audio/webm";
      resolve(new Blob(chunks, { type }));
    };
  });
  const onLoaded = () => {
    cleanup();
    const duration = media.duration;
    media.src = "";
    media.load();
    resolve(Number.isFinite(duration) ? duration : 0);
  };
  const onError = () => {
    cleanup();
    media.src = "";
    media.load();
    reject(new Error("Failed to get media duration"));
  };
  const cleanup = () => {
    clearTimeout(timeout);
    media.removeEventListener("loadedmetadata", onLoaded);
    media.removeEventListener("error", onError);
  };

  media.addEventListener("loadedmetadata", onLoaded);
  media.addEventListener("error", onError, { once: true });
});
```
Minor: timeout path leaks <video> source.
If the 30s timeout fires, cleanup() only removes the listeners — the video element's src is never cleared (unlike the onLoaded/onError paths which reset it and call load()). source.revoke() in the outer finally releases the blob URL, but the <video> element itself still holds a reference to the revoked URL until GC.
🩹 Proposed fix

```diff
 const timeout = setTimeout(() => {
   cleanup();
+  media.src = "";
+  media.load();
   reject(new Error("Timed out getting media duration (30s)"));
 }, 30_000);
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
return await new Promise<number>((resolve, reject) => {
  const timeout = setTimeout(() => {
    cleanup();
    media.src = "";
    media.load();
    reject(new Error("Timed out getting media duration (30s)"));
  }, 30_000);
  const onLoaded = () => {
    cleanup();
    const duration = media.duration;
    media.src = "";
    media.load();
    resolve(Number.isFinite(duration) ? duration : 0);
  };
  const onError = () => {
    cleanup();
    media.src = "";
    media.load();
    reject(new Error("Failed to get media duration"));
  };
  const cleanup = () => {
    clearTimeout(timeout);
    media.removeEventListener("loadedmetadata", onLoaded);
    media.removeEventListener("error", onError);
  };
  media.addEventListener("loadedmetadata", onLoaded);
  media.addEventListener("error", onError, { once: true });
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/exporter/audioEncoder.ts` around lines 1153 - 1180, the timeout path
leaves the video element's src set, so update the Promise in audioEncoder.ts
(the block using timeout, onLoaded, onError, cleanup and media) to also clear
the media.src and call media.load() when the timeout fires; the simplest fix is
to move the media.src = "" and media.load() logic into the cleanup() function
(which is used by timeout/onLoaded/onError) so the src is always cleared
regardless of which path completes, ensuring safe idempotent calls.
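The centralized-cleanup pattern the prompt describes can be sketched against a minimal media-like interface. `MediaLike` and `probeDuration` are illustrative names, not APIs from this PR; the point is that every exit path (timeout, `loadedmetadata`, `error`) funnels through one idempotent `cleanup()` that also clears the source:

```typescript
// Minimal stand-in for the HTMLMediaElement surface this pattern touches.
// MediaLike and probeDuration are illustrative names, not project APIs.
interface MediaLike {
  duration: number;
  src: string;
  load(): void;
  addEventListener(type: string, cb: () => void): void;
  removeEventListener(type: string, cb: () => void): void;
}

function probeDuration(media: MediaLike, timeoutMs = 30_000): Promise<number> {
  return new Promise<number>((resolve, reject) => {
    let done = false;
    // Single cleanup path: every exit (timeout, loaded, error) releases the
    // listeners AND clears the source, so no branch can leak a loaded file.
    const cleanup = () => {
      if (done) return; // idempotent: safe to call from any branch
      done = true;
      clearTimeout(timer);
      media.removeEventListener("loadedmetadata", onLoaded);
      media.removeEventListener("error", onError);
      media.src = "";
      media.load();
    };
    const timer = setTimeout(() => {
      cleanup();
      reject(new Error(`Timed out getting media duration (${timeoutMs}ms)`));
    }, timeoutMs);
    const onLoaded = () => {
      const duration = media.duration; // read before cleanup resets the element
      cleanup();
      resolve(Number.isFinite(duration) ? duration : 0);
    };
    const onError = () => {
      cleanup();
      reject(new Error("Failed to get media duration"));
    };
    media.addEventListener("loadedmetadata", onLoaded);
    media.addEventListener("error", onError);
  });
}
```

With the reset inside `cleanup()`, no exit path can leave `src` pointing at a loaded blob, which is the leak the comment flags.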
```typescript
private createWavHeader(
  sampleRate: number,
  numChannels: number,
  totalFrames: number,
): ArrayBuffer {
  const bytesPerSample = 2; // 16-bit PCM
  const dataSize = totalFrames * numChannels * bytesPerSample;
  const headerSize = 44;
  const header = new ArrayBuffer(headerSize);
  const view = new DataView(header);

  const writeString = (offset: number, str: string) => {
    for (let i = 0; i < str.length; i++) {
      view.setUint8(offset + i, str.charCodeAt(i));
    }
  };

  writeString(0, "RIFF");
  view.setUint32(4, headerSize - 8 + dataSize, true);
  writeString(8, "WAVE");
  writeString(12, "fmt ");
  view.setUint32(16, 16, true);
  view.setUint16(20, 1, true); // PCM format
  view.setUint16(22, numChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * numChannels * bytesPerSample, true);
  view.setUint16(32, numChannels * bytesPerSample, true);
  view.setUint16(34, bytesPerSample * 8, true);
  writeString(36, "data");
  view.setUint32(40, dataSize, true);

  return header;
}
```
Minor: WAV RIFF/data sizes overflow uint32 on very long exports.
createWavHeader writes dataSize and headerSize - 8 + dataSize as little-endian uint32. At 48 kHz, 16-bit stereo that's ~6.2 h before wraparound (~4 GB). For shorter sessions this never triggers, but exports longer than that will produce a WAV with bogus chunk sizes. Worth either documenting the limit or switching to RF64/WAVE_FORMAT_EXTENSIBLE when dataSize exceeds 0xFFFFFFF0.
Not a blocker given typical recording lengths, but flagging since the new pipeline is designed to unblock 30+ minute exports.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/exporter/audioEncoder.ts` around lines 1345 - 1377, createWavHeader
currently writes 32-bit chunk sizes which overflow for very long exports; modify
createWavHeader to detect when dataSize or (headerSize - 8 + dataSize) exceeds
0xFFFFFFFF and, in that case, emit an RF64 header instead of standard RIFF:
write "RF64" at byte 0, set the RIFF chunk size and data chunk size fields to
0xFFFFFFFF, add a "ds64" chunk immediately after the "fmt " chunk containing
64-bit sizes for RIFF-size and dataSize (and sample-count if desired), and then
continue with the "WAVE"/"fmt "/"data" layout; keep the original 32-bit
RIFF/WAVE header behavior when sizes fit in uint32. Ensure you update
createWavHeader's logic to branch on the overflow condition and produce the
proper RF64/ds64 fields so players that support RF64 can read very large files.
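As a sketch of the RF64 branch the comment suggests (per EBU Tech 3306, which places a `ds64` chunk carrying the real 64-bit sizes right after the `RF64`/`WAVE` id, and pins the 32-bit size fields to `0xFFFFFFFF`): `createRf64Header` is a hypothetical helper, not code from this PR.

```typescript
// Sketch of an RF64 header writer following EBU Tech 3306. When the PCM
// payload exceeds uint32, the RIFF id becomes "RF64", the 32-bit size
// fields are pinned to 0xFFFFFFFF, and a "ds64" chunk (inserted before
// "fmt ") carries the true 64-bit sizes. Illustrative, not the PR's code.
function createRf64Header(
  sampleRate: number,
  numChannels: number,
  totalFrames: number,
): ArrayBuffer {
  const bytesPerSample = 2; // 16-bit PCM
  const dataSize =
    BigInt(totalFrames) * BigInt(numChannels) * BigInt(bytesPerSample);
  // RF64(12) + ds64(8+28) + fmt(8+16) + data header(8) = 80 bytes
  const header = new ArrayBuffer(80);
  const view = new DataView(header);
  const ascii = (offset: number, s: string) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };

  ascii(0, "RF64");
  view.setUint32(4, 0xffffffff, true); // real RIFF size lives in ds64
  ascii(8, "WAVE");

  ascii(12, "ds64");
  view.setUint32(16, 28, true); // ds64 payload size (3 x u64 + table length)
  view.setBigUint64(20, BigInt(header.byteLength) - 8n + dataSize, true); // riffSize
  view.setBigUint64(28, dataSize, true); // dataSize
  view.setBigUint64(36, BigInt(totalFrames), true); // sampleCount
  view.setUint32(44, 0, true); // no extra chunk-size table entries

  ascii(48, "fmt ");
  view.setUint32(52, 16, true);
  view.setUint16(56, 1, true); // PCM
  view.setUint16(58, numChannels, true);
  view.setUint32(60, sampleRate, true);
  view.setUint32(64, sampleRate * numChannels * bytesPerSample, true);
  view.setUint16(68, numChannels * bytesPerSample, true);
  view.setUint16(70, bytesPerSample * 8, true);

  ascii(72, "data");
  view.setUint32(76, 0xffffffff, true); // real data size lives in ds64

  return header;
}
```

A writer would branch to this only when `dataSize` (or the RIFF size) exceeds `0xFFFFFFF0`, keeping the plain 44-byte header for normal-length exports.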
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/lib/exporter/audioEncoder.ts (1)
904-1085: Optional: avoid re-downloading on bulk-decode fallback.
`streamDecodeFromUrl` already fetches the full URL at lines 925-926 to build a `File` for WebDemuxer. When streaming decode throws (unsupported codec config, corrupt frame, etc.), `decodeAudioFromUrl` calls `bulkDecodeFromUrl`, which at lines 1132-1133 fetches the same URL a second time. For a 500 MB recording this doubles network/disk usage on every fallback. Consider sharing the already-fetched `ArrayBuffer` (or `Blob`) with the fallback path, e.g., by having `streamDecodeFromUrl` hand it back on failure and `decodeAudioFromUrl` pass it into `bulkDecodeFromUrl` instead of fetching again.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/exporter/audioEncoder.ts` around lines 904 - 1085, decodeAudioFromUrl currently calls streamDecodeFromUrl which fetches the media (creating a Blob/File) but on failure calls bulkDecodeFromUrl which refetches the same URL; avoid double-download by changing streamDecodeFromUrl to return the fetched Blob/ArrayBuffer alongside null AudioBuffer on failure (e.g., return { audioBuffer: AudioBuffer|null, fallbackData: Blob|ArrayBuffer|null }), update decodeAudioFromUrl to accept and pass that fallbackData into bulkDecodeFromUrl (add an overload/optional param to bulkDecodeFromUrl to accept a Blob/ArrayBuffer and skip fetching), and update bulkDecodeFromUrl to use the provided data when present instead of fetching again; refer to decodeAudioFromUrl, streamDecodeFromUrl, bulkDecodeFromUrl, and the local fetch/blob/File usage in streamDecodeFromUrl to locate changes.
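The shape of that refactor can be sketched with the decode steps injected as parameters; `fetchOnce`, `streamDecode`, and `bulkDecode` are placeholders standing in for the PR's fetch + `streamDecodeFromUrl` + `bulkDecodeFromUrl` trio, not its actual signatures:

```typescript
// Sketch: fetch the media once, try the streaming decoder, and hand the
// same bytes to the bulk fallback instead of re-downloading. All names
// here are illustrative placeholders.
type Decoded = { sampleRate: number; frames: number };

async function decodeWithFallback(
  url: string,
  fetchOnce: (url: string) => Promise<ArrayBuffer>,
  streamDecode: (bytes: ArrayBuffer) => Promise<Decoded>,
  bulkDecode: (bytes: ArrayBuffer) => Promise<Decoded>,
): Promise<Decoded> {
  const bytes = await fetchOnce(url); // single network/disk read
  try {
    return await streamDecode(bytes);
  } catch (err) {
    console.warn("Streaming decode failed, falling back to bulk decode:", err);
    return bulkDecode(bytes); // reuses the already-fetched buffer
  }
}
```

The key property is that the buffer is fetched before either decoder runs, so the fallback path costs no second download.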
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 709-732: The current renderToWavBlobChunked builds a WAV header
for totalFrames then calls renderChunked, but if this.cancelled is set during
renderChunked the function still returns a Blob whose header claims more data
than was written; change renderToWavBlobChunked to detect cancellation after
renderChunked (check this.cancelled or the renderChunked result) and instead of
returning a Blob throw a distinct cancellation error (e.g., new Error("Export
cancelled") or a CancelledError) so callers of renderToWavBlobChunked can
differentiate cancellation from a successful export; reference symbols:
renderToWavBlobChunked, renderChunked, this.cancelled, pcmParts,
createWavHeader, totalFrames.
- Around line 959-1012: The decoder output handler in AudioDecoder.output
(inside audioEncoder.ts) must treat a null/unknown AudioData.format as a hard
decode error instead of silently skipping both planar/interleaved branches: add
a guard that checks format explicitly (e.g., if (!format) throw or call the
existing onDecodeError path) so you never increment totalFrames or push silence
when format is missing (functions/variables: AudioDecoder output callback,
data.format, totalFrames, channelChunks, rawToFloat32). In
renderToWavBlobChunked / renderChunked ensure cancellation does not return a WAV
with a prewritten header claiming more frames than were produced: either
postpone writing the WAV header until after rendering totalFrames is known or,
on cancellation, throw/return null instead of returning the Blob
(functions/variables: renderToWavBlobChunked, renderChunked, totalFrames,
pcmParts). Finally, avoid double network fetch by refactoring
streamDecodeFromUrl/decodeAudioFromUrl/bulkDecodeFromUrl to accept an optional
pre-fetched ArrayBuffer/Blob and, on streamDecodeFromUrl failure, pass that same
buffer into bulkDecodeFromUrl rather than letting bulkDecodeFromUrl fetch again
(functions/variables: streamDecodeFromUrl, decodeAudioFromUrl,
bulkDecodeFromUrl).
---
Nitpick comments:
In `@src/lib/exporter/audioEncoder.ts`:
- Around line 904-1085: decodeAudioFromUrl currently calls streamDecodeFromUrl
which fetches the media (creating a Blob/File) but on failure calls
bulkDecodeFromUrl which refetches the same URL; avoid double-download by
changing streamDecodeFromUrl to return the fetched Blob/ArrayBuffer alongside
null AudioBuffer on failure (e.g., return { audioBuffer: AudioBuffer|null,
fallbackData: Blob|ArrayBuffer|null }), update decodeAudioFromUrl to accept and
pass that fallbackData into bulkDecodeFromUrl (add an overload/optional param to
bulkDecodeFromUrl to accept a Blob/ArrayBuffer and skip fetching), and update
bulkDecodeFromUrl to use the provided data when present instead of fetching
again; refer to decodeAudioFromUrl, streamDecodeFromUrl, bulkDecodeFromUrl, and
the local fetch/blob/File usage in streamDecodeFromUrl to locate changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 0dd6885c-8e95-4f02-a8ea-9d4a49b76d8e
📒 Files selected for processing (1)
src/lib/exporter/audioEncoder.ts
```typescript
private async renderToWavBlobChunked(
  prepared: PreparedOfflineRender,
): Promise<Blob> {
  const totalOutputSec = Math.max(prepared.outputDurationMs / 1000, 0.01);
  const totalFrames = Math.ceil(totalOutputSec * OFFLINE_AUDIO_SAMPLE_RATE);
  const numChannels = prepared.numChannels;

  const header = this.createWavHeader(
    OFFLINE_AUDIO_SAMPLE_RATE,
    numChannels,
    totalFrames,
  );
  const pcmParts: ArrayBuffer[] = [header];

  await this.renderChunked(
    prepared,
    totalOutputSec,
    async (rendered) => {
      pcmParts.push(...this.audioBufferToPcmParts(rendered));
    },
  );

  return new Blob(pcmParts, { type: "audio/wav" });
}
```
Cancellation mid-render produces a malformed WAV blob.
If this.cancelled becomes true during renderChunked (line 799 breaks out of the chunk loop), this method still falls through to return new Blob(pcmParts, ...). The header was sized from totalFrames = ceil(totalOutputSec * sampleRate) at line 713, but pcmParts now contains fewer PCM bytes than the header claims — the returned WAV advertises data past EOF and any downstream consumer (FFmpeg/native export) will read garbage or error out on a partial file.
Preferable to surface cancellation as a thrown error so the caller can distinguish it from a successful export:
🩹 Suggested fix

```diff
   await this.renderChunked(
     prepared,
     totalOutputSec,
     async (rendered) => {
       pcmParts.push(...this.audioBufferToPcmParts(rendered));
     },
   );
 
+  if (this.cancelled) {
+    throw new Error("Export cancelled");
+  }
+
   return new Blob(pcmParts, { type: "audio/wav" });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
private async renderToWavBlobChunked(
  prepared: PreparedOfflineRender,
): Promise<Blob> {
  const totalOutputSec = Math.max(prepared.outputDurationMs / 1000, 0.01);
  const totalFrames = Math.ceil(totalOutputSec * OFFLINE_AUDIO_SAMPLE_RATE);
  const numChannels = prepared.numChannels;
  const header = this.createWavHeader(
    OFFLINE_AUDIO_SAMPLE_RATE,
    numChannels,
    totalFrames,
  );
  const pcmParts: ArrayBuffer[] = [header];
  await this.renderChunked(
    prepared,
    totalOutputSec,
    async (rendered) => {
      pcmParts.push(...this.audioBufferToPcmParts(rendered));
    },
  );
  if (this.cancelled) {
    throw new Error("Export cancelled");
  }
  return new Blob(pcmParts, { type: "audio/wav" });
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/exporter/audioEncoder.ts` around lines 709 - 732, The current
renderToWavBlobChunked builds a WAV header for totalFrames then calls
renderChunked, but if this.cancelled is set during renderChunked the function
still returns a Blob whose header claims more data than was written; change
renderToWavBlobChunked to detect cancellation after renderChunked (check
this.cancelled or the renderChunked result) and instead of returning a Blob
throw a distinct cancellation error (e.g., new Error("Export cancelled") or a
CancelledError) so callers of renderToWavBlobChunked can differentiate
cancellation from a successful export; reference symbols:
renderToWavBlobChunked, renderChunked, this.cancelled, pcmParts,
createWavHeader, totalFrames.
```typescript
const decoder = new AudioDecoder({
  output: (data: AudioData) => {
    try {
      const frames = data.numberOfFrames;
      const dataChannels = Math.min(
        data.numberOfChannels,
        numChannels,
      );
      const format = data.format;

      if (format?.includes("planar")) {
        for (let ch = 0; ch < dataChannels; ch++) {
          const size = data.allocationSize({
            planeIndex: ch,
          });
          const bytes = new ArrayBuffer(size);
          data.copyTo(bytes, { planeIndex: ch });
          channelChunks[ch].push(
            this.rawToFloat32(bytes, format, frames),
          );
        }
      } else if (format) {
        // Interleaved format — deinterleave into per-channel arrays.
        // Use data.numberOfChannels as stride (not capped dataChannels)
        // since the raw buffer contains all source channels.
        const srcChannels = data.numberOfChannels;
        const size = data.allocationSize({ planeIndex: 0 });
        const bytes = new ArrayBuffer(size);
        data.copyTo(bytes, { planeIndex: 0 });
        const interleaved = this.rawToFloat32(
          bytes,
          format,
          frames * srcChannels,
        );
        for (let ch = 0; ch < dataChannels; ch++) {
          const chData = new Float32Array(frames);
          for (let i = 0; i < frames; i++) {
            chData[i] =
              interleaved[i * srcChannels + ch];
          }
          channelChunks[ch].push(chData);
        }
      }

      // Fill missing channels with silence
      for (let ch = dataChannels; ch < numChannels; ch++) {
        channelChunks[ch].push(new Float32Array(frames));
      }

      totalFrames += frames;
    } finally {
      data.close();
    }
  },
```
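The deinterleave step in the interleaved branch can be exercised in isolation; this is a pure-function restatement for illustration, not the PR's code:

```typescript
// Split an interleaved buffer of `srcChannels` streams into per-channel
// Float32Arrays, keeping only the first `outChannels` and padding any
// shortfall with silence. Strides by the full source channel count, as
// the review note above requires.
function deinterleave(
  interleaved: Float32Array,
  srcChannels: number,
  outChannels: number,
): Float32Array[] {
  const frames = Math.floor(interleaved.length / srcChannels);
  const out: Float32Array[] = [];
  const kept = Math.min(srcChannels, outChannels);
  for (let ch = 0; ch < kept; ch++) {
    const chData = new Float32Array(frames);
    for (let i = 0; i < frames; i++) {
      chData[i] = interleaved[i * srcChannels + ch];
    }
    out.push(chData);
  }
  for (let ch = kept; ch < outChannels; ch++) {
    out.push(new Float32Array(frames)); // silence for missing channels
  }
  return out;
}
```

For example, a stereo buffer `[L0, R0, L1, R1, ...]` upmixed to three output channels yields the left samples, the right samples, and one all-zero channel.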
🧩 Analysis chain
🌐 Web query:
WebCodecs AudioData format nullable AudioSampleFormat spec
💡 Result:
In the WebCodecs specification, the AudioData interface's format attribute is defined as format, of type AudioSampleFormat, readonly, nullable. This means the format property can be null. AudioSampleFormat is an enumeration of audio sample formats: "u8", "u8-planar", "s16", "s16-planar", "s32", "s32-planar", "f32", "f32-planar". These represent interleaved or planar layouts for 8-bit unsigned integer (u8), 16-bit/32-bit signed integer (s16/s32), and 32-bit float (f32) samples. The format is required and non-nullable in the AudioDataInit dictionary used to construct AudioData, but the readonly attribute on the AudioData instance itself is nullable, likely becoming null after close releases the media resource or in a detached state.
Citations:
- 1: https://www.w3.org/TR/webcodecs/
- 2: https://www.w3.org/TR/webcodecs/Overview.html
- 3: https://w3c.github.io/webcodecs/
- 4: https://developer.mozilla.org/en-US/docs/Web/API/AudioData/format
🏁 Script executed:

```shell
# Check the file size first
wc -l src/lib/exporter/audioEncoder.ts
```

Repository: webadderall/Recordly
Length of output: 101

🏁 Script executed:

```shell
# Examine the lines around Issue A (null format) at 959-1012
sed -n '950,1020p' src/lib/exporter/audioEncoder.ts
```

Repository: webadderall/Recordly
Length of output: 2150

🏁 Script executed:

```shell
# Examine Issue B (cancellation in renderToWavBlobChunked) at 709-732
sed -n '700,740p' src/lib/exporter/audioEncoder.ts
```

Repository: webadderall/Recordly
Length of output: 1220

🏁 Script executed:

```shell
# Examine Issue C (double fetch) at 904-916 and 1125-1143
sed -n '900,920p' src/lib/exporter/audioEncoder.ts
sed -n '1120,1150p' src/lib/exporter/audioEncoder.ts
```

Repository: webadderall/Recordly
Length of output: 1935

🏁 Script executed:

```shell
# Get the full streamDecodeFromUrl method (issue C - double fetch)
sed -n '920,1085p' src/lib/exporter/audioEncoder.ts | head -200
```

Repository: webadderall/Recordly
Length of output: 4519

🏁 Script executed:

```shell
# Get the full renderChunked method to check cancellation handling (issue B)
sed -n '736,850p' src/lib/exporter/audioEncoder.ts
```

Repository: webadderall/Recordly
Length of output: 3080
Handle null format to prevent per-channel array desync.
AudioData.format is nullable per the WebCodecs spec and becomes null in detached states. When the decoder output callback receives format === null (or any value not matching "planar" or truthy), neither the planar nor interleaved branch executes, but the silence-fill loop for channels dataChannels..numChannels and the totalFrames += frames increment at lines 1003–1008 still run. This leaves per-channel channelChunks arrays with fewer samples than totalFrames claims, causing misalignment when assembling the final AudioBuffer at lines 1067–1073. The result is silent data corruption—missing or truncated samples—rather than a decode failure.
Treat an unhandled format as a hard decode error:
Suggested fix

```diff
 if (format?.includes("planar")) {
   // ... planar branch
 } else if (format) {
   // ... interleaved branch
+} else {
+  decodeError = new Error(
+    "Streaming audio decode error: AudioData.format is null",
+  );
+  return;
 }
 
 // Fill missing channels with silence
 for (let ch = dataChannels; ch < numChannels; ch++) {
   channelChunks[ch].push(new Float32Array(frames));
 }
 totalFrames += frames;
```

Cancellation mid-render produces malformed WAV with mismatched header.
In renderToWavBlobChunked, the WAV header is created with totalFrames before rendering begins (lines 719–723). The renderChunked loop respects this.cancelled and breaks early (line 737), but if cancellation occurs, fewer PCM chunks are accumulated in pcmParts than the header declares. The returned Blob is then malformed—the WAV metadata claims more audio data than actually present. Throw or return null on cancellation instead of returning a truncated blob.
Double network fetch on streaming decode fallback.
When streamDecodeFromUrl fails, decodeAudioFromUrl (line 904) catches the error and calls bulkDecodeFromUrl. Both methods fetch the same URL: streamDecodeFromUrl at line 927 and bulkDecodeFromUrl at line ~1137. For large files, this doubles bandwidth. Share the fetched blob/ArrayBuffer between the two paths, or refactor to pass a pre-fetched buffer to bulkDecodeFromUrl.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/exporter/audioEncoder.ts` around lines 959 - 1012, The decoder output
handler in AudioDecoder.output (inside audioEncoder.ts) must treat a
null/unknown AudioData.format as a hard decode error instead of silently
skipping both planar/interleaved branches: add a guard that checks format
explicitly (e.g., if (!format) throw or call the existing onDecodeError path) so
you never increment totalFrames or push silence when format is missing
(functions/variables: AudioDecoder output callback, data.format, totalFrames,
channelChunks, rawToFloat32). In renderToWavBlobChunked / renderChunked ensure
cancellation does not return a WAV with a prewritten header claiming more frames
than were produced: either postpone writing the WAV header until after rendering
totalFrames is known or, on cancellation, throw/return null instead of returning
the Blob (functions/variables: renderToWavBlobChunked, renderChunked,
totalFrames, pcmParts). Finally, avoid double network fetch by refactoring
streamDecodeFromUrl/decodeAudioFromUrl/bulkDecodeFromUrl to accept an optional
pre-fetched ArrayBuffer/Blob and, on streamDecodeFromUrl failure, pass that same
buffer into bulkDecodeFromUrl rather than letting bulkDecodeFromUrl fetch again
(functions/variables: streamDecodeFromUrl, decodeAudioFromUrl,
bulkDecodeFromUrl).
Summary
Replaces the real-time audio export pipeline with a streaming decode + chunked offline rendering approach. Fixes export timeouts, OOM crashes on long recordings, progress stuck at 99%, and silently swallowed muxing errors.
Problem
The previous export pipeline had several critical issues:
- `decodeAudioData()` loaded the entire compressed file plus decoded PCM simultaneously (~1.4 GB for a 30-minute recording)
- `audioBufferToWavBlob` allocated one ArrayBuffer for the entire output (~345 MB for 30 min)
- `Math.min(renderProgress, 99)` prevented progress from ever reaching completion
- `pendingMuxing.catch()` only logged errors, never propagated them

Changes
Streaming Decode (
decodeAudioFromUrl + streamDecodeFromUrl)

- Falls back to `decodeAudioData()` if streaming fails

Chunked Offline Rendering (
renderChunked)

- Renders the timeline in `OfflineAudioContext` chunks
- `scheduleBufferThroughTimeline` extended with chunk windowing params

Chunked WAV Writing (
createWavHeader + audioBufferToPcmParts)

Chunked Encoding (
renderAndEncodeChunked + feedBufferToEncoder)

Other Fixes
- `Math.min(renderProgress, 99)` → `Math.min(renderProgress, 100)` in both exporters + UI
- Muxing errors now set `this.encoderError` and `this.cancelled = true` instead of a silent log

Testing
- Type-check passes (`tsc --noEmit`)
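The chunk-windowing math that the extended `scheduleBufferThroughTimeline` needs (clipping a scheduled source to one chunk's render window) can be sketched as follows; `clipToChunk` is an illustrative helper, not the PR's actual function:

```typescript
// Clip a source scheduled at [startSec, startSec + durationSec) on the
// output timeline to one render chunk's window [chunkStart, chunkEnd).
// Returns null when the source lies entirely outside the chunk.
// clipToChunk is an illustrative helper, not code from this PR.
interface ClippedSource {
  when: number;     // start time relative to the chunk's OfflineAudioContext
  offset: number;   // extra seconds to skip inside the source buffer
  duration: number; // seconds of the source to play in this chunk
}

function clipToChunk(
  startSec: number,
  durationSec: number,
  chunkStart: number,
  chunkEnd: number,
): ClippedSource | null {
  const srcEnd = startSec + durationSec;
  const clippedStart = Math.max(startSec, chunkStart);
  const clippedEnd = Math.min(srcEnd, chunkEnd);
  if (clippedEnd <= clippedStart) return null; // no overlap with this chunk
  return {
    when: clippedStart - chunkStart,
    offset: clippedStart - startSec,
    duration: clippedEnd - clippedStart,
  };
}
```

A source that straddles two 30 s chunks then plays its head at the end of the first chunk and its tail at `when = 0` of the second, with a matching buffer `offset`, which is what keeps cross-chunk audio seamless.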
Bug Fixes
Improvements