Conversation
Signed-off-by: smarcet <smarcet@gmail.com>
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: defaults. Review profile: CHILL. Plan: Pro.
Signed-off-by: smarcet <smarcet@gmail.com>
fix(dropzone): restore async polling UX fix and improve progress tracking. Move _asyncProcessing flag before dropzoneOnLoad so the chunksUploaded callback sees it (regression from the Iteration 2 refactor). Replace the _maxProgress guard with a _completedBytes floor for accurate progress that never oscillates. Add tests for progress monotonicity.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/components/inputs/dropzone/index.js (3)
355-368: ⚠️ Potential issue | 🟡 Minor

Guard against `file.size === 0` to avoid `NaN` width.

`effectiveBytes / file.size * 100` produces `NaN` when `file.size` is `0` (or `undefined`), which then becomes `"NaN%"` on the progress element. Add a zero-check:

🛡️ Proposed fix
```diff
- const effectiveBytes = Math.max(bytesSent, file._completedBytes || 0);
- progress = Math.min(effectiveBytes / file.size * 100, 100);
+ const effectiveBytes = Math.max(bytesSent, file._completedBytes || 0);
+ progress = file.size > 0
+   ? Math.min(effectiveBytes / file.size * 100, 100)
+   : 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/inputs/dropzone/index.js` around lines 355 - 368, The upload progress handler (this.dropzone.on('uploadprogress')) can produce NaN when file.size is 0 or undefined; guard file.size by computing progress only when file.size > 0 and otherwise set progress to 0 (or 100 if you consider completedBytes equals size) before using it to set the preview width. Update the logic around effectiveBytes / file.size * 100 to check file.size and use a fallback progress value, and keep references to file._completedBytes, effectiveBytes, progress and file.previewElement so the element width assignment still uses a valid percentage string.
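A minimal standalone sketch of the guarded computation (the helper name `computeProgress` is illustrative, not part of the component):

```javascript
// Hypothetical helper mirroring the proposed guard: returns a clamped
// percentage and never NaN, even for zero-byte or size-less files.
function computeProgress(bytesSent, completedBytes, fileSize) {
  const effectiveBytes = Math.max(bytesSent, completedBytes || 0);
  return fileSize > 0
    ? Math.min((effectiveBytes / fileSize) * 100, 100)
    : 0;
}

console.log(computeProgress(500, 0, 1000));  // 50
console.log(computeProgress(2500, 0, 1000)); // 100 (clamped)
console.log(computeProgress(0, 0, 0));       // 0, not NaN
```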
78-114: ⚠️ Potential issue | 🟠 Major

Async `setInterval` callback can overlap itself, producing overlapping `fetch` calls.

The callback is `async` and performs `await getAccessToken()` → `await fetch()` → `await response.json()`. If any of those take longer than the 2 s tick (slow network, token refresh, backend stall), the next tick fires while the previous is still in flight, producing duplicate status requests and racing `clearInterval` calls on the same id. Either switch to a self-rescheduling `setTimeout` pattern (schedule the next tick only after the previous awaited work resolves), or guard with an `inFlight` flag on the file.

🛡️ Recommended pattern
```js
const tick = async () => {
  attempts++;
  if (attempts > maxAttempts) { /* ... */ return; }
  try {
    // ... await work ...
    if (!done) setTimeout(tick, 2000);
  } catch (e) { /* ... */ }
};
setTimeout(tick, 2000);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/inputs/dropzone/index.js` around lines 78 - 114, The async setInterval callback can overlap itself causing duplicate fetches and racing clears; replace the setInterval-based polling around this._pollInterval with a self-rescheduling async tick using setTimeout (or add a per-file inFlight guard) so each tick awaits getAccessToken(), fetch(statusUrl) and response.json() to completion before scheduling the next call; preserve existing logic for attempts > maxAttempts, clearing file._pollingActive, calling file._chunksUploadedDone(), and invoking this.onUploadComplete / this.onError, and ensure this._pollInterval is set/cleared consistently (or store the timeout id if you keep using a timer).
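The pattern above can be fleshed out as a self-contained poller. This is a sketch under assumptions: `checkStatus`, `intervalMs`, and `maxAttempts` are illustrative stand-ins for the component's token fetch, status request, and timeout budget.

```javascript
// Self-rescheduling poller: the next tick is only queued after the previous
// async work settles, so ticks can never overlap.
function pollUntilDone(checkStatus, { intervalMs = 2000, maxAttempts = 300 } = {}) {
  return new Promise((resolve, reject) => {
    let attempts = 0;
    const tick = async () => {
      attempts++;
      if (attempts > maxAttempts) {
        reject(new Error('Upload timed out'));
        return;
      }
      try {
        // In the real component this would be getAccessToken(),
        // fetch(statusUrl), and response.json().
        const done = await checkStatus();
        if (done) resolve();
        else setTimeout(tick, intervalMs);
      } catch (e) {
        reject(e);
      }
    };
    setTimeout(tick, intervalMs);
  });
}
```

Because rescheduling happens strictly after the awaited work, a slow request simply stretches the effective interval instead of stacking duplicate requests.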
67-115: ⚠️ Potential issue | 🔴 Critical

Critical: `this._pollInterval` is shared across files — concurrent 202 uploads will leak intervals and clear the wrong one.

`_pollInterval` is a single instance field. If two files return HTTP 202 (e.g., `maxFiles > 1` or multiple dropzones in one instance), the second `pollUploadStatus` call overwrites `this._pollInterval` before the first has finished, and every subsequent `clearInterval(this._pollInterval)` (on timeout/error/complete/unmount) only affects whichever interval is currently stored — the other one keeps polling forever (or gets clobbered mid-flight). The `file._pollingActive` guard only prevents re-entering polling for the same file; it does nothing for different files.

Store the interval id on the file itself (it already carries `_pollingActive` / `_chunksUploadedDone`) and track active intervals in a `Map`/`Set` so `componentWillUnmount` can clear them all.

🔒 Suggested direction
```diff
- pollUploadStatus(fileId, baseUrl, file) {
+ pollUploadStatus(fileId, baseUrl, file) {
    // Guard against multiple polling intervals for the same file
    if (file._pollingActive) {
      return;
    }
    file._pollingActive = true;

    const statusUrl = `${baseUrl}/status/${fileId}`;
    const maxAttempts = 300; // 10 minutes at 2s intervals
    let attempts = 0;

-   this._pollInterval = setInterval(async () => {
+   const intervalId = setInterval(async () => {
      attempts++;
      if (attempts > maxAttempts) {
-       clearInterval(this._pollInterval);
-       this._pollInterval = null;
+       clearInterval(intervalId);
+       this._pollIntervals?.delete(intervalId);
        file._pollingActive = false;
        this.onError({ message: 'Upload timed out' });
        return;
      }
      // ... use intervalId in every clearInterval site ...
    }, 2000);
+   (this._pollIntervals ||= new Set()).add(intervalId);
  }
```

And in `componentWillUnmount`, iterate `this._pollIntervals` instead of the single `_pollInterval`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/inputs/dropzone/index.js` around lines 67 - 115, The bug is that pollUploadStatus uses a single this._pollInterval for all files, causing intervals to be overwritten and leaked; change pollUploadStatus to attach the interval id to the specific file (e.g., file._pollIntervalId) and maintain a container on the component (e.g., this._pollIntervals = new Set() or Map()) to track active intervals; when creating the interval assign its id to file._pollIntervalId and add it to this._pollIntervals, replace every clearInterval(this._pollInterval) with clearInterval(file._pollIntervalId) plus removal from this._pollIntervals and nulling file._pollIntervalId/file._pollingActive, and update componentWillUnmount to iterate this._pollIntervals and clear them all so no per-file polling can leak.
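The bookkeeping this suggests can be isolated into a small registry. A sketch only — the class name `PollRegistry` and methods `startPolling`/`stopPolling`/`stopAll` are hypothetical, not the component's API; `_pollIntervals` and `_pollIntervalId` mirror the names proposed in the review.

```javascript
// Per-file interval tracking with a shared Set, so teardown
// (e.g. componentWillUnmount) can clear every active poller.
class PollRegistry {
  constructor() {
    this._pollIntervals = new Set();
  }
  startPolling(file, onTick, intervalMs = 2000) {
    const intervalId = setInterval(() => onTick(file), intervalMs);
    file._pollIntervalId = intervalId; // id lives on the file, not the component
    this._pollIntervals.add(intervalId);
    return intervalId;
  }
  stopPolling(file) {
    clearInterval(file._pollIntervalId);
    this._pollIntervals.delete(file._pollIntervalId);
    file._pollIntervalId = null;
  }
  stopAll() {
    // Safe to call from unmount: clears every interval, not just the last one
    this._pollIntervals.forEach(clearInterval);
    this._pollIntervals.clear();
  }
}
```

With this shape, two concurrent 202 uploads each own their interval id, and `stopAll()` cannot orphan a poller the way a single shared `this._pollInterval` can.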
🧹 Nitpick comments (6)
src/components/inputs/dropzone/__tests__/dropzone.test.js (3)
148-149: Stale TDD comment.

The comment "THIS TEST SHOULD FAIL INITIALLY (demonstrating the bug)" is a leftover from the red phase of TDD. Now that the fix is in place, this test is expected to pass and the note is misleading for future readers.
📝 Proposed edit
```diff
- *
- * THIS TEST SHOULD FAIL INITIALLY (demonstrating the bug)
  */
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/inputs/dropzone/__tests__/dropzone.test.js` around lines 148 - 149, Remove or update the stale TDD comment "THIS TEST SHOULD FAIL INITIALLY (demonstrating the bug)" in the dropzone.test.js file; find the comment near the test block (in the Dropzone tests) and either delete it or replace it with an accurate note stating the test now passes, ensuring comments around the relevant test(s) reflect current behavior (e.g., in the describe/it block for the Dropzone component).
283-294: `DropzoneMock.mockImplementation` leaks across tests within this describe block.

`jest.clearAllMocks()` in `beforeEach` clears call history but does not reset `mockImplementation`. The override set here (and again at line 371) persists for subsequent tests and — since both describe blocks share the same module-level `jest.mock('dropzone', …)` factory — can also bleed into re-runs if this file is re-entered. Either move the `mockImplementation` into `beforeEach`, or call `DropzoneMock.mockReset()` / `mockRestore()` in `afterEach` to avoid hidden coupling between tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/inputs/dropzone/__tests__/dropzone.test.js` around lines 283 - 294, The test-suite-level DropzoneMock.mockImplementation in the describe block leaks across tests because jest.clearAllMocks() doesn't reset implementations; move the mock implementation into the beforeEach (so DropzoneMock.mockImplementation(...) is called per-test) or add DropzoneMock.mockReset() or DropzoneMock.mockRestore() in afterEach to clear the implementation; update references to DropzoneMock.mockImplementation, beforeEach, and afterEach in the test file so each test gets a fresh mock implementation and no state bleeds between tests.
191-240: Test 4 is timing-fragile and may be flaky in CI.
`pollUploadStatus` uses `setInterval(..., 2000)`. The test waits exactly 2500 ms, leaving only a 500 ms window for the async callback (token fetch + `fetch` + `response.json()` + assertions) to complete after the interval fires. On a loaded CI runner this can occasionally slip, producing a false failure. Consider either using Jest fake timers (`jest.useFakeTimers()` + `jest.advanceTimersByTimeAsync(2000)` then `await flushPromises()`), or refactoring `pollUploadStatus` to accept a configurable interval so tests can pass a much smaller value (e.g. 10 ms).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/inputs/dropzone/__tests__/dropzone.test.js` around lines 191 - 240, The test is flaky because pollUploadStatus uses a hard-coded 2000ms setInterval and the test only waits 2500ms; update the test to use Jest fake timers and advance them instead or make pollUploadStatus accept a configurable interval to speed up polling in tests. Fix by either (A) in the test call jest.useFakeTimers(), call instance.pollUploadStatus(...) then await Promise.resolve/flushPromises and run jest.advanceTimersByTimeAsync(2000) (or advanceTimersByTime and then await flushPromises) before asserting, or (B) refactor the DropzoneJS.pollUploadStatus signature to accept an optional intervalMs (default 2000) and in the test call instance.pollUploadStatus(..., 10) so you can wait a small timeout; reference pollUploadStatus and the test name test_dropzone_202_polling_complete_fires_success when making the change.

src/components/inputs/dropzone/index.js (3)
464-466: Add `maxConcurrentChunks` to `propTypes`.

The new public prop consumed in `processChunkQueue` is not declared in `DropzoneJS.propTypes`, which is the idiomatic way to document the component's API for this codebase (v2 and v3 both forward it). A `PropTypes.number` entry (and optionally a `defaultProps` value) would make the contract explicit and replace the `this.props.maxConcurrentChunks || 6` fallback scattered in the code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/inputs/dropzone/index.js` around lines 464 - 466, DropzoneJS currently declares only id in DropzoneJS.propTypes but uses the new public prop maxConcurrentChunks inside processChunkQueue; add maxConcurrentChunks: PropTypes.number to DropzoneJS.propTypes and optionally add DropzoneJS.defaultProps = { maxConcurrentChunks: 6 } (or adjust the default there) so the component API is explicit and you can remove scattered this.props.maxConcurrentChunks || 6 fallbacks in processChunkQueue.
203-235: Unmount does not clear an in-progress poll interval when the file is still polling.
`componentWillUnmount` clears `this._pollInterval` and aborts active XHRs, but once the critical issue above is addressed (multiple concurrent polls), unmount must iterate and clear all active polling intervals. Also note `files.forEach` at line 212 does not use the `file` key, and the two branches of the `files.length > 0` conditional both call `this.destroy(this.dropzone)` identically — the `if`/`else` can be collapsed.

♻️ Simplification
```diff
- if (files.length > 0) {
-   // Cancel active uploads before destroying
-   files.forEach(file => {
-     this.dropzone.cancelUpload(file);
-   });
-
-   this.dropzone = this.destroy(this.dropzone);
- } else {
-   this.dropzone = this.destroy(this.dropzone)
- }
+ files.forEach(file => this.dropzone.cancelUpload(file));
+ this.dropzone = this.destroy(this.dropzone);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/inputs/dropzone/index.js` around lines 203 - 235, componentWillUnmount must clear every active polling timer and simplify the dropzone destroy branch: after aborting XHRs, iterate and clear all polling intervals (not just this._pollInterval) — e.g. if you maintain a collection like this._pollIntervals, this._pollIntervalMap, this.activePolls or per-file timers (e.g. file._pollInterval or file.pollInterval), call clearInterval for each and null them; also collapse the duplicate branch by always calling this.dropzone = this.destroy(this.dropzone) after canceling uploads (use this.dropzone.getActiveFiles() and files.forEach(file => this.dropzone.cancelUpload(file))). Ensure you still null/clear this._pollInterval and any per-file timer references so no timers remain after unmount while keeping existing XHR abort and activeXHRs.clear() logic.
46-65: Both observations are valid; consider adding a comment documenting the private API dependency.

The code correctly handles the throttle logic, but two maintainability concerns remain:

- Redundant `onChunkComplete()` calls for non-chunked uploads: The `xhr.onload` and `xhr.onerror` hooks (lines ~401, ~437) call `onChunkComplete()` unconditionally. Since only chunked uploads increment `chunksInFlight` (in `processChunkQueue`), non-chunked uploads uselessly decrement from 0 and scan the queue. To fix: only register these throttle hooks when queuing a chunk, or add a flag to `onChunkComplete()` to skip processing for non-queued uploads.
- Undocumented Dropzone private API: `_uploadData` is a private Dropzone internal (leading underscore). The code is pinned to `dropzone@5.7.2`, but any minor/patch upgrade could break this silently. Add a comment explaining the dependency on this internal method and why (e.g., `// Uses Dropzone's private _uploadData method (since v5.7.2) to intercept uploads for concurrency throttling`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/inputs/dropzone/index.js` around lines 46 - 65, The file intercepts Dropzone's private _uploadData in setupChunkThrottle() and unconditionally calls onChunkComplete() for all uploads; fix by (1) only attaching the throttle-related xhr.onload/xhr.onerror handlers or marking the upload as "throttled" when you push to chunkQueue in setupChunkThrottle() so processChunkQueue() and onChunkComplete() run only for chunked uploads (e.g., set a per-upload flag and check it in onChunkComplete() before decrementing chunksInFlight), and (2) add a short comment in setupChunkThrottle() documenting the dependency on Dropzone's private _uploadData (include version note like "Uses Dropzone's private _uploadData method (since v5.7.2) to intercept uploads for concurrency throttling") so future maintainers know this is a private API dependency.
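The per-upload flag fix can be sketched in isolation. This is a minimal illustration, not the component's implementation: the class name `ChunkThrottle` and its methods are hypothetical, while `chunksInFlight`, `chunkQueue`, and the default of 6 come from the review's description.

```javascript
// Concurrency throttle that marks each queued send with a _throttled flag,
// so onChunkComplete() is a no-op for uploads that never entered the queue.
class ChunkThrottle {
  constructor(maxConcurrent = 6) {
    this.maxConcurrent = maxConcurrent;
    this.chunksInFlight = 0;
    this.chunkQueue = [];
  }
  enqueue(sendFn) {
    if (this.chunksInFlight < this.maxConcurrent) this._start(sendFn);
    else this.chunkQueue.push(sendFn);
  }
  _start(sendFn) {
    this.chunksInFlight++;
    sendFn._throttled = true; // mark so completion hooks know to release a slot
    sendFn();
  }
  onChunkComplete(request) {
    if (!request._throttled) return; // non-chunked upload: nothing to release
    this.chunksInFlight--;
    const next = this.chunkQueue.shift();
    if (next) this._start(next);
  }
}
```

A plain single-POST upload that calls `onChunkComplete()` with an unflagged request now leaves `chunksInFlight` untouched instead of decrementing it below zero.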
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 59cb56c6-5727-402a-b401-37c4121e4bd9
⛔ Files ignored due to path filters (1)
`yarn.lock` is excluded by `!**/yarn.lock`, `!**/*.lock`
📒 Files selected for processing (6)
- .gitignore
- package.json
- src/components/inputs/dropzone/__tests__/dropzone.test.js
- src/components/inputs/dropzone/index.js
- src/components/inputs/upload-input-v2/index.js
- src/components/inputs/upload-input-v3/index.js
```
package.json.lock
.codegraph
docs
package-lock.json
```
Remove the redundant and incorrectly named entry.
Line 6 contains package.json.lock, which is not the correct name for npm's lock file. The actual file is package-lock.json (added on line 9). Line 6 appears to be a typo and should be removed to avoid confusion.
🧹 Proposed fix to remove the redundant entry
```diff
 .idea/
-package.json.lock
 .codegraph
 docs
 package-lock.json
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
.codegraph
docs
package-lock.json
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.gitignore around lines 6 - 9, Remove the erroneous .gitignore entry
"package.json.lock" (a typo) and keep the correct "package-lock.json" entry
instead; edit the .gitignore to delete the "package.json.lock" line so only the
valid npm lock filename "package-lock.json" remains listed.
```js
// Release a slot in the chunk queue for the next chunk
_this.onChunkComplete();

// Track completed bytes for accurate progress (prevents oscillation)
const chunkSize = _this.dropzone?.options?.chunkSize || 2000000;
file._completedBytes = Math.min(
  (file._completedBytes || 0) + chunkSize, file.size
);
```
`_completedBytes` accounting assumes every chunk is exactly `chunkSize`.

In `xhr.onload` you unconditionally add `chunkSize` (hardcoded fallback 2000000) to `_completedBytes` for every completed XHR. This is wrong in two ways:

- The last chunk is almost always smaller than `chunkSize`; the floor will jump past the real bytes delivered until clamped by `Math.min(..., file.size)`. The clamp hides this for the final display value, but the intermediate floor used in `uploadprogress` is inflated.
- `xhr.onload` runs for every XHR in this component, including the non-chunked single-POST upload path (`setupChunkThrottle` bypasses the queue for those) and potentially the 202 status poll path if it went through this handler. For non-chunked uploads `_completedBytes` gets +2 MB added for a single success, again hidden only by the clamp.

Prefer deriving the advance from the actual request body / `dataBlocks[i].dataBlock.end - .start`, or gate this accumulation to chunk uploads only (e.g., only when the sending formData carries `dzchunkindex`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/inputs/dropzone/index.js` around lines 400 - 407, The
_completedBytes update in xhr.onload incorrectly assumes every completed request
equals dropzone.options.chunkSize (fallback 2000000), inflating progress for
final chunk and for non-chunked uploads; change the accumulation in xhr.onload
to compute the actual bytes delivered for that XHR (prefer using the current
chunk's byte range: dataBlocks[i].dataBlock.end - dataBlocks[i].dataBlock.start)
or only add when the request is a chunked upload (detect by checking
formData.has('dzchunkindex') or the chunked path used by setupChunkThrottle),
and keep calling onChunkComplete() unchanged; update references in xhr.onload to
use the real size instead of chunkSize when adjusting file._completedBytes and
leave Math.min(file._completedBytes + actualBytes, file.size) for clamping.
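The byte-range approach can be sketched as a small helper. Assumptions are flagged: the helper name `accumulateCompletedBytes` is illustrative, and the `{ start, end }` shape mirrors the `dataBlock` descriptor the review references from Dropzone's internals (pinned to dropzone@5.7.2), not a documented public API.

```javascript
// Advance _completedBytes by the bytes actually delivered by this chunk
// (its byte range), instead of assuming every chunk equals chunkSize.
function accumulateCompletedBytes(file, dataBlock) {
  const actualBytes = dataBlock.end - dataBlock.start;
  file._completedBytes = Math.min(
    (file._completedBytes || 0) + actualBytes,
    file.size // clamp still guards against overshoot
  );
  return file._completedBytes;
}
```

With this, the final (smaller) chunk advances the floor by its real size, so the clamp is a safety net rather than a correctness crutch.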
* fix(dropzone): fix upload async ux — Signed-off-by: smarcet <smarcet@gmail.com>
* v4.2.28-beta.1
* fix(dropzone): fix request flooding — Signed-off-by: smarcet <smarcet@gmail.com>
* v4.2.28-beta.2
* feat(dropzone): add batch chunking
* v4.2.28-beta.3
* fix(dropzone): prevent progress bar from going backwards during chunked upload
* v4.2.28-beta.4
* fix(dropzone): restore async polling UX fix and improve progress tracking — Move _asyncProcessing flag before dropzoneOnLoad so chunksUploaded callback sees it (regression from Iteration 2 refactor). Replace _maxProgress guard with _completedBytes floor for accurate progress that never oscillates. Add tests for progress monotonicity.
* v4.2.28-beta.5

---------

Signed-off-by: smarcet <smarcet@gmail.com>
ref: https://app.clickup.com/t/86b9gt7w3