Conversation
Force-pushed 63bce24 to c9a0ac3 (Compare)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c1bbca7d1c
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
    if offset > 0 {
        openFlag |= os.O_APPEND
Verify partial-content status before resuming append
When a .part file exists, the code always switches to append mode and sends a Range request, but it never checks that the server actually honored the range (206 Partial Content). If the server replies with 200 OK (common when range is unsupported or stripped by intermediaries), the full file is appended to existing bytes and the final output is corrupted while still logged as successful. Guard append mode on a 206 response and fall back to truncating/restarting otherwise.
Useful? React with 👍 / 👎.
app/ratelimit.go (Outdated)

    rl.lastRefill = now
    }

    if rl.tokens >= int64(n) {
Handle reads larger than configured rate bucket
The limiter compares rl.tokens against the full n from one Read, but tokens are capped to rl.rate, so any read where n > rl.rate can never satisfy the condition and wait loops forever. This can hang downloads at low limits (for example, a 32KiB read with -limit 5K). Split large waits into smaller token requests or clamp n before the check.
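The clamp the review asks for can be sketched like this: split one Read's byte count into per-iteration token requests no larger than the bucket capacity (`rl.rate` in the PR), so a 32KiB read under a 5K limit drains the bucket repeatedly instead of waiting forever for an unreachable token count. Names are assumptions.

```go
package main

import "fmt"

// waitDemand splits a single read's demand into token requests that
// never exceed the bucket capacity, so n > rate can still complete.
func waitDemand(n, rate int64) []int64 {
	if rate <= 0 {
		return nil // invalid limiter configuration; caller should validate
	}
	var steps []int64
	for remaining := n; remaining > 0; {
		req := remaining
		if req > rate {
			req = rate // never demand more than the bucket can ever hold
		}
		steps = append(steps, req)
		remaining -= req
	}
	return steps
}

func main() {
	fmt.Println(waitDemand(32*1024, 5*1024)) // 32KiB read under a 5K limit
}
```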
Force-pushed f7aa702 to 337a6b7 (Compare)
@codex review
💡 Codex Review
Reviewed commit: 337a6b7cea
app/ratelimit.go (Outdated)

    rl.lastRefill = now
    }

    if rl.tokens >= int64(n) {
Cap wait demand to avoid low-limit download deadlock
rateLimiter.wait blocks until tokens >= n, but tokens is capped at rl.rate during refill, so n can become unachievable. In real downloads, io.Copy may pass reads around 32KiB; with -limit below that size (for example 1K), this condition never becomes true and the transfer stalls indefinitely.
    if offset > 0 {
        openFlag |= os.O_APPEND
Validate range responses before appending resume data
When a .part file exists, the code always opens the file in append mode, but it never checks whether the server actually honored the Range request. If a server responds with 200 OK and the full body instead of 206 Partial Content, the full file is appended to the partial data, producing a corrupted output file.
app/helpers.go (Outdated)

    for i := range chunkSize - 1 {
        start = i * (chunk + 1)
Bound chunk generation to resource length
getChunks always emits chunkSize ranges even when the file is smaller than that count (default is 5), which creates invalid ranges once start passes the end of the file (e.g. tiny files). Those invalid ranges lead to failing range requests and aborted chunked downloads for small resources.
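One way to bound chunk generation, sketched under assumed names (the real `getChunks` signature may differ): derive ranges from the resource length and stop once the end of file is reached, so a tiny file yields fewer, valid ranges.

```go
package main

import "fmt"

type chunkRange struct{ start, end int64 }

// getChunksBounded emits at most `parts` byte ranges and never past the
// resource length, so files smaller than the part count still get only
// valid ranges.
func getChunksBounded(length int64, parts int) []chunkRange {
	var out []chunkRange
	if length <= 0 || parts < 1 {
		return out
	}
	size := length / int64(parts)
	if size == 0 {
		size = 1 // file smaller than part count: one byte per range
	}
	for start := int64(0); start < length; start += size {
		end := start + size - 1
		if end >= length-1 || len(out) == parts-1 {
			end = length - 1 // last range absorbs the remainder
		}
		out = append(out, chunkRange{start, end})
		if end == length-1 {
			break
		}
	}
	return out
}

func main() {
	fmt.Println(getChunksBounded(3, 5)) // 3-byte file, 5 requested parts
}
```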
@codex review
💡 Codex Review
Reviewed commit: 69a1bfbb36
    defer func() { _ = resp.Body.Close() }()

    // if server didn't honor Range request, restart from scratch
    if offset > 0 && resp.StatusCode != http.StatusPartialContent {
Reject non-success GET responses in single-file download
downloadSingle only gates on status code when offset > 0, so a fresh download (offset == 0) will accept and persist any HTTP response body, including 4xx/5xx error pages, and then rename it as a completed file. This can silently corrupt outputs when the HEAD request succeeded earlier but the subsequent GET fails (for example with expiring signed URLs or transient backend errors).
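The suggested gate can be sketched as a status check that covers fresh downloads too, not only resumes; function and parameter names here are assumptions, not the PR's code.

```go
package main

import (
	"fmt"
	"net/http"
)

// checkGetStatus gates every GET response: a fresh download must see
// 200 and a resume must see 206, otherwise the body (possibly a 4xx/5xx
// error page) is rejected instead of persisted and renamed.
func checkGetStatus(offset int64, status int) error {
	switch {
	case offset > 0 && status == http.StatusPartialContent:
		return nil
	case offset == 0 && status == http.StatusOK:
		return nil
	default:
		return fmt.Errorf("unexpected status %d for offset %d", status, offset)
	}
}

func main() {
	fmt.Println(checkGetStatus(0, http.StatusInternalServerError))
}
```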
app/download.go (Outdated)

    if cd, ok := resp.Header["Content-Disposition"]; ok {
        _, params, err := mime.ParseMediaType(cd[0])
        if err == nil {
            r.filename = params["filename"]
Sanitize Content-Disposition filenames before path join
The filename parsed from Content-Disposition is assigned directly and later joined into the destination path, which allows malicious values like absolute paths or ../ segments to escape outputDir and overwrite arbitrary user-writable files. Because this value comes from a remote server header, it should be normalized/restricted before use.
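A hypothetical sanitization helper (not from the PR) that reduces the server-supplied name to its final path element, so absolute paths and `../` segments cannot escape outputDir:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// sanitizeFilename keeps only the last path element of a
// Content-Disposition filename and rejects degenerate values.
func sanitizeFilename(name string) (string, bool) {
	name = filepath.Base(filepath.Clean(name))
	// Base can still yield "." ".." or the separator for hostile inputs.
	if name == "." || name == ".." || name == string(filepath.Separator) || strings.TrimSpace(name) == "" {
		return "", false
	}
	return name, true
}

func main() {
	fmt.Println(sanitizeFilename("../../etc/passwd"))
}
```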
app/download.go (Outdated)

    pd.add(r.filename, &downloaded, r.length)

    fcontent := make([]byte, r.length)
Avoid buffering entire chunked file payload in memory
The chunked path preallocates a full-size []byte for the whole file and each chunk fetch also uses io.ReadAll, so memory usage scales roughly with full file size plus per-chunk buffers. Large downloads can therefore trigger excessive memory pressure or OOM instead of streaming efficiently to disk.
Concurrent download manager rewritten for Go 1.26+:
- Chunked parallel downloads with byte-range requests
- Progress bar for both chunked and single downloads
- Bandwidth limiter, resume support, disk space checking
- Structured logging with log/slog
- golangci-lint v2, pre-commit hooks, GitHub Actions CI
- Comprehensive test suite (37 tests, 68% coverage)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@codex review
💡 Codex Review
Reviewed commit: 169b941d32
app/download.go (Outdated)

    byteParts, err := c.fetch(r.url, chunkPair, &downloaded)
    if err != nil {
        slog.Error("chunk download failed", logKeyURL, r.url, "part", i, logKeyError, err)
        fetchErr.Store(err)
Avoid mixed-type stores in atomic error slot
This atomic.Value is written from multiple error paths in the chunk goroutines, but each path can produce a different concrete error type (*url.Error, *os.PathError, errors.errorString, etc.). atomic.Value requires all stored values to have the same concrete type, so when two chunks fail differently the second Store panics (sync/atomic: store of inconsistently typed value) and crashes the download process.
    if request > rl.rate {
        request = rl.rate
Reject negative rate limits before token waiting
A negative limit (for example -limit -1M) is accepted and propagated into the limiter, but wait assumes rl.rate is non-negative. With rl.rate < 0, request becomes negative here and remaining -= request increases remaining, so the loop never terminates and reads hang indefinitely. Validate parseRate output (or guard in newRateLimiter) to forbid negative rates.
- Replace atomic.Value with mutex-protected error to prevent panic from inconsistent concrete types stored by concurrent goroutines
- Validate negative rate limits in parseRate() to prevent infinite loop
- Extract chunk array indices to local vars to satisfy gosec G602
- Remove unused rateLimit field from CLIApplication
- Add nil check for os.Stdin.Stat() error in isPiped()
- Cap rateLimitedReader buffer to rate limit to avoid bursts
- Add return after WriteAt error in chunk download goroutine
- Add HTTP status validation in fetch()
- Update usage text formatting

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@codex review
💡 Codex Review
Reviewed commit: f01dfc14d5
app/download.go (Outdated)

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
Remove hard 30s deadline from chunk downloads
Each chunk request is created with context.WithTimeout(..., 30*time.Second), and that context remains active while io.ReadAll consumes the body, so any chunk that takes longer than 30s is canceled even if the transfer is healthy. This fails large files on normal/slow links and also fails quickly when -limit is set low, because the per-chunk read duration often exceeds 30s.
app/helpers.go (Outdated)

    if num < 0 {
        return 0, fmt.Errorf("rate must be non-negative: %s", s)
    }

    return int64(num * float64(multiplier)), nil
Reject overflowed limits before converting to int64
The parsed numeric rate is converted directly with int64(num * float64(multiplier)) without checking upper bounds, so very large positive inputs (for example -limit 1e20G) can overflow to a negative rate. That negative value is then used by the limiter, where wait can loop forever because remaining increases when subtracting a negative request, causing downloads to hang instead of returning an input-validation error.
app/download.go (Outdated)

    pd.add(r.filename, &downloaded, r.length)

    f, err := os.OpenFile(partPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, permFile)
Avoid truncating .part files in chunked mode
In the chunked path, the .part file is always opened with os.O_TRUNC, which discards any previously downloaded bytes before new range requests start. This means interrupted downloads for servers that support Accept-Ranges cannot actually resume and are forced to restart from zero, wasting time and bandwidth.
- Replace hard 30s timeout in fetch() with dynamic timeout based on chunk size and rate limit (3x expected time, capped at 30min)
- Reject rate values that overflow int64 after multiplication
- Add test cases for overflow rate values

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@codex review
💡 Codex Review
Reviewed commit: 705cb6f0f7
app/helpers.go (Outdated)

    if num < 0 {
        return 0, fmt.Errorf("rate must be non-negative: %s", s)
    }

    result := num * float64(multiplier)
Reject NaN bandwidth limits during rate parsing
parseRate only checks num < 0, but strconv.ParseFloat accepts NaN (case-insensitive), which bypasses both the negative and overflow guards here. With input like -limit NaN, this function returns an invalid negative int64 rate, and the limiter is initialized with that value, so bandwidth limiting behavior becomes incorrect instead of failing fast with a validation error.
app/app.go (Outdated)

    for _, r := range resources[:i+1] {
        remaining -= r.length
Base remaining disk checks on completed resources
The post-download disk-space check subtracts resources[:i+1] after each <-done, but completion order is nondeterministic, so this slice does not represent the files that actually finished. When smaller files finish before larger earlier-indexed files, remaining is miscomputed and warnings can be skipped or emitted at the wrong time; this should be decremented by the specific resource that completed.
- Reject NaN and Inf values in parseRate() alongside negative check
- Fix nondeterministic disk space check by tracking actual completed download sizes via channel instead of index-based slice assumption
- Add test cases for NaN and Inf rate inputs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@codex review
💡 Codex Review
Reviewed commit: c048c788dc
app/download.go (Outdated)

    }

    func (c *CLIApplication) download(r *resource, done chan int64, pd *progressDisplay) {
        defer func() { done <- r.length }()
Fail the run when any download goroutine errors
download always executes done <- r.length via defer, even on early returns after create/fetch/write/finalize failures, and Run treats each done signal as successful completion and returns nil; this makes the CLI exit 0 and print completion even when files were not actually downloaded (for example, permission-denied on output dir or chunk fetch failures). Propagate per-file errors back to Run and return a non-nil error when any download fails.
app/diskspace.go (Outdated)

    var stat syscall.Statfs_t
    if err := syscall.Statfs(path, &stat); err != nil {
Add non-Unix implementation for disk space checks
This file unconditionally uses syscall.Statfs_t/syscall.Statfs without build tags, which compiles on Unix targets but not on Windows where those syscall symbols are absent; as written, a Windows build of the module will fail at compile time. Split the implementation by platform (or add an alternative for Windows) so cross-platform builds do not break.
- Send actual completed size (0 on failure) through done channel so Run returns non-zero exit code when any download fails
- Split diskspace into platform files with build tags: unix uses syscall.Statfs, windows stub returns MaxInt64 (skips check)
- Move unix-specific disk space test to build-tagged file

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@codex review
💡 Codex Review
Reviewed commit: 729d661ac2
app/app.go (Outdated)

    if size == 0 {
        failCount++
Don't treat zero-byte files as failed downloads
The failure check uses size == 0, but successful downloads of legitimate empty files also produce 0 because download() reports completed as r.length; this increments failCount and makes Run() return an error even when all downloads succeeded. Any URL serving a 0-byte artifact will be misreported as a failed transfer.
app/download.go (Outdated)

    }
    }

    completed = r.length
Report non-negative completion for unknown-length downloads
download() sets completed to r.length, but r.length is -1 when the HEAD response has no Content-Length. A successful unknown-length download then sends a negative size through done, which corrupts completedSize/remaining accounting in Run() and can trigger misleading disk-space warnings despite successful transfers.
- Use downloadResult struct (size + ok) instead of raw int64 to distinguish failed downloads from zero-byte files
- Clamp negative r.length to 0 for disk space accounting when Content-Length is unknown (-1)
- Handle \r line endings in pipe input (e.g. pbpaste on macOS)
- Trim whitespace from piped URLs and skip blank lines

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@codex review
💡 Codex Review
Reviewed commit: de45a875a2
    outputPath := filepath.Join(c.outputDir, r.filename)
    partPath := outputPath + ".part"
Prevent output-path collisions between concurrent downloads
This computes the destination from r.filename alone, so two different URLs that resolve to the same basename (or same Content-Disposition filename) will write to the same .part/final path concurrently. In that case one goroutine can truncate or rename the file while the other is still writing, which leads to corrupted output or a spurious failure for one of the downloads; this is reproducible with inputs like https://a.example/file.zip and https://b.example/file.zip in the same run.
app/download.go (Outdated)

    basename := filepath.Base(url)
    r.filename = basename
Derive fallback filename from URL path only
Using filepath.Base(url) on the full URL string keeps query text in the fallback filename (for example, signed URLs become names like file.zip?X-Amz-...). That creates invalid filenames on Windows (? is not allowed) and causes otherwise valid downloads to fail at file creation time when Content-Disposition is absent, which is common for object-storage links.
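A sketch of the fix: parse the URL and take the basename of the path component only, so query strings never leak into the filename. The `"download"` default is an assumption, not the PR's behavior.

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// fallbackFilename derives a name from the URL's path component,
// dropping the query string (e.g. signed-URL tokens).
func fallbackFilename(rawURL string) string {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "download"
	}
	name := path.Base(u.Path)
	if name == "/" || name == "." {
		return "download" // no usable path element
	}
	return name
}

func main() {
	fmt.Println(fallbackFilename("https://a.example/file.zip?X-Amz-Signature=abc"))
}
```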
@codex review
💡 Codex Review
Reviewed commit: 713473fda6
app/download.go (Outdated)

    }

    func (c *CLIApplication) downloadSingle(r *resource, outputPath, partPath string, pd *progressDisplay) error {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
Remove fixed 30-minute deadline from download requests
Using a hard 30*time.Minute context timeout here causes valid downloads to fail whenever transfer time exceeds 30 minutes (for example, large files or low -limit settings), even if data is still flowing. The chunked path has the same issue because fetch also caps computed timeouts at 30 minutes, so long-running downloads are systematically aborted with deadline errors.
app/helpers.go (Outdated)

    if count > 0 {
        ext := filepath.Ext(r.filename)
        base := strings.TrimSuffix(r.filename, ext)
        r.filename = fmt.Sprintf("%s_%d%s", base, count, ext)
Ensure deduplicated filenames stay globally unique
This rename rule can still generate collisions when an original filename already contains a generated suffix (e.g., file.zip, file_1.zip, file.zip ends with two file_1.zip entries). That makes multiple downloads target the same output and .part file, which can overwrite or corrupt artifacts. The deduplication step needs to verify each generated name against all previously assigned names, not just duplicates of the same source name.
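The global check the review asks for can be sketched by tracking every assigned name in a set and probing suffixes until a free one is found; the helper name and set representation are assumptions.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// uniqueName appends _1, _2, ... until the candidate is absent from the
// set of already-assigned names, so inputs that already contain a
// generated suffix (file_1.zip) cannot collide.
func uniqueName(name string, taken map[string]bool) string {
	if !taken[name] {
		taken[name] = true
		return name
	}
	ext := filepath.Ext(name)
	base := strings.TrimSuffix(name, ext)
	for i := 1; ; i++ {
		cand := fmt.Sprintf("%s_%d%s", base, i, ext)
		if !taken[cand] {
			taken[cand] = true
			return cand
		}
	}
}

func main() {
	taken := map[string]bool{}
	for _, n := range []string{"file.zip", "file_1.zip", "file.zip"} {
		fmt.Println(uniqueName(n, taken))
	}
}
```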
@codex review
💡 Codex Review
Reviewed commit: 684edf19d9
app/download.go (Outdated)

    timeout := 30 * time.Second
    if c.limiter != nil && c.limiter.rate > 0 {
        expected := time.Duration(chunkBytes/c.limiter.rate+1) * time.Second
Remove fixed 30s timeout for unthrottled chunk fetches
This code keeps a hard 30s timeout whenever no rate limiter is configured, so large chunks on normal-but-slower links will fail with context deadline exceeded even though the transfer is healthy. With the default 5-way split, each chunk can be hundreds of MB, which commonly exceeds 30s in production networks; this causes chunked downloads to abort unnecessarily.
app/download.go (Outdated)

    reader = &countingReader{reader: reader, counter: downloaded}

    b, err := io.ReadAll(reader)
Stream chunk data directly to disk
Reading each chunk with io.ReadAll buffers the full chunk in memory before writing, so memory usage scales with file size and parallelism. For large files, each goroutine can allocate gigabytes (default chunking is 5 concurrent parts), which can trigger OOMs or severe GC pressure and break downloads under realistic workloads.
    if len(c.URLS) == 0 {
        return errEmptyPipe
    }
Propagate scanner errors when parsing piped URLs
After scanning stdin, the function checks only whether any URLs were collected and never inspects scanner.Err(), so read failures (including token-too-long cases) are silently dropped. In those cases the command can proceed with an incomplete URL set or misreport empty pipe, making failures hard to detect and causing missing downloads.
- Get terminal width via ioctl on unix, fallback to 80 on windows
- Truncate long filenames with ellipsis to prevent line wrapping
- Preserve right-aligned filename formatting within available space

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Thread context.Context through entire download pipeline for cancellation
- Add signal.NotifyContext for graceful Ctrl+C handling
- Cancel remaining chunks immediately when one chunk fails
- Report size as 0 for failed downloads (fixes disk space accounting)
- Start progress display before launching download goroutines
- Validate chunk count (1-64) to prevent goroutine explosion
- Check existing files on disk in deduplicateFilenames to prevent overwrites
- Add overflow guard in diskspace_unix.go before uint64 multiplication

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@codex review
💡 Codex Review
Reviewed commit: c360f69f22
app/download.go (Outdated)

    pd.add(r.filename, &downloaded, r.length)

    f, err := os.OpenFile(partPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, permFile)
Preserve existing .part data in chunked downloads
Chunked mode always opens the part file with os.O_TRUNC, so any interrupted download that previously wrote <name>.part is reset to zero bytes on restart. Because most servers that support ranges go through this branch, resume effectively does not work for the common case and large downloads restart from scratch after interruption. The chunked path should retain existing partial data and only fetch missing ranges instead of unconditionally truncating.
app/download.go (Outdated)

    if fetchErr != nil {
        slog.Error("download aborted due to chunk failure", logKeyURL, r.url, logKeyError, fetchErr)
        _ = os.Remove(partPath)
Retry single-stream when chunked transfer aborts
After any chunk error, the code deletes the part file and returns failure without attempting a non-range GET. This makes downloads fail for servers/CDNs that advertise Accept-Ranges on HEAD but do not honor range GET requests consistently, even though a regular single-stream download would succeed. Add a fallback to downloadSingle before returning so these resources remain downloadable.
- Extract downloadChunked method from download for clarity
- Fall back to single-stream download when chunked transfer fails (handles servers that advertise Accept-Ranges but reject range GETs)
- Only truncate .part file when size doesn't match expected length, preserving previously downloaded data on restart
- Add test for chunked-to-single fallback scenario

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix duplicate progress entry on chunked-to-single fallback by managing the progress counter in download() and passing it to sub-methods
- Fix fetchToFile accepting HTTP 200 for range requests which would write entire body at wrong offset, corrupting the file (now only accepts 206)
- Fix concurrent WriteAt on shared fd (unsafe on Windows) by opening a separate file descriptor per chunk goroutine
- Remove unused progressReader type (dead code)
- Fix test server to handle open-ended range headers (bytes=N-)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…constants

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Accept-Ranges

Test plan
- make test — all 35 tests pass with race detector
- make lint — 0 lint issues
- leech -verbose -limit 1M https://example.com/large-file.zip
- cat urls.txt | leech

🤖 Generated with Claude Code