High-performance parallel file downloader to cloud storage.
- Parallel chunked downloads using HTTP range requests
- Direct streaming to cloud storage (no local disk required)
- Automatic resume for interrupted downloads
- Storage-agnostic via gocloud.dev/blob (GCS, S3, Azure, etc.)
- Progress reporting
- Configurable chunk size and worker count
go install github.com/username/slurp/cmd/slurp@latest

# Download a large file to GCS
slurp --url "https://example.com/file.tar.gz" \
--bucket "gs://my-bucket" \
--object "downloads/file.tar.gz"
# Resume automatically if interrupted
slurp --url "https://example.com/file.tar.gz" \
--bucket "gs://my-bucket" \
--object "downloads/file.tar.gz"

- HEAD request to get file size and check range request support
- Split file into chunks (default 256 MB)
- Download chunks in parallel using worker goroutines
- Stream each chunk directly to cloud storage
- Write manifest on completion for later retrieval
Use the sharded package to read chunked files back:
f, err := sharded.Read(ctx, "gs://my-bucket", "downloads/file.tar.gz")
if err != nil {
    log.Fatal(err)
}
defer f.Close()
io.Copy(dst, f) // Streams all chunks in order

MIT