This suite helps you identify optimal S3 performance configurations by testing various object sizes and concurrency levels, then visualizing the results.
- Warp binary: Download from github.com/minio/warp

  ```bash
  # macOS example
  wget https://github.com/minio/warp/releases/latest/download/warp_darwin_amd64 -O warp
  chmod +x warp
  ```

- Python 3 with matplotlib and numpy:

  ```bash
  pip3 install matplotlib numpy
  ```

- zstd (for decompressing results):

  ```bash
  brew install zstd
  ```

- AWS credentials: set environment variables:

  ```bash
  export AWS_ACCESS_KEY_ID="your-access-key"
  export AWS_SECRET_ACCESS_KEY="your-secret-key"
  ```
Edit `run_warp.sh` to configure:

- `HOST`: Your S3 endpoint
- `BUCKET`: Bucket name for testing
- `DUR`: Test duration per configuration (default: `1m`)
- `SIZES`: Array of object sizes to test
- `CONCURRENCIES`: Array of concurrency levels to test
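For reference, the configurable block might look something like the following sketch; the endpoint, bucket, and the exact size/concurrency values here are illustrative placeholders, not necessarily what ships in the script:

```bash
# Example configuration block (values are placeholders; adjust to your setup)
HOST="s3.example.com:9000"   # S3 endpoint to benchmark
BUCKET="warp-benchmark"      # bucket used for test objects
DUR="1m"                     # duration of each individual test

# Object sizes and concurrency levels; every combination is tested
SIZES=(1KiB 10KiB 100KiB 1MiB 4MiB 16MiB 32MiB 64MiB 96MiB 128MiB)
CONCURRENCIES=(1 8 32 64 128 256 512 2048)
```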
Run the suite:

```bash
./run_warp.sh
```

This will:

- Test every combination of object sizes (1KiB to 128MiB) and concurrency levels (1 to 2048); the driver loop is sketched after this list
- Save results to a timestamped directory: `results_YYYYMMDD_HHMMSS/`
- Create `.log` files for each test configuration
- Show progress: `[current/total] Testing: Size=4MiB, Concurrency=256`
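Under the hood, the driver is essentially a nested loop over the two arrays. A minimal sketch, assuming warp's standard `put` flags (`--host`, `--bucket`, `--access-key`, `--secret-key`, `--duration`, `--obj.size`, `--concurrent`); the log-file naming here is illustrative, not necessarily what `run_warp.sh` uses:

```bash
# Minimal sketch of the test loop (error handling omitted)
total=$(( ${#SIZES[@]} * ${#CONCURRENCIES[@]} ))
i=0
outdir="results_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$outdir"

for size in "${SIZES[@]}"; do
  for conc in "${CONCURRENCIES[@]}"; do
    i=$(( i + 1 ))
    echo "[$i/$total] Testing: Size=$size, Concurrency=$conc"
    ./warp put \
      --host="$HOST" \
      --bucket="$BUCKET" \
      --access-key="$AWS_ACCESS_KEY_ID" \
      --secret-key="$AWS_SECRET_ACCESS_KEY" \
      --duration="$DUR" \
      --obj.size="$size" \
      --concurrent="$conc" \
      > "$outdir/put_${size}_c${conc}.log" 2>&1
  done
done
```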
Note: the full test suite runs 10 sizes × 8 concurrency levels = 80 tests.

- At 1 minute per test, that is 80 minutes of pure test time, or roughly 1.5 hours total once warp's per-test setup and teardown is included
- Consider reducing the test duration or the number of combinations for faster results; a quick runtime estimate is sketched below
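Before committing to a matrix, you can estimate the floor of the total runtime; a sketch using the smaller matrix shown next:

```bash
# Rough runtime floor: number of tests × duration per test.
# Warp's per-test setup/teardown adds real overhead on top of this.
SIZES=(1KiB 256KiB 4MiB 64MiB)
CONCURRENCIES=(1 64 512 2048)
DUR_SECONDS=30
tests=$(( ${#SIZES[@]} * ${#CONCURRENCIES[@]} ))
echo "$tests tests, at least $(( tests * DUR_SECONDS / 60 )) minutes"
```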
To run a faster test, edit `run_warp.sh`:

```bash
# Smaller test matrix
SIZES=(1KiB 256KiB 4MiB 64MiB)
CONCURRENCIES=(1 64 512 2048)
DUR="30s"
```

After the tests complete, run the analyzer:

```bash
python3 analyze_results.py results_YYYYMMDD_HHMMSS/
```

This generates:
- Charts (saved to `results_*/charts/`):
  - `throughput_heatmap.png`: Performance across all configurations
  - `throughput_by_size.png`: How throughput varies with object size
  - `throughput_by_concurrency.png`: How throughput scales with concurrency
  - `ops_by_size.png`: Operations-per-second analysis
  - `latency_analysis.png`: Latency patterns
  - `optimal_configurations.png`: Top 10 best configurations
- Summary report (`performance_summary.txt`):
  - Best overall configuration
  - Best configuration per object size
  - Performance breakdown analysis
  - Identification of performance degradation points
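To spot-check raw numbers without the Python step, you can grep warp's human-readable summaries directly; this assumes your warp version prints a `Throughput:` line in its report, so adjust the pattern if yours differs:

```bash
# Print the first throughput summary line from each test log.
for f in results_*/*.log; do
  printf '%s: ' "$f"
  grep -m1 'Throughput' "$f" || echo '(no summary line found)'
done
```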
The key metrics reported:

- Throughput (MB/s): Data transfer rate - higher is better
- Operations/sec: Number of operations completed - higher is better, and the more telling metric for small objects (1,000 ops/s on 1KiB objects is only about 1 MB/s of throughput)
- Latency (ms): Response time - lower is better
- Optimal Configuration: The size/concurrency combination with the highest throughput
- Performance Scaling: How performance improves as concurrency increases
- Breakdown Point: Where adding more concurrency starts to hurt performance
- Sweet Spots: Configurations that balance throughput and resource usage
- Small objects (< 1MiB): Higher concurrency often helps, and ops/sec matters more than raw throughput
- Large objects (> 10MiB): Throughput saturates at lower concurrency
- Breakdown: Usually occurs when the server or network becomes saturated; one way to detect it is sketched below
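To locate the breakdown point mechanically rather than by eyeballing the charts, something like the following works; it assumes you have exported your numbers to a hypothetical `results.csv` with columns `size,concurrency,throughput_mbs`, which the suite does not produce by itself:

```bash
# Flag the first concurrency step where throughput regresses for a size.
# Input: results.csv with columns size,concurrency,throughput_mbs
sort -t, -k1,1 -k2,2n results.csv | awk -F, '
  $1 == prev_size && $3 < prev_tput {
    printf "%s: throughput drops at concurrency %s (%.1f -> %.1f MB/s)\n",
           $1, $2, prev_tput, $3
  }
  { prev_size = $1; prev_tput = $3 }'
```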
End-to-end workflow:

```bash
# 1. Configure credentials
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."

# 2. Run tests (go get coffee ☕)
./run_warp.sh

# 3. Analyze results
python3 analyze_results.py results_20251111_143000/

# 4. View charts
open results_20251111_143000/charts/

# 5. Read summary
cat results_20251111_143000/charts/performance_summary.txt
```

The suite currently tests PUT operations. To test GET:
```bash
# In run_warp.sh, change:
./warp put \
# to:
./warp get \
```

Edit `analyze_results.py` to parse additional warp output fields.
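Note that warp's GET benchmark first uploads a working set of objects and then measures reads. A sketch of the swapped-in invocation; the `--objects` flag (which sizes that working set) should be verified against `./warp get --help` for your warp version:

```bash
# Hypothetical GET invocation inside the test loop; --objects controls
# how many objects warp uploads up front before measuring reads.
./warp get \
  --host="$HOST" \
  --bucket="$BUCKET" \
  --duration="$DUR" \
  --obj.size="$size" \
  --concurrent="$conc" \
  --objects=1000
```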
To test custom object sizes, adjust `SIZES`:

```bash
# For very small objects:
SIZES=(100B 1KiB 10KiB 100KiB)

# For very large objects:
SIZES=(10MiB 50MiB 100MiB 500MiB 1GiB)
```

Troubleshooting checklist:

- Check that `.log` files exist in the results directory
- Verify the warp binary is executable and in the correct location
- Check that AWS credentials are set
- Install missing Python packages: `pip3 install matplotlib numpy`
- Check the Python version with `python3 --version` (3.7+ is required)
- Check network connectivity to the S3 endpoint (a quick smoke test is sketched after this list)
- Verify the bucket exists, or uncomment the bucket-creation line in the script
- Check that the AWS credentials have the proper permissions
- Review individual test logs for specific errors
- Ensure no other heavy processes are running
- Check that network bandwidth isn't already saturated
- Verify the S3 service isn't rate-limiting requests
- Consider multiple test runs for consistency
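The smoke test referenced above: a quick sanity check before committing to a full run, assuming an HTTP endpoint and the same flags as the main loop:

```bash
# Is the endpoint reachable at all?
curl -sI "http://$HOST" | head -n1

# Can warp complete one tiny, short test?
./warp put \
  --host="$HOST" \
  --bucket="$BUCKET" \
  --access-key="$AWS_ACCESS_KEY_ID" \
  --secret-key="$AWS_SECRET_ACCESS_KEY" \
  --duration=10s \
  --obj.size=1KiB \
  --concurrent=1
```

General tips: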
- Start small: Run a quick test with fewer combinations first
- Monitor resources: Watch CPU, memory, network during tests
- Multiple runs: Run critical configurations multiple times for accuracy
- Document findings: Note any environmental factors affecting results
- Baseline comparison: Save results over time to track performance changes; one way to organize this is sketched below
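A minimal way to keep baselines, assuming you simply want to diff summary reports between runs (the file names here are hypothetical):

```bash
# Archive each run's summary under a dated name, then diff runs.
mkdir -p baselines
cp results_20251111_143000/charts/performance_summary.txt \
   "baselines/summary_$(date +%Y%m%d).txt"

# Later, compare two baselines:
diff baselines/summary_20251001.txt baselines/summary_20251111.txt
```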
This is a testing suite wrapper around MinIO warp. See MinIO warp license.