Description
As identified in the PR #32 review, we should add performance benchmarks to verify that the refactored architecture actually improves performance and to track it over time.
Proposed Benchmark Suite
1. Architecture Benchmarks
Compare the old monolithic client against the new mixin-based architecture (a pytest-benchmark sketch follows this list):
- Client initialization time
- Method resolution time
- Memory usage comparison
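A minimal sketch of how these could be written with pytest-benchmark. The `Client` constructor arguments and the `get_character_by_id` attribute name are assumptions and will need to match the real API:

```python
# benchmarks/test_architecture.py -- sketch only; names below are assumptions
from esologs import Client


def make_client():
    # Assumption: Client accepts an access_token keyword; adjust to the real signature.
    return Client(access_token="dummy-token")


def test_client_initialization(benchmark):
    # Time to construct a fully mixed-in client instance.
    benchmark(make_client)


def test_method_resolution(benchmark):
    # Time to resolve an attribute through the mixin MRO.
    # "get_character_by_id" is a placeholder; the default avoids AttributeError
    # if that name does not exist on the real client.
    benchmark(getattr, Client, "get_character_by_id", None)
```

Run with `pytest benchmarks/ --benchmark-only` once pytest-benchmark is installed; memory comparison is covered by the profiling section below.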
2. Import Time Benchmarks
```python
# Measure import times
import time

start = time.perf_counter()
from esologs import Client  # noqa: E402,F401 -- imported mid-script on purpose
end = time.perf_counter()
print(f'Import time: {end - start:.4f}s')
```
3. API Call Benchmarks
Measure request latency and throughput (a concurrency sketch follows this list):
- Single API call performance
- Concurrent API calls
- Large response handling
- Token refresh performance
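A rough sketch of the concurrent-call case, assuming the client exposes async methods; `get_rate_limit_data()` and the constructor arguments are placeholders to be replaced with real API calls:

```python
# Sketch of a concurrent API-call timing run; names are placeholders.
import asyncio
import time

from esologs import Client


async def run_concurrent_calls(client, n_calls=10):
    # Fire n_calls identical requests at once and wait for all of them.
    await asyncio.gather(*(client.get_rate_limit_data() for _ in range(n_calls)))


async def main():
    client = Client(access_token="dummy-token")  # assumed constructor signature
    start = time.perf_counter()
    await run_concurrent_calls(client)
    print(f"10 concurrent calls: {time.perf_counter() - start:.3f}s")


asyncio.run(main())
```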
4. Memory Usage Profiling
```python
import tracemalloc

tracemalloc.start()
# ... create the client and make API calls here ...
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f'Current memory: {current / 1024 / 1024:.2f} MB')
print(f'Peak memory: {peak / 1024 / 1024:.2f} MB')
```
Implementation Tools
- pytest-benchmark for benchmark tests
- memory_profiler for memory analysis (a short sketch follows this list)
- py-spy for profiling
- GitHub Actions integration for tracking over time
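For memory_profiler, one way it could be wired up; note that it samples whole-process memory, and the `Client` constructor arguments are again an assumption:

```python
# Sketch of memory_profiler usage; constructor arguments are assumptions.
from memory_profiler import memory_usage

from esologs import Client


def create_client():
    # The workload to profile: building a client instance.
    return Client(access_token="dummy-token")


# memory_usage samples process memory (MiB) while the callable runs.
samples = memory_usage((create_client, (), {}), interval=0.01)
print(f"Peak process memory during client creation: {max(samples):.2f} MiB")
```

Because this measures whole-process memory, the interpreter baseline should be subtracted before comparing against the 50 MB target below.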
Benchmark Scenarios
- Minimal usage (auth only)
- Single endpoint usage
- Full API usage
- Concurrent operations
- Large result set handling
Success Metrics
- Client initialization < 100ms
- Method resolution < 1ms
- Memory overhead < 50MB for basic usage
- No memory leaks during extended usage (an assertion sketch for these targets follows)
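These targets could be encoded directly as tests; a sketch using only the standard library, with the same assumed `Client` constructor as above:

```python
# Sketch of success-metric assertions; thresholds mirror the targets above.
import time
import tracemalloc

from esologs import Client


def test_initialization_under_100ms():
    start = time.perf_counter()
    Client(access_token="dummy-token")  # assumed constructor signature
    assert time.perf_counter() - start < 0.1


def test_basic_usage_memory_under_50mb():
    tracemalloc.start()
    Client(access_token="dummy-token")
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    assert peak < 50 * 1024 * 1024
```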
CI Integration
- Run benchmarks on PR submissions
- Compare against baseline
- Fail if regression > 10%
- Generate performance reports
References
- PR #32 (Release v0.2.0b1 - First Beta Release) review feedback
- pytest-benchmark documentation