feat: perf-test for python #813

Merged · Changes from all commits (5 commits)
README (new file, +43 lines):
````markdown
# AWS Encryption SDK Python Benchmark

Performance testing suite for the AWS Encryption SDK Python implementation.

## Quick Start

```bash
# Install dependencies
pip install -r requirements.txt

# Run benchmark
python esdk_benchmark.py

# Quick test (reduced iterations)
python esdk_benchmark.py --quick
```

## Options

- `--config` - Path to test configuration file (default: `../../config/test-scenarios.yaml`)
- `--output` - Path to output results file (default: `../../results/raw-data/python_results.json`)
- `--quick` - Run with reduced iterations for faster testing

## Configuration

Edit `../../config/test-scenarios.yaml` to set test parameters:

- Data sizes (small/medium/large)
- Iterations and concurrency levels

## Test Types

- **Throughput** - Measures encryption/decryption operations per second
- **Memory** - Tracks memory usage and allocations during operations
- **Concurrency** - Tests performance under concurrent load

## Output

Results are saved as JSON to `../../results/raw-data/python_results.json` with:

- Performance metrics (ops/sec, latency percentiles)
- Memory usage (peak, average, allocations, input-data-to-memory ratio)
- System information (CPU, memory, Python version)
````
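The Throughput test the README describes comes down to timing encrypt/decrypt round trips. A minimal sketch of one such round trip, assuming the ESDK v4 keyring API used in this PR (the key namespace/name and payload size below are illustrative, not the PR's values):

```python
import secrets
import time

from aws_cryptographic_material_providers.mpl import AwsCryptographicMaterialProviders
from aws_cryptographic_material_providers.mpl.config import MaterialProvidersConfig
from aws_cryptographic_material_providers.mpl.models import (
    AesWrappingAlg,
    CreateRawAesKeyringInput,
)
from aws_encryption_sdk import CommitmentPolicy, EncryptionSDKClient

# Raw AES keyring setup, mirroring the PR's benchmark module.
mat_prov = AwsCryptographicMaterialProviders(config=MaterialProvidersConfig())
keyring = mat_prov.create_raw_aes_keyring(
    input=CreateRawAesKeyringInput(
        key_namespace="example-namespace",       # illustrative
        key_name="example-key",                  # illustrative
        wrapping_key=secrets.token_bytes(32),
        wrapping_alg=AesWrappingAlg.ALG_AES256_GCM_IV12_TAG16,
    )
)
client = EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

plaintext = secrets.token_bytes(1024)  # "small" payload, illustrative

# Time one encrypt/decrypt round trip.
start = time.perf_counter()
ciphertext, _ = client.encrypt(source=plaintext, keyring=keyring)
recovered, _ = client.decrypt(source=ciphertext, keyring=keyring)
elapsed_ms = (time.perf_counter() - start) * 1000.0
print(f"round trip: {elapsed_ms:.2f} ms")
```

Repeating this over the configured warmup and measurement iterations, then dividing operation count by elapsed time, yields the ops/sec figures reported in the results.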
Core benchmark module (new file, +93 lines; imported as `benchmark` in the entry point below):
```python
#!/usr/bin/env python3
"""
Core benchmark module for ESDK Python benchmark
"""

import logging
import multiprocessing
import secrets
import sys

import psutil
from aws_cryptographic_material_providers.mpl import AwsCryptographicMaterialProviders
from aws_cryptographic_material_providers.mpl.config import MaterialProvidersConfig
from aws_cryptographic_material_providers.mpl.models import (
    AesWrappingAlg,
    CreateRawAesKeyringInput,
)
from aws_encryption_sdk import EncryptionSDKClient, CommitmentPolicy
from config import load_config


class ESDKBenchmark:
    """Main benchmark class for ESDK Python performance testing"""

    def __init__(self, config_path: str = "../../config/test-scenarios.yaml"):
        self.config = load_config(config_path)
        self.results = []

        self._setup_logging()
        self._setup_esdk()
        self._setup_system_info()

    def _setup_system_info(self):
        """Initialize system information"""
        self.cpu_count = multiprocessing.cpu_count()
        self.total_memory_gb = psutil.virtual_memory().total / (1024**3)

        self.logger.info(
            f"Initialized ESDK Benchmark - CPU cores: {self.cpu_count}, "
            f"Memory: {self.total_memory_gb:.1f}GB"
        )

    def _setup_logging(self):
        """Set up logging configuration"""
        logging.basicConfig(
            level=logging.INFO,
            format="%(message)s",
            handlers=[logging.StreamHandler(sys.stdout)],
        )
        # Suppress AWS SDK logging
        logging.getLogger("aws_encryption_sdk").setLevel(logging.WARNING)
        logging.getLogger("botocore").setLevel(logging.WARNING)
        logging.getLogger("boto3").setLevel(logging.WARNING)

        self.logger = logging.getLogger(__name__)

    def _setup_esdk(self):
        """Initialize ESDK client and raw AES keyring"""
        try:
            self.keyring = self._create_keyring()
            self.esdk_client = self._create_client()
            self.logger.info("ESDK client initialized successfully")
        except Exception as e:
            self.logger.error(f"Failed to initialize ESDK: {e}")
            raise

    def _create_keyring(self):
        """Create raw AES keyring"""
        static_key = secrets.token_bytes(32)
        mat_prov = AwsCryptographicMaterialProviders(config=MaterialProvidersConfig())

        keyring_input = CreateRawAesKeyringInput(
            key_namespace="esdk-performance-test",
            key_name="test-aes-256-key",
            wrapping_key=static_key,
            wrapping_alg=AesWrappingAlg.ALG_AES256_GCM_IV12_TAG16,
        )

        return mat_prov.create_raw_aes_keyring(input=keyring_input)

    def _create_client(self):
        """Create ESDK client"""
        return EncryptionSDKClient(
            commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
        )

    def should_run_test_type(self, test_type: str, is_quick_mode: bool = False) -> bool:
        """Determine if a test type should be run based on configuration"""
        if is_quick_mode:
            quick_config = self.config.get("quick_config")
            if quick_config and "test_types" in quick_config:
                return test_type in quick_config["test_types"]
        return True
```
Configuration module (new file, +17 lines; imported as `config`):
```python
#!/usr/bin/env python3
"""
Configuration module for ESDK Python benchmark
"""

import yaml


def load_config(config_path: str):
    """Load test configuration from YAML file"""
    try:
        with open(config_path, "r") as f:
            return yaml.safe_load(f)
    except FileNotFoundError:
        raise FileNotFoundError(f"Config file not found: {config_path}")
    except Exception as e:
        raise RuntimeError(f"Failed to parse config file: {e}")
```
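The `test-scenarios.yaml` file itself is not part of this diff. Inferred from the keys the benchmark code reads, the parsed configuration would look roughly like the following dict; every value here is a made-up placeholder:

```python
# Hypothetical shape of load_config("../../config/test-scenarios.yaml").
# Keys come from what benchmark.py and the entry point read;
# the numeric values are invented for illustration.
config = {
    "iterations": {"warmup": 5, "measurement": 100},
    "data_sizes": {
        "small": [1024, 10240],        # bytes
        "medium": [1048576],
        "large": [104857600],
    },
    "concurrency_levels": [1, 4, 16],
    "quick_config": {                  # consulted only with --quick
        "iterations": {"warmup": 1, "measurement": 10},
        "data_sizes": {"small": [1024]},
        "concurrency_levels": [1],
        "test_types": ["throughput"],  # checked by should_run_test_type
    },
}
```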
`esdk-performance-testing/benchmarks/python/esdk_benchmark.py` (new file, +89 lines):
```python
#!/usr/bin/env python3
"""
ESDK Performance Benchmark Suite - Python Implementation

This module provides comprehensive performance testing for the AWS Encryption SDK (ESDK)
Python runtime, measuring throughput, latency, memory usage, and scalability.
"""

import argparse
import sys

from benchmark import ESDKBenchmark
from tests import run_all_benchmarks


def main():
    """Main entry point for the benchmark suite"""
    args = _parse_arguments()

    try:
        benchmark = ESDKBenchmark(config_path=args.config)

        if args.quick:
            _adjust_config_for_quick_mode(benchmark)

        results = run_all_benchmarks(benchmark, is_quick_mode=args.quick)

        _save_and_summarize_results(results, args.output)

    except Exception as e:
        print(f"Benchmark failed: {e}")
        sys.exit(1)


def _parse_arguments():
    """Parse command line arguments"""
    parser = argparse.ArgumentParser(description="ESDK Python Performance Benchmark")
    parser.add_argument(
        "--config",
        default="../../config/test-scenarios.yaml",
        help="Path to test configuration file",
    )
    parser.add_argument(
        "--output",
        default="../../results/raw-data/python_results.json",
        help="Path to output results file",
    )
    parser.add_argument(
        "--quick", action="store_true", help="Run quick test with reduced iterations"
    )
    return parser.parse_args()


def _adjust_config_for_quick_mode(benchmark):
    """Adjust benchmark configuration for quick mode"""
    quick_config = benchmark.config.get("quick_config")
    if not quick_config:
        raise RuntimeError(
            "Quick mode requested but no quick_config found in config file"
        )

    benchmark.config["iterations"]["measurement"] = quick_config["iterations"][
        "measurement"
    ]
    benchmark.config["iterations"]["warmup"] = quick_config["iterations"]["warmup"]
    benchmark.config["data_sizes"]["small"] = quick_config["data_sizes"]["small"]
    benchmark.config["data_sizes"]["medium"] = []
    benchmark.config["data_sizes"]["large"] = []
    benchmark.config["concurrency_levels"] = quick_config["concurrency_levels"]


def _save_and_summarize_results(results, output_path):
    """Save results and print summary"""
    from results import save_results

    save_results(results, output_path)

    print("\n=== ESDK Python Benchmark Summary ===")
    print(f"Total tests completed: {len(results)}")
    print(f"Results saved to: {output_path}")

    if results:
        throughput_results = [r for r in results if r.test_name == "throughput"]
        if throughput_results:
            max_throughput = max(r.ops_per_second for r in throughput_results)
            print(f"Maximum throughput: {max_throughput:.2f} ops/sec")


if __name__ == "__main__":
    main()
```
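The `tests` module that supplies `run_all_benchmarks` is not among the files shown in this diff. Based on how it is called above and on `should_run_test_type`, its driver loop plausibly has a shape like this; a hypothetical sketch, not the PR's code:

```python
# Hypothetical sketch of tests.run_all_benchmarks; the real tests module
# is not part of the diff shown here.
from benchmark import ESDKBenchmark
from results import BenchmarkResult

TEST_TYPES = ("throughput", "memory", "concurrency")


def run_all_benchmarks(benchmark: ESDKBenchmark, is_quick_mode: bool = False):
    results: list[BenchmarkResult] = []
    for test_type in TEST_TYPES:
        # Quick mode can restrict the set of test types via quick_config.
        if not benchmark.should_run_test_type(test_type, is_quick_mode):
            continue
        for size_class, sizes in benchmark.config["data_sizes"].items():
            for data_size in sizes:  # medium/large are emptied in quick mode
                # ... run warmup + measurement iterations using
                # benchmark.esdk_client / benchmark.keyring and append
                # one BenchmarkResult per scenario ...
                pass
    return results
```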
`esdk-performance-testing/benchmarks/python/requirements.txt` (new file, +15 lines):
```text
# ESDK Performance Testing - Python Dependencies

# Core dependencies
pyyaml>=6.0
psutil>=5.9.0

# Performance measurement
memory-profiler>=0.61.0

# Progress and logging
tqdm>=4.65.0

# AWS and ESDK dependencies
aws-encryption-sdk>=4.0.1
aws-cryptographic-material-providers>=1.11.0
```
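The README's Memory test tracks peak usage and allocations, and `psutil` and `memory-profiler` above support process-level measurement. One way to capture the Python-level allocation peak for a single operation, using only the stdlib `tracemalloc` (a sketch; the PR's actual memory test lives in the unshown tests module):

```python
# Sketch: measure peak Python allocations for one callable via tracemalloc.
import secrets
import tracemalloc


def peak_memory_mb(operation, *args, **kwargs):
    """Run `operation` and return (result, peak Python allocations in MB)."""
    tracemalloc.start()
    try:
        result = operation(*args, **kwargs)
        _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    finally:
        tracemalloc.stop()
    return result, peak / (1024**2)


# Illustrative: a dummy operation that allocates roughly 1 MiB.
_, peak_mb = peak_memory_mb(secrets.token_bytes, 1024 * 1024)
print(f"peak: {peak_mb:.2f} MB")
```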
Results module (new file, +90 lines; imported as `results`):
```python
#!/usr/bin/env python3
"""
Results module for ESDK Python benchmark
"""

import json
import multiprocessing
import sys
import time
from dataclasses import dataclass, asdict
from pathlib import Path
from typing import List

import psutil


@dataclass
class BenchmarkResult:
    """Container for benchmark results"""

    test_name: str
    language: str = "python"
    data_size: int = 0
    concurrency: int = 1
    encrypt_latency_ms: float = 0.0
    decrypt_latency_ms: float = 0.0
    end_to_end_latency_ms: float = 0.0
    ops_per_second: float = 0.0
    bytes_per_second: float = 0.0
    peak_memory_mb: float = 0.0
    memory_efficiency_ratio: float = 0.0
    p50_latency: float = 0.0
    p95_latency: float = 0.0
    p99_latency: float = 0.0
    timestamp: str = ""
    python_version: str = ""
    cpu_count: int = 0
    total_memory_gb: float = 0.0

    def __post_init__(self):
        self.timestamp = self.timestamp or time.strftime("%Y-%m-%d %H:%M:%S")
        self.python_version = self.python_version or self._get_python_version()
        self.cpu_count = self.cpu_count or multiprocessing.cpu_count()
        self.total_memory_gb = self.total_memory_gb or self._get_total_memory()

    def _get_python_version(self):
        """Get Python version string"""
        return (
            f"{sys.version_info.major}.{sys.version_info.minor}."
            f"{sys.version_info.micro}"
        )

    def _get_total_memory(self):
        """Get total system memory in GB"""
        return psutil.virtual_memory().total / (1024**3)


def save_results(results: List[BenchmarkResult], output_path: str):
    """Save benchmark results to JSON file"""
    output_file = Path(output_path)
    output_file.parent.mkdir(parents=True, exist_ok=True)

    metadata = _create_metadata(results)
    results_data = {
        "metadata": metadata,
        "results": [asdict(result) for result in results],
    }

    with open(output_file, "w") as f:
        json.dump(results_data, f, indent=2)


def _create_metadata(results: List[BenchmarkResult]):
    """Create metadata for results file"""
    metadata = {
        "language": "python",
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "total_tests": len(results),
    }

    if results:
        metadata.update(
            {
                "python_version": results[0].python_version,
                "cpu_count": results[0].cpu_count,
                "total_memory_gb": results[0].total_memory_gb,
            }
        )

    return metadata
```
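Putting the pieces together, latency samples collected during a run would be reduced to the percentile fields above and persisted via `save_results`. A minimal sketch; the sample data and percentile math here are illustrative, not the PR's code:

```python
# Illustrative only: reduce raw latency samples to a BenchmarkResult and save it.
import statistics

from results import BenchmarkResult, save_results

latencies_ms = [1.2, 1.3, 1.4, 1.5, 2.8]  # made-up per-operation samples
cuts = statistics.quantiles(latencies_ms, n=100)  # 99 percentile cut points

result = BenchmarkResult(
    test_name="throughput",
    data_size=1024,
    ops_per_second=1000.0 / statistics.mean(latencies_ms),
    p50_latency=statistics.median(latencies_ms),
    p95_latency=cuts[94],  # 95th percentile
    p99_latency=cuts[98],  # 99th percentile
)
save_results([result], "../../results/raw-data/python_results.json")
```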
Review comment: Do we need all these dependencies? A fair amount seem unused.

Reply: I was relying on the linter and formatter to fix this, but apparently it didn't. Fixed it now.