A comprehensive AWS CDK construct library for deploying and running distributed K6 load tests on AWS. It provides automated infrastructure provisioning, distributed test execution, real-time monitoring, and end-to-end observability for performance testing at scale.
- Infrastructure as Code: Complete AWS infrastructure provisioning using CDK
- Distributed Load Testing: Run K6 tests across multiple instances for scalable performance testing
- Real-time Monitoring: CloudWatch dashboards with live metrics and performance insights
- Auto-scaling: Dynamic EC2 instance scaling based on test parallelism requirements
- OpenTelemetry Integration: Comprehensive observability with the OpenTelemetry Collector
- Secure VCS Integration: Support for GitHub and GitLab with secure token-based authentication
- ARM64 Optimized: Cost-effective ARM-based EC2 instances for better price-performance
- Configurable Parameters: Flexible configuration for VUs, duration, parallelism, and more
The solution consists of several key components:
- Load Test Infrastructure: ECS cluster with auto-scaling EC2 instances
- K6 Container: Multi-container task with init, K6 execution, and observability
- Executor: Step Functions-based orchestration with support for distributed execution
- Dashboard: Real-time CloudWatch dashboard for monitoring test progress
- Auto Scaler: Dynamic scaling of compute resources based on test requirements
- AWS CLI configured with appropriate permissions
- Node.js 18+ and npm/yarn
- Docker
- Add the dependency:

```bash
npm install @pacovk/k6-executor-cluster
```

or, when using yarn:

```bash
yarn add @pacovk/k6-executor-cluster
```
- Configure your load test (see the example below):
```typescript
new K6LoadTest(app, "K6LoadTest", {
  loadTestConfig: {
    serviceName: "my-app",
    // ... other config
  },
  infrastructureConfig: {
    // ... infrastructure config
  },
});
```
- Deploy the stack:

```bash
npx cdk deploy <your-loadtest-stack>
```
Configure your load test parameters by modifying `bin/loadTest.ts`:
```typescript
const loadTestConfig: LoadTestConfig = {
  serviceName: "my-app", // Service name for metrics
  image: ContainerImage.fromRegistry("grafana/k6"), // K6 Docker image
  entrypoint: "tests/loadtest.ts", // Path to your test script
  vus: 10, // Number of virtual users
  duration: "5m", // Test duration
  parallelism: 2, // Number of parallel executors
  repository: {
    httpsCloneUrl: "https://github.com/user/repo.git",
    accessTokenSecretName: "github-token", // AWS SSM parameter name
    vcsProvider: VcsProvider.GITHUB, // or VcsProvider.GITLAB
  },
  secrets: {
    // Optional: Additional secrets
    API_KEY: "/path/to/ssm/parameter",
  },
  environmentVars: {
    // Optional: Environment variables
    BASE_URL: "https://api.example.com",
  },
  extraArgs: ["--quiet"], // Optional: Any additional K6 arguments
};
```
```typescript
const infrastructureConfig: InfrastructureConfig = {
  instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.MEDIUM),
  timeout: Duration.minutes(30), // Maximum test execution time
  memoryReservationMiB: 1024, // Initial memory reservation for the K6 container
  otelVersion: "0.123.0", // OpenTelemetry Collector version
  vpc: undefined, // Optional: Use existing VPC
};
```
Override configuration at deployment time using CDK context:
```bash
# Deploy with custom parameters
cdk deploy -c vus=50 -c duration=10m -c parallelism=5
```

Or set defaults in `cdk.json`:

```json
{
  "context": {
    "vus": 50,
    "duration": "10m",
    "parallelism": 5
  }
}
```
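If you wire these context values into the configuration yourself in `bin/loadTest.ts`, a minimal sketch using the standard CDK `tryGetContext` lookup could look like the following (the fallback values are illustrative, and the provided `bin/loadTest.ts` may already contain equivalent logic):

```typescript
// bin/loadTest.ts (excerpt) — illustrative sketch of reading CDK context values
import { App } from "aws-cdk-lib";

const app = new App();

const vus = Number(app.node.tryGetContext("vus") ?? 10);
const duration = app.node.tryGetContext("duration") ?? "5m";
const parallelism = Number(app.node.tryGetContext("parallelism") ?? 2);
```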
- Create an access token in your VCS provider (GitHub/GitLab)
- Store the token in AWS SSM Parameter Store:
```bash
aws ssm put-parameter \
  --name "/loadtest/github-token" \
  --value "your-token-here" \
  --type "SecureString"
```
Store any additional secrets your tests need:
```bash
aws ssm put-parameter \
  --name "/loadtest/api-key" \
  --value "your-api-key" \
  --type "SecureString"
```
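Assuming the entries in the `secrets` map are exposed to the K6 container as environment variables (an assumption based on the configuration key names, not confirmed by this README), a test could consume them like this:

```typescript
// Hypothetical usage inside a K6 script; assumes secrets are injected as environment variables
import http from "k6/http";

export default function () {
  http.get(`${__ENV.BASE_URL}/orders`, {
    headers: { Authorization: `Bearer ${__ENV.API_KEY}` },
  });
}
```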
Deploy into an existing VPC:
```typescript
import { Vpc } from "aws-cdk-lib/aws-ec2";

const vpc = Vpc.fromLookup(this, "ExistingVpc", {
  vpcId: "vpc-12345678",
});

const infrastructureConfig: InfrastructureConfig = {
  // ... other config
  vpc: vpc,
};
```
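Note that `Vpc.fromLookup` only works in a stack whose account and region are known at synth time. A generic CDK sketch of that requirement (account, region, and stack name are placeholders; where the lookup lives depends on how you structure your app):

```typescript
import { App, Stack } from "aws-cdk-lib";
import { Vpc } from "aws-cdk-lib/aws-ec2";

const app = new App();

// fromLookup requires an explicitly resolved environment
const stack = new Stack(app, "LoadTestStack", {
  env: { account: "123456789012", region: "eu-central-1" },
});

const vpc = Vpc.fromLookup(stack, "ExistingVpc", { vpcId: "vpc-12345678" });
```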
The solution automatically creates a comprehensive CloudWatch dashboard with:
- Performance Overview: VUs, response times, request rates
- HTTP Metrics: Request count, failure rates, duration statistics
- Transfer Rates: Data sent/received over time
- Iteration Metrics: Test iteration performance
All metrics are published to CloudWatch under the `LOADTEST/K6` namespace:

- `vus`: Number of virtual users
- `http_req_duration`: HTTP request duration
- `http_reqs`: Number of HTTP requests
- `data_received` / `data_sent`: Data transfer metrics
- `iteration_duration`: Test iteration timing
- `FailedRequests`: Count of failed requests
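Because these are regular CloudWatch metrics, you can build additional widgets or alarms on top of them. A small sketch (threshold and period are illustrative) that alarms on failed requests:

```typescript
import { Duration } from "aws-cdk-lib";
import { Alarm, Metric } from "aws-cdk-lib/aws-cloudwatch";

// Namespace and metric name as published by the load test (see list above);
// add dimensionsMap if the metrics are published with dimensions (e.g. the service name)
const failedRequests = new Metric({
  namespace: "LOADTEST/K6",
  metricName: "FailedRequests",
  statistic: "Sum",
  period: Duration.minutes(1),
});

new Alarm(this, "FailedRequestsAlarm", {
  metric: failedRequests,
  threshold: 10, // illustrative threshold
  evaluationPeriods: 1,
});
```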
The load test execution follows this workflow:
- Infrastructure Scaling: Auto Scaling Group scales out to required capacity
- Container Initialization: Init container clones your test repository
- Distributed Execution: Multiple K6 containers execute tests in parallel
- Metrics Collection: OpenTelemetry Collector aggregates and exports metrics
- Infrastructure Cleanup: Auto Scaling Group scales down to zero after completion
- Use ARM-based instances for performance and cost efficiency
- Set realistic timeouts to prevent runaway executions
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.
- K6 - Load testing framework
- AWS CDK - Infrastructure as Code
- OpenTelemetry - Observability framework
For questions and support:
- Check the issues page for known problems
- Create a new issue for bugs or feature requests
- Review the AWS CDK and K6 documentation for framework-specific questions
Author: pascal.euhus