feat(metrics): use async local storage for metrics #4663
Conversation
Very nice work here. I haven't tested it and I won't be able to for a while, but the implementation makes a lot of sense. Please, if possible, run some tests on your machine with the correct setup, as we discussed.
Other than that, do you have any idea whether there's a significant performance impact for this new implementation when used in an on-demand Lambda?
Yes, I plan to do some perf testing (vs. the original version) in a real Lambda function on Monday.
So I did some performance tests, just a very simple Lambda that adds a few metrics:

```ts
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { Metrics } from '@aws-lambda-powertools/metrics';

const metrics = new Metrics({ namespace: 'PerfTest', serviceName: 'lambda-perf' });

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // Add dimensions
  metrics.addDimension('Environment', 'test');
  metrics.addDimension('Region', process.env.AWS_REGION || 'us-east-1');
  metrics.addDimension('Version', '1.0.0');

  // Add metadata
  metrics.addMetadata('requestId', event.requestContext?.requestId || 'unknown');
  metrics.addMetadata('userAgent', event.headers?.['User-Agent'] || 'unknown');
  metrics.addMetadata('sourceIp', event.requestContext?.identity?.sourceIp || 'unknown');

  // Emit 20 different metrics
  metrics.addMetric('RequestCount', 'Count', 1);
  metrics.addMetric('ProcessingTime', 'Milliseconds', Math.random() * 100);
  metrics.addMetric('MemoryUsed', 'Bytes', Math.random() * 1000000);
  metrics.addMetric('CpuUtilization', 'Percent', Math.random() * 100);
  metrics.addMetric('DatabaseConnections', 'Count', Math.floor(Math.random() * 10));
  metrics.addMetric('CacheHits', 'Count', Math.floor(Math.random() * 50));
  metrics.addMetric('CacheMisses', 'Count', Math.floor(Math.random() * 10));
  metrics.addMetric('ErrorRate', 'Percent', Math.random() * 5);
  metrics.addMetric('Throughput', 'Count/Second', Math.random() * 1000);
  metrics.addMetric('Latency', 'Milliseconds', Math.random() * 200);
  metrics.addMetric('NetworkIO', 'Bytes', Math.random() * 50000);
  metrics.addMetric('DiskIO', 'Bytes', Math.random() * 100000);
  metrics.addMetric('ActiveUsers', 'Count', Math.floor(Math.random() * 500));
  metrics.addMetric('QueueDepth', 'Count', Math.floor(Math.random() * 20));
  metrics.addMetric('ResponseSize', 'Bytes', Math.random() * 5000);
  metrics.addMetric('RequestSize', 'Bytes', Math.random() * 2000);
  metrics.addMetric('ConcurrentExecutions', 'Count', Math.floor(Math.random() * 100));
  metrics.addMetric('BusinessMetric1', 'Count', Math.floor(Math.random() * 1000));
  metrics.addMetric('BusinessMetric2', 'Count', Math.floor(Math.random() * 500));
  metrics.addMetric('CustomCounter', 'Count', Math.floor(Math.random() * 250));

  metrics.publishStoredMetrics();

  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      message: 'Perf test with 20 metrics emitted',
      timestamp: new Date().toISOString(),
      metricsEmitted: 20
    })
  };
};
```

I just invoked the Lambda directly using the SDK. No noticeable performance impact.
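For context, a minimal sketch of how such an invocation loop could look with the AWS SDK for JavaScript v3 (the function name, payload, and invocation count below are placeholders I'm assuming, not taken from this PR):

```ts
// Hypothetical driver for the test above (not part of the PR): invokes the
// function a number of times and reports the average round-trip latency.
import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

const client = new LambdaClient({});

const main = async (): Promise<void> => {
  const invocations = 50;
  let totalMs = 0;

  for (let i = 0; i < invocations; i++) {
    const start = performance.now();
    await client.send(
      new InvokeCommand({
        // Placeholder function name and minimal API Gateway-shaped payload.
        FunctionName: 'lambda-perf',
        Payload: Buffer.from(JSON.stringify({ headers: {}, requestContext: {} })),
      })
    );
    totalMs += performance.now() - start;
  }

  console.log(`Average round trip over ${invocations} invocations: ${(totalMs / invocations).toFixed(1)} ms`);
};

void main();
```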
Summary
This PR adds support for using an async context (specifically from the `InvokeStore` package) in the Metrics utility. This allows users to emit metrics that are scoped to the current Lambda invocation, isolated from any other executions.

Changes
- Introduces dedicated stores (`MetricsStore`, `MetadataStore`, and `DimensionStore`). The `Metrics` class only accesses the data in these stores through this interface and never touches the stored objects directly.
- The stores check whether they are running inside an `InvokeStore` context: if they are, the metrics are stored in the current async context; otherwise we fall back to a plain instance-wide object, as per the current implementation (see the first sketch below).
- Some logic moves out of the `Metrics` class into the stores, e.g., `setMetric` will check if the metric already exists and handle converting values into an array if the metric is already there (see the second sketch below). Likewise with setting timestamps. This reduces the amount of logic in the `Metrics` class as a whole.

Issue number: closes #4662
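To make the fallback behaviour concrete, here is a minimal sketch. It is not the PR's actual code: it uses Node's built-in `AsyncLocalStorage` directly instead of the `InvokeStore` package, and all names are hypothetical. The idea is that a store writes into the current async context when one is active and falls back to an instance-wide object otherwise:

```ts
import { AsyncLocalStorage } from 'node:async_hooks';

type Dimensions = Record<string, string>;

// Stand-in for the per-invocation context that InvokeStore provides: a map
// each invocation enters via run(), so concurrent invocations never share it.
const invocationContext = new AsyncLocalStorage<Map<string, unknown>>();

class DimensionStore {
  // Instance-wide fallback, matching the current (pre-PR) behaviour.
  private fallback: Dimensions = {};

  private backing(): Dimensions {
    const ctx = invocationContext.getStore();
    if (ctx === undefined) return this.fallback;
    // Lazily create this store's slot inside the invocation context.
    if (!ctx.has('dimensions')) ctx.set('dimensions', {} as Dimensions);
    return ctx.get('dimensions') as Dimensions;
  }

  public add(name: string, value: string): void {
    this.backing()[name] = value;
  }

  public getAll(): Dimensions {
    return { ...this.backing() };
  }
}

// Dimensions added inside an invocation context stay in that context;
// code running outside any context still works against the fallback object.
const dimensions = new DimensionStore();
invocationContext.run(new Map(), () => dimensions.add('Environment', 'test'));
dimensions.add('Environment', 'outside-any-invocation');
```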
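And a second sketch, again hypothetical rather than the PR's implementation, of the kind of value handling moved into the stores: when the same metric name is set twice, the stored value is converted into an array so both data points survive.

```ts
type StoredMetric = { unit: string; value: number | number[] };

// Illustrative store-level setMetric: the store, not the Metrics class,
// decides how to merge repeated data points for the same metric name.
class MetricsStore {
  private metrics = new Map<string, StoredMetric>();

  public setMetric(name: string, unit: string, value: number): void {
    const existing = this.metrics.get(name);
    if (existing === undefined) {
      this.metrics.set(name, { unit, value });
      return;
    }
    // Metric already present: fold the new data point into an array of values.
    existing.value = Array.isArray(existing.value)
      ? [...existing.value, value]
      : [existing.value, value];
  }

  public get(name: string): StoredMetric | undefined {
    return this.metrics.get(name);
  }
}

const store = new MetricsStore();
store.setMetric('Latency', 'Milliseconds', 12);
store.setMetric('Latency', 'Milliseconds', 34);
// store.get('Latency')?.value is now [12, 34]
```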
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Disclaimer: We value your time and bandwidth. As such, any pull requests created on non-triaged issues might not be successful.