
Performance Insights get_resource_metrics truncates response #3784

Closed
corey-cole opened this issue Jul 14, 2023 · 2 comments

Labels: bug (This issue is a confirmed bug.) · p2 (This is a standard priority issue)

@corey-cole
Describe the bug

The Performance Insights call get_resource_metrics truncates the response without notifying the caller. This happens both with and without specifying the maximum number of results.

Expected Behavior

According to the documentation:

MaxResults (integer) – The maximum number of items to return in the response. If more items exist than the specified MaxRecords value, a pagination token is included in the response so that the remaining results can be retrieved.

I would expect either that the documentation is correct, or, if it is not, that a request matching more data than can be returned in a single call would at least include a NextToken value in the response.
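For illustration, here is a minimal sketch of the pagination loop that the documented contract implies (assuming a boto3 `pi` client and the `kwargs` shown in the reproduction steps below):

```python
# Hypothetical pagination loop based on the documented NextToken contract.
# `client` and `kwargs` are as defined in the reproduction steps below.
metric_list = []
next_token = None
while True:
    if next_token:
        kwargs['NextToken'] = next_token
    response = client.get_resource_metrics(**kwargs)
    metric_list.extend(response['MetricList'])
    next_token = response.get('NextToken')
    if next_token is None:  # no token should mean no more data
        break
```

In practice this loop exits after the first iteration, because the response never contains a NextToken even when data is clearly missing.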

Current Behavior

When calling with MaxResults set to 1 and a time window ranging from several minutes to several days with a 60-second rollup (PeriodInSeconds=60), the response does not contain a NextToken as specified in the documentation.

When calling without MaxResults over a time window of several days, the response is truncated to roughly 350 results, again without a NextToken.

Reproduction Steps

NOTE: This RDS instance has 7 days of PI retention, and as of the moment of writing this bug report, the requested start time falls within that retention window.

```python
import datetime
import pprint

import boto3

client = boto3.client('pi')

kwargs = {
    'EndTime': datetime.datetime(2023, 7, 13, 10, 51,
                                 tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=61200))),
    'Identifier': 'db-REPLACEME',
    'MaxResults': 10,
    'MetricQueries': [{'Metric': 'db.load.avg'},
                      {'Metric': 'os.general.numVCPUs.avg'}],
    'PeriodInSeconds': 60,
    'ServiceType': 'RDS',
    'StartTime': datetime.datetime(2023, 7, 8, 10, 30,
                                   tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=61200))),
}

response = client.get_resource_metrics(**kwargs)
pprint.pprint(response)
```

Despite the requested window starting on Jul 8, the response contains only ten one-minute datapoints per metric (Jul 13, 5:02–5:11) and no NextToken key:

```
{'AlignedEndTime': datetime.datetime(2023, 7, 13, 10, 51, tzinfo=tzlocal()),
 'AlignedStartTime': datetime.datetime(2023, 7, 13, 5, 1, tzinfo=tzlocal()),
 'Identifier': 'db-REPLACEME',
 'MetricList': [{'DataPoints': [{'Timestamp': datetime.datetime(2023, 7, 13, 5, 2, tzinfo=tzlocal()),
                                 'Value': 0.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 3, tzinfo=tzlocal()),
                                 'Value': 0.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 4, tzinfo=tzlocal()),
                                 'Value': 0.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 5, tzinfo=tzlocal()),
                                 'Value': 0.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 6, tzinfo=tzlocal()),
                                 'Value': 0.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 7, tzinfo=tzlocal()),
                                 'Value': 0.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 8, tzinfo=tzlocal()),
                                 'Value': 0.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 9, tzinfo=tzlocal()),
                                 'Value': 0.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 10, tzinfo=tzlocal()),
                                 'Value': 0.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 11, tzinfo=tzlocal()),
                                 'Value': 0.0}],
                 'Key': {'Metric': 'db.load.avg'}},
                {'DataPoints': [{'Timestamp': datetime.datetime(2023, 7, 13, 5, 2, tzinfo=tzlocal()),
                                 'Value': 2.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 3, tzinfo=tzlocal()),
                                 'Value': 2.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 4, tzinfo=tzlocal()),
                                 'Value': 2.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 5, tzinfo=tzlocal()),
                                 'Value': 2.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 6, tzinfo=tzlocal()),
                                 'Value': 2.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 7, tzinfo=tzlocal()),
                                 'Value': 2.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 8, tzinfo=tzlocal()),
                                 'Value': 2.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 9, tzinfo=tzlocal()),
                                 'Value': 2.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 10, tzinfo=tzlocal()),
                                 'Value': 2.0},
                                {'Timestamp': datetime.datetime(2023, 7, 13, 5, 11, tzinfo=tzlocal()),
                                 'Value': 2.0}],
                 'Key': {'Metric': 'os.general.numVCPUs.avg'}}],
 'ResponseMetadata': {'HTTPHeaders': {'connection': 'keep-alive',
                                      'content-length': '1007',
                                      'content-type': 'application/x-amz-json-1.1',
                                      'date': 'Fri, 14 Jul 2023 20:40:35 GMT',
                                      'x-amzn-requestid': '23caba1e-de95-4f27-a349-5bc2f1a2e8e3'},
                      'HTTPStatusCode': 200,
                      'RequestId': '23caba1e-de95-4f27-a349-5bc2f1a2e8e3',
                      'RetryAttempts': 0}}
```
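For scale, the requested window implies far more datapoints than were returned; a quick back-of-the-envelope check:

```python
# Expected number of 60-second datapoints in the requested window.
window = kwargs['EndTime'] - kwargs['StartTime']   # 5 days, 21 minutes
expected = window.total_seconds() / kwargs['PeriodInSeconds']
print(int(expected))  # 7221 expected datapoints per metric; only 10 were returned
```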

Possible Solution

There's a potential workaround, but I'm leery of using it, as it seems brittle according to the documentation (see the excerpts and sketch below). From inspection of the results, AlignedStartTime matches the earliest timestamp present in the results. If I can rely on that, counter to what the documentation says, then it would be on me to inspect AlignedStartTime (and potentially its delta from AlignedEndTime), split my calls into multiple smaller time windows, and recombine the results afterwards.

AlignedStartTime (datetime) –

The start time for the returned metrics, after alignment to a granular boundary (as specified by PeriodInSeconds). AlignedStartTime will be less than or equal to the value of the user-specified StartTime.

AlignedEndTime (datetime) –

The end time for the returned metrics, after alignment to a granular boundary (as specified by PeriodInSeconds). AlignedEndTime will be greater than or equal to the value of the user-specified EndTime.
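A minimal sketch of that workaround, assuming AlignedStartTime really does track the earliest returned datapoint and that `kwargs` omits MaxResults; the one-hour chunk size and the helper name are arbitrary choices for illustration:

```python
import datetime

def get_resource_metrics_chunked(client, kwargs, chunk=datetime.timedelta(hours=1)):
    """Hypothetical workaround: split [StartTime, EndTime) into small fixed
    windows and merge the per-metric datapoints, sidestepping the silent
    truncation. The chunk must be small enough to never hit the cap."""
    merged = {}  # str(metric key) -> list of datapoints
    start = kwargs['StartTime']
    while start < kwargs['EndTime']:
        end = min(start + chunk, kwargs['EndTime'])
        resp = client.get_resource_metrics(**{**kwargs, 'StartTime': start, 'EndTime': end})
        for entry in resp['MetricList']:
            merged.setdefault(str(entry['Key']), []).extend(entry['DataPoints'])
        start = end
    return merged
```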

Additional Information/Context

Link to boto3 documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/pi/client/get_resource_metrics.html

SDK version used

1.28.3

Environment details (OS name and version, etc.)

macOS 12.6.6 with Python 3.10.12

@corey-cole corey-cole added bug This issue is a confirmed bug. needs-triage This issue or PR still needs to be triaged. labels Jul 14, 2023
@RyanFitzSimmonsAK RyanFitzSimmonsAK self-assigned this Jul 14, 2023
@RyanFitzSimmonsAK RyanFitzSimmonsAK added investigating This issue is being investigated and/or work is in progress to resolve the issue. p2 This is a standard priority issue and removed needs-triage This issue or PR still needs to be triaged. labels Jul 14, 2023
@RyanFitzSimmonsAK (Contributor)

Closing as this was addressed internally.

@RyanFitzSimmonsAK RyanFitzSimmonsAK removed the investigating This issue is being investigated and/or work is in progress to resolve the issue. label Jul 18, 2023
@snmatus commented Mar 2, 2024

Is this a bug? Is there any workaround?
