
feat(metrics): add downloadIntelligenceModelsReqResp to latency metrics #3549

Conversation

maxinteger
Contributor

COMPLETES SPARK-503933

This pull request addresses

Implement new downloadIntelligenceModelsReqResp latency for CA metrics

by making the following changes

Updated the types and added a new utility function called accumulateLatency

Change Type

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update
  • Tooling change
  • Internal code refactor

The following scenarios were tested

< ENUMERATE TESTS PERFORMED, WHETHER MANUAL OR AUTOMATED >

I certify that

  • I have read and followed the contributing guidelines

  • I discussed changes with code owners prior to submitting this pull request

  • I have not skipped any automated checks

  • All existing and new tests passed

  • I have updated the documentation accordingly


Make sure to have followed the contributing guidelines before submitting.

@maxinteger maxinteger added the validated If the pull request is validated for automation. label Apr 19, 2024
@maxinteger maxinteger requested a review from a team as a code owner April 19, 2024 14:53

This pull request is automatically being deployed by Amplify Hosting (learn more).

Access this pull request here: https://pr-3549.d3m3l2kee0btzx.amplifyapp.com

Comment on lines 131 to 124
public accumulateLatency(callback: () => Promise<unknown>, key: PreComputedLatencies) {
  const start = performance.now();

  return callback().finally(() => {
    const currentLatency = this.precomputedLatencies.get(key) ?? 0;
    this.saveLatency(key, currentLatency + (performance.now() - start), true);
  });
}
Contributor

we already support this, see saveLatency overwrite param

Contributor Author

This is a special case of overwrite, where we want to add the current value to the next one before overwriting it.

It is doable with measureLatency as well, but we need access to the current value, and I really do not want to use precomputedLatencies inside cantina. Actually, precomputedLatencies should be private.

Contributor Author

Never mind, I just checked again and saveLatency does what you described.

The overwrite name is a bit misleading, because it is not just a simple overwrite; it also accumulates values.

Contributor

measureLatency exposes the overwrite property too, so you don't have to access precomputedLatencies directly in cantina
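As a standalone sketch of this point (not the plugin's actual source; the measureLatency signature and the Latencies class name here are assumptions based on this thread), the overwrite flag can flow through the timing wrapper so callers never touch the map directly:

```typescript
// Minimal stand-in for the real PreComputedLatencies union type
type PreComputedLatencies = string;

class Latencies {
  private precomputedLatencies = new Map<PreComputedLatencies, number>();

  // overwrite = true replaces the stored value; false adds to it
  public saveLatency(key: PreComputedLatencies, value: number, overwrite = true) {
    const existingValue = overwrite ? 0 : this.precomputedLatencies.get(key) ?? 0;
    this.precomputedLatencies.set(key, existingValue + value);
  }

  public getLatency(key: PreComputedLatencies): number | undefined {
    return this.precomputedLatencies.get(key);
  }

  // Times the callback and stores the elapsed milliseconds under `key`,
  // forwarding the overwrite flag to saveLatency
  public measureLatency(
    callback: () => Promise<unknown>,
    key: PreComputedLatencies,
    overwrite = true
  ) {
    const start = performance.now();
    return callback().finally(() => {
      this.saveLatency(key, performance.now() - start, overwrite);
    });
  }
}
```

With overwrite = false, repeated calls for the same key add up, which matches the accumulation behaviour the thread describes.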

Contributor

@torgeadelin torgeadelin left a comment

one issue

@@ -93,31 +94,31 @@ export default class CallDiagnosticLatencies extends WebexPlugin {
* Store precomputed latency value
* @param key - key
* @param value - value
* @param overwrite - overwrite existing value or add it
* @param accumulate - when it true, it overwrite existing value with sum of the current value and the new measurement otherwise just store the new measurement
Contributor

its

Contributor

overwrites

* @throws
* @returns
*/
public saveLatency(key: PreComputedLatencies, value: number, overwrite = true) {
const existingValue = overwrite ? 0 : this.precomputedLatencies.get(key) || 0;
public saveLatency(key: PreComputedLatencies, value: number, accumulate = true) {
Contributor

I've just realised, we've changed the meaning here too.
Should I accumulate?

  • accumulate = true: it overrides.
  • accumulate = false: it accumulates.

Also the default for this method should be to overwrite.

Contributor

So maybe change the default value and swap the ternary statement outcomes on line 102.
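Applying that suggestion, a hypothetical sketch of the renamed parameter with the default flipped and the ternary branches swapped (illustrative only, not the final diff) could look like:

```typescript
// Minimal stand-in for the real PreComputedLatencies union type
type PreComputedLatencies = string;

class Latencies {
  private precomputedLatencies = new Map<PreComputedLatencies, number>();

  // accumulate defaults to false, so plain calls keep the old overwrite behaviour
  public saveLatency(key: PreComputedLatencies, value: number, accumulate = false) {
    // accumulate = true starts from the existing value (0 if none); false starts from 0
    const existingValue = accumulate ? this.precomputedLatencies.get(key) ?? 0 : 0;
    this.precomputedLatencies.set(key, existingValue + value);
  }

  public getLatency(key: PreComputedLatencies): number | undefined {
    return this.precomputedLatencies.get(key);
  }
}

const latencies = new Latencies();
latencies.saveLatency('internal.client.pageJMT', 10);       // stores 10
latencies.saveLatency('internal.client.pageJMT', 5, true);  // accumulates to 15
latencies.saveLatency('internal.client.pageJMT', 7);        // overwrites with 7
```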

Contributor Author

good point, I fixed all of them

Contributor

@torgeadelin torgeadelin left a comment

one small issue

@maxinteger maxinteger force-pushed the feat/SPARK-503933-add-downloadIntelligenceModelsReqResp-latency-metric branch 2 times, most recently from e5f1347 to 41a1c80 Compare April 24, 2024 11:08
Comment on lines +64 to 69
it('should save latency correctly when accumulate is true', () => {
  assert.deepEqual(cdl.precomputedLatencies.size, 0);
  cdl.saveLatency('internal.client.pageJMT', 10, false);
  cdl.saveLatency('internal.client.pageJMT', 10, true);
  assert.deepEqual(cdl.precomputedLatencies.size, 1);
  assert.deepEqual(cdl.precomputedLatencies.get('internal.client.pageJMT'), 10);
});
Contributor

This doesn't seem right? I mean, it doesn't make sense now, but I think it didn't make sense before either.

Contributor Author

It looks good to me. It tests the case when there is no current value. Here we can end up with NaN when the missing value is not handled correctly.

});

it('checks measureLatency when callBack rejects', async () => {
const key = 'internal.client.pageJMT';
const overwrite = true;
const accumulate = true;
Contributor

in order to keep the test the same as before, we should say accumulate = false, as before it was overwrite = true

Contributor

@torgeadelin torgeadelin left a comment

Some more comments, but none really blocking.
Also, there are a lot of linting changes; not sure if we should add those.

@maxinteger
Contributor Author

@torgeadelin I think the linting is coming from a recently merged PR. There are not too many, and they are all related to the metrics plugin, so I think it is OK.

@@ -149,7 +150,8 @@ describe('internal-plugin-metrics', () => {
});

it('fails if preLoinId is not set', async () => {
webex.internal.newMetrics.callDiagnosticMetrics.preLoginMetricsBatcher.preLoginId = undefined;
webex.internal.newMetrics.callDiagnosticMetrics.preLoginMetricsBatcher.preLoginId =
undefined;
Contributor

Why this change?

// avoid setting .sent timestamp
webex.internal.newMetrics.callDiagnosticMetrics.preLoginMetricsBatcher.prepareRequest = (
q
) => Promise.resolve(q);
Contributor

Again, why has this changed?

Contributor

@shnaaz shnaaz left a comment

Are the changes in indentation intentional? Seems like some are but some are not?

Contributor

@shnaaz shnaaz left a comment

Please check latest comment

key: PreComputedLatencies,
overwrite = false
accumulate = false
Contributor

Are you planning to change the definition in Cantina too? I think we should.


Contributor

We don't have accumulateLatency anymore right?

@maxinteger
Contributor Author

@shnaaz I just updated my branch to the latest beta and ran formatting; probably these lines were not correctly formatted.

I can roll it back if you want to.

Contributor

@shnaaz shnaaz left a comment

Just roll back the couple of linter changes we discussed; otherwise looks good.

@maxinteger maxinteger force-pushed the feat/SPARK-503933-add-downloadIntelligenceModelsReqResp-latency-metric branch from 2425295 to c730d70 Compare April 26, 2024 10:30
@maxinteger maxinteger merged commit d39b06a into webex:beta Apr 26, 2024
2 checks passed
@maxinteger maxinteger deleted the feat/SPARK-503933-add-downloadIntelligenceModelsReqResp-latency-metric branch April 26, 2024 15:16