
feat: Obey HTTP Status codes from upstream. #221

Merged · 15 commits · Nov 8, 2023

Conversation

@chriswk (Contributor) commented Nov 7, 2023

Previously we treated all status codes equally. This PR changes that: 429 and 50x status codes now incrementally increase the interval between calls, and the interval gradually decreases again once calls start succeeding.

Discussion points

  • I'm a bit worried about the metrics here, because we drop the metric bucket on the floor if we don't get a 20x back from the server. We did that previously as well, so there's nothing new here: a failing call to sendMetrics would discard the metrics, since we create a new bucket before sending the previous one.
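The backoff behaviour described above could be sketched roughly as follows. This is a minimal illustration, not the SDK's actual code; all names (`BackoffSketch`, `MAX_SKIPS`, the method names) are hypothetical:

```java
// Minimal sketch of the backoff behaviour this PR describes: each 429 or
// 50x response increases the number of fetch intervals to skip (up to a
// cap), and each successful call decreases it again.
class BackoffSketch {
    static final int MAX_SKIPS = 10; // illustrative cap, not the SDK's value
    static int failures = 0;
    static int skips = 0;

    static void onHttpError(int responseCode) {
        if (responseCode == 429 || responseCode >= 500) {
            failures = Math.min(MAX_SKIPS, failures + 1);
            skips = failures; // back off further on every transient error
        }
    }

    static void onSuccess() {
        failures = Math.max(0, failures - 1);
        skips = failures; // gradually shorten the backoff again
    }

    // The scheduler would call this once per interval and only perform the
    // HTTP call when it returns true, burning down a skip otherwise.
    static boolean shouldFetch() {
        if (skips > 0) {
            skips--;
            return false;
        }
        return true;
    }
}
```

The effect is that a struggling upstream sees exponentially fewer requests, while a recovering upstream is probed again within a bounded number of intervals.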

@@ -28,6 +38,7 @@ public UnleashMetricServiceImpl(
this.started = LocalDateTime.now(ZoneId.of("UTC"));
this.unleashConfig = unleashConfig;
this.metricSender = metricSender;
this.maxInterval = Integer.max(20, 300 / Integer.max(Long.valueOf(unleashConfig.getSendMetricsInterval()).intValue(), 1));
@chriswk (Contributor, Author):

Happy to discuss this. Our goal was that the max backoff should equal 300 seconds at most, because a 5-minute interval seems to be enough. I realize the metrics-sending interval could be bigger than that. Suggestions?
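To illustrate the arithmetic in the diff above: the expression aims the cap at roughly 300 seconds' worth of skipped intervals, with a floor of 20 skips. Note that the floor means intervals longer than 15 seconds can back off for more than 300 seconds in total, which is the concern raised here. The helper name below is mine, not the SDK's:

```java
class MaxIntervalExample {
    // Same expression as in the diff above: 300 / interval skipped
    // intervals (guarded against division by zero), but never fewer
    // than 20 skips.
    static int maxInterval(long sendMetricsIntervalSeconds) {
        return Integer.max(
                20,
                300 / Integer.max(
                        Long.valueOf(sendMetricsIntervalSeconds).intValue(), 1));
    }
}
```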

@ivarconr (Member) commented Nov 7, 2023

drop the metric bucket on the floor if we don't get a 20x back from the server.

I agree this is a concern but I think it needs to be solved in a separate PR. Other SDKs will preserve the metrics and send them on the next interval.
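The behaviour other SDKs use could be sketched like this. It is a hedged illustration of the idea, not this SDK's code; `MetricsBufferSketch` and its methods are invented names:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "preserve and resend" idea: keep the unsent counts when
// the POST fails and merge new counts into the same bucket, instead of
// discarding the bucket and starting fresh each interval.
class MetricsBufferSketch {
    static final Map<String, Integer> pending = new HashMap<>();

    static void count(String toggleName) {
        pending.merge(toggleName, 1, Integer::sum);
    }

    // The caller reports whether the HTTP send succeeded; on failure the
    // bucket is retained and included in the next flush.
    static void flush(boolean sendSucceeded) {
        if (sendSucceeded) {
            pending.clear();
        }
    }
}
```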

Comment on lines +62 to +70
assertThat(testSubscriber.togglesFetchedCounter)
.isEqualTo(
2); // Server successfully returns, we call synchronous fetch and schedule
// once, so 2 calls.
assertThat(testSubscriber.status).isEqualTo(CHANGED);
assertThat(testSubscriber.toggleEvaluatedCounter).isEqualTo(3);
assertThat(testSubscriber.toggleName).isEqualTo("myFeature");
assertThat(testSubscriber.toggleEnabled).isFalse();
assertThat(testSubscriber.errors).hasSize(2);
assertThat(testSubscriber.errors).isEmpty();

Yes, this looks better 👍

Christopher Kolstad and others added 4 commits November 8, 2023 09:04
@gastonfournier (Contributor) left a comment:

LGTM! Just a small comment about externalizing the throttling behavior into an instance so we don't repeat the code, but other than that this looks neat!

@chriswk (Contributor, Author) commented Nov 8, 2023

LGTM! Just a small comment about externalizing the throttling behavior into an instance so we don't repeat the code, but other than that this looks neat!

Handled by adding the throttler class.

@chriswk chriswk merged commit c984c23 into main Nov 8, 2023
4 checks passed
@chriswk chriswk deleted the feat/handleBackoffs branch November 8, 2023 10:19