31 should retry on 5xx #62

Closed · wants to merge 8 commits into master from 31_should_retry_on_5xx

31 should retry on 5xx #62

wants to merge 8 commits into from

Conversation

@ghost commented on Jun 7, 2017

Summary

Provide retries on server (5xx) errors.

Description

  • Added a getRetryOn5xxClient function to ClientConfigurationUtils that wraps the SphereClient in a RetrySphereClientDecorator (see the sketch after this list).
  • Used that function in every sync's constructor when instantiating services.
  • Each failed request is retried up to 27 times, with delays between consecutive retries of 2s, 2s, 2s, 4s, 4s, 4s, 8s, …, 512s.
  • Added relevant unit tests.
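A minimal sketch of what such a wrapper could look like, assuming the JVM SDK's retry API (RetrySphereClientDecorator, RetryRule, RetryPredicate, RetryAction). Taking the duration function as a parameter is my illustration of the customizability question raised under Hints for Review, not necessarily the PR's exact signature:

```java
import static io.sphere.sdk.http.HttpStatusCode.BAD_GATEWAY_502;
import static io.sphere.sdk.http.HttpStatusCode.GATEWAY_TIMEOUT_504;
import static io.sphere.sdk.http.HttpStatusCode.SERVICE_UNAVAILABLE_503;
import static java.util.Collections.singletonList;

import io.sphere.sdk.client.SphereClient;
import io.sphere.sdk.client.retry.RetrySphereClientDecorator;
import io.sphere.sdk.retry.RetryAction;
import io.sphere.sdk.retry.RetryContext;
import io.sphere.sdk.retry.RetryPredicate;
import io.sphere.sdk.retry.RetryRule;
import java.time.Duration;
import java.util.List;
import java.util.function.Function;

public final class ClientConfigurationUtils {
    private static final int RETRY_ON_5XX_ATTEMPTS_LIMIT = 27;

    /** Wraps the client so that 502/503/504 responses are retried on the schedule given by durationFunction. */
    public static SphereClient getRetryOn5xxClient(
            final SphereClient delegate,
            final Function<RetryContext, Duration> durationFunction) {
        final RetryPredicate on5xx = RetryPredicate.ofMatchingStatusCodes(
            BAD_GATEWAY_502, SERVICE_UNAVAILABLE_503, GATEWAY_TIMEOUT_504);
        final RetryAction scheduledRetry =
            RetryAction.ofScheduledRetry(RETRY_ON_5XX_ATTEMPTS_LIMIT, durationFunction);
        final List<RetryRule> retryRules = singletonList(RetryRule.of(on5xx, scheduledRetry));
        return RetrySphereClientDecorator.of(delegate, retryRules);
    }
}
```

A caller would supply the 2s/2s/2s/4s… schedule above as durationFunction; a sketch of that function appears further down the thread.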

Relevant Issues

#31

Todo

Hints for Review

Please share your opinion on the retry strategy, and whether it should be made more customizable via method parameters.

@codecov-io commented on Jun 7, 2017

Codecov Report

Merging #62 into master will increase coverage by 0.27%.
The diff coverage is 75%.

Impacted file tree graph

@@             Coverage Diff              @@
##             master      #62      +/-   ##
============================================
+ Coverage     83.35%   83.63%   +0.27%     
- Complexity      259      262       +3     
============================================
  Files            36       36              
  Lines           721      727       +6     
  Branches         25       25              
============================================
+ Hits            601      608       +7     
+ Misses          119      118       -1     
  Partials          1        1
Impacted Files                                          Coverage Δ                Complexity Δ
.../commercetools/sync/inventories/InventorySync.java  94.84% <0%> (-0.99%)      35 <0> (ø)
...s/sync/commons/utils/ClientConfigurationUtils.java  33.33% <100%> (+33.33%)   3 <2> (+3) ⬆️
...om/commercetools/sync/commons/BaseSyncOptions.java  100% <100%> (ø)           13 <0> (ø) ⬇️

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update de3ca86...b375237.

@ghost requested review from heshamMassoud and piobra on June 9, 2017 15:16
Jakub Chrobak added 2 commits June 14, 2017 14:26
Conflicts:
	src/main/java/com/commercetools/sync/services/impl/CategoryServiceImpl.java
	src/main/java/com/commercetools/sync/services/impl/InventoryServiceImpl.java
@@ -53,8 +54,8 @@
      */
     public CategorySync(@Nonnull final CategorySyncOptions syncOptions) {
         this(syncOptions,
-            new TypeServiceImpl(syncOptions.getCtpClient()),
-            new CategoryServiceImpl(syncOptions.getCtpClient()));
+            new TypeServiceImpl(getRetryOn5xxClient(syncOptions.getCtpClient())),
Contributor:

I wouldn't do it here every time we need to get the client, but rather once, when it's injected and bound to the options.

Author:

Ah, I forgot about the options. Good point 👍

public class ClientConfigurationUtils {
    private static BlockingSphereClient ctpClient;
    private static final long DEFAULT_TIMEOUT = 30;
    private static final TimeUnit DEFAULT_TIMEOUT_TIME_UNIT = TimeUnit.SECONDS;
    private static final int RETRY_ON_5XX_ATTEMPTS_LIMIT = 27;
Contributor:

why 27?

Contributor:

Also why is it a global variable and not passed as a parameter to the method it's used in?

Author:

27 fits the proposed default strategy; more about it in a comment below.


    /**
     * Given a {@link RetryContext}, returns the duration for the current attempt. The duration is calculated
     * from the following equation: <pre>duration = 2 ^ (((retryAttemptNumber - 1) / 3) + 1) [seconds]</pre>
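The thread cuts off before the method body. A plausible completion under that equation (the method name calculateRetryDuration is mine), assuming the attempt number comes from io.sphere.sdk.retry.RetryContext and that integer division is what produces the 2s, 2s, 2s, 4s, … sequence from the description:

```java
@Nonnull
private static Duration calculateRetryDuration(@Nonnull final RetryContext retryContext) {
    final long attempt = retryContext.getAttempt();
    // Integer division groups attempts in threes: 2s for attempts 1-3, 4s for 4-6, ..., 512s for 25-27.
    return Duration.ofSeconds(1L << (((attempt - 1) / 3) + 1));
}
```
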
Contributor:

could you please explain the idea behind this strategy?

Author:

About the strategy: it is only my proposal. In fact, I have no clue what retry strategy is wanted here; I expressed my concerns some time ago in issue #31 (comment).

This particular strategy seemed reasonable to me: it first performs a few attempts with a short delay, and then the delay grows. There are 27 attempts, so the delays sum to 3 × (2 + 4 + … + 512) = 3066 seconds, roughly 51 minutes of waiting (e.g. when the server is down).

Please share your opinion about what is expected here, @heshamMassoud.

Contributor:

retry strategy: https://github.com/sphereio/sphere-node-utils/blob/master/src/coffee/helpers/repeater.coffee#L90-L96

Provide it through ITs (integration tests) as an example, not as a function of the sync.
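For comparison, a common member of this family is exponential backoff with random jitter, which keeps many concurrent clients from retrying a recovering server in lockstep. A generic Java sketch of the idea, with illustrative constants (not a translation of the linked CoffeeScript):

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

public final class ExponentialRandomBackoff {
    private static final long INITIAL_DELAY_MS = 200;  // illustrative base delay
    private static final long MAX_DELAY_MS = 60_000;   // illustrative cap

    /** Delay for a 1-based attempt: exponential growth up to the cap, then full jitter. */
    static Duration delayFor(final long attempt) {
        final long exponential = Math.min(INITIAL_DELAY_MS << Math.min(attempt - 1, 30), MAX_DELAY_MS);
        // Full jitter: draw uniformly from [0, exponential] so concurrent clients desynchronize.
        return Duration.ofMillis(ThreadLocalRandom.current().nextLong(exponential + 1));
    }
}
```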

Jakub Chrobak added 3 commits June 16, 2017 13:55
Conflicts:
	src/main/java/com/commercetools/sync/inventories/InventorySync.java
Conflicts:
	src/main/java/com/commercetools/sync/commons/utils/ClientConfigurationUtils.java
@heshamMassoud heshamMassoud deleted the 31_should_retry_on_5xx branch August 28, 2017 09:40