Improve flaky endpoint handling #205

Merged

merged 9 commits into master from improve-flaky-endpoint-handling on Jan 30, 2017
Conversation

@norman (Contributor) commented on Jan 30, 2017

I recently needed to work around some issues with a flaky RETS server that would randomly time out some requests. The responses came back with HTTP 200 but contained an error message in the body, and the failures happened only sporadically.

These modifications are perhaps a bit short on aesthetics and long on pragmatism, but I think they would help in circumstances that are likely to crop up in applications that use this library with a lot of different RETS endpoints.

Summary of changes:

  • Allow configurable patterns to detect errors in a successfully delivered response body.
  • Fix the logging messages, which previously hard-coded the maximum number of retries as "3".
  • Allow max retries to be configured in the client block of a YML config file so that it applies to all requests (see the configuration sketch after this list).
  • Allow an optional wait time between requests. Without this, the server I was working with would just fail 20 times in a row; with a short wait between attempts it eventually returned a useful response.
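
For illustration, here is a minimal sketch of how these options might be passed when constructing a client. The option keys max_retries and recoverable_error_wait_secs are the ones exercised by the diff and tests in this PR; the login URL, credentials, and search parameters are placeholders, not taken from the PR itself.

    require 'rets'

    # Placeholder endpoint and credentials; only :max_retries and
    # :recoverable_error_wait_secs relate to the changes in this PR.
    client = Rets::Client.new(
      login_url: 'https://rets.example.com/login',
      username:  'user',
      password:  'secret',
      max_retries: 5,                  # retry each find up to 5 times
      recoverable_error_wait_secs: 2   # sleep 2 seconds between attempts
    )

    # A per-request value still takes precedence over the client-wide one.
    client.find(:all, search_type: 'Property', class: 'RES',
                query: '(Status=Active)', max_retries: 10)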

@norman force-pushed the improve-flaky-endpoint-handling branch 3 times, most recently from d6c52e3 to 775b700 on January 30, 2017 at 18:25
@dougcole (Contributor) left a comment:
Two minor nitpicks, but overall this looks good! Feel free to fix or ignore my suggestions.

@client.options[:recoverable_error_wait_secs] = 3.14
@client.expects(:sleep).with(3.14).times(3)
@client.stubs(:find_every).raises(Rets::MiscellaneousSearchError.new(0, 'Foo'))
@client.find(:all, :foo => :bar) rescue nil
@dougcole (Contributor) commented:

I feel a little funny about this rescue nil. I assume this line raises an exception; can we wrap it in an expect { ... }.to raise_error instead?

@norman (Contributor, Author) replied:

My thoughts here were that asserting on the raised error might make it less apparent that the test was about the sleep rather than the exception being raised.

@dougcole (Contributor) replied:

Makes sense... I have a long-cultivated aversion to rescue nil, but I suppose it works in this context.
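
For reference, a rough sketch of what an assertion-based version of that test could look like, assuming the suite is Minitest with Mocha, as the expects/stubs calls suggest (the expect { ... }.to raise_error form above is the RSpec equivalent):

    @client.options[:recoverable_error_wait_secs] = 3.14
    @client.expects(:sleep).with(3.14).times(3)
    @client.stubs(:find_every).raises(Rets::MiscellaneousSearchError.new(0, 'Foo'))

    # Makes the expected failure explicit instead of discarding it with rescue nil.
    assert_raises(Rets::MiscellaneousSearchError) do
      @client.find(:all, :foo => :bar)
    end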

@@ -105,9 +105,11 @@ def find_with_given_retry(retries, opts)
   end
 
   def handle_find_failure(retries, opts, e)
-    if retries < opts.fetch(:max_retries, 3)
+    max_retries = opts.fetch(:max_retries, options.fetch(:max_retries, 3))
@dougcole (Contributor) commented:

opts and options both being used here is a little hard to read; it might be worth changing the name of the local variable (local_options?), but I'm not too worried about it.
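
For illustration, a hedged sketch of how such a rename might read; this is hypothetical, not necessarily what the commit mentioned below actually does, and the retry/re-raise body is elided:

    # `find_options` (per-request) next to `options` (client-wide) is harder to
    # misread than `opts` next to `options`.
    def handle_find_failure(retries, find_options, error)
      max_retries = find_options.fetch(:max_retries, options.fetch(:max_retries, 3))
      # ... retry via find_with_given_retry(retries + 1, find_options) while
      #     retries < max_retries, otherwise re-raise `error` ...
    end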

@norman (Contributor, Author) replied:

What do you think of c43b478?

@dougcole (Contributor) commented:

Looks good!

@norman merged commit f713d7a into master on Jan 30, 2017
@norman deleted the improve-flaky-endpoint-handling branch on January 30, 2017 at 20:15