
Rest API: Ensure 503 signals == retry on another node #4066

Closed
clintongormley opened this issue Nov 4, 2013 · 10 comments
@clintongormley

The 503 status code is used by the clients to signal "I can't handle this for some reason, but you can retry on another node"

For instance, if a node can't see the minimum master nodes, then it returns a 503, in which case the clients should try another node in the list.

At times, 503 has been used for other types of responses (eg no indices available to search on). Worth going through the codebase to check that 503s are used consistently.
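The retry semantics described above can be sketched as follows. This is a hypothetical minimal client loop, not the actual Elasticsearch client API: the node list, `perform_request`, and `TransportError` are all illustrative assumptions.

```python
# Hypothetical round-robin retry loop illustrating the intended 503
# semantics: "this node can't serve the request, retry on another node".
# NODES, perform_request, and TransportError are assumptions for the
# sketch, not real Elasticsearch client names.

NODES = ["http://node1:9200", "http://node2:9200", "http://node3:9200"]

class TransportError(Exception):
    def __init__(self, status):
        self.status = status

def perform_request(node, path):
    # Stub: pretend every node except node3 is unavailable (503).
    if "node3" not in node:
        raise TransportError(503)
    return {"node": node, "path": path, "ok": True}

def request_with_retry(path, nodes=NODES):
    last_err = None
    for node in nodes:                 # try each node at most once
        try:
            return perform_request(node, path)
        except TransportError as e:
            if e.status == 503:        # node unavailable: try the next node
                last_err = e
                continue
            raise                      # any other status is a real failure
    raise last_err                     # every node returned 503

print(request_with_retry("/_search"))
```

The loop treats 503 as "transport-level, safe to retry elsewhere" and re-raises everything else, which is exactly the contract this issue asks the server side to uphold consistently.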

@kimchy
Member

kimchy commented Dec 2, 2013

The only other place that 503 is used in ES is when rejecting requests due to thread pool constraints, and I think 503 is correct for that case?

@clintongormley
Author

Yes agreed - that is an appropriate use case.

If those are the only two places, then I'll close this issue.

@kimchy
Member

kimchy commented Dec 2, 2013

But what will we do in the clients then? A thread pool rejection is different from a node returning 503 because it's no longer part of the cluster... I do think we need some way to tell the difference from the client's perspective?

@clintongormley clintongormley reopened this Dec 2, 2013
@clintongormley
Author

OK - reopened. 503 for me means: retry on another node. I guess if a request to another node gets rerouted back to the same node, you could end up with the same 503 response?

What should the clients do for thread pool rejection then?

@kimchy
Member

kimchy commented Dec 2, 2013

Yes, exactly. A 503 caused by thread pool rejection is different: internally, the language client should not retry, but should expose it as a proper failure to the user code, which can then decide what to do (back off, ...).
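A sketch of the client behaviour described here, assuming the server could somehow mark a 503 as a thread pool rejection. The `"rejected"` response-body flag below is purely hypothetical; at the time of this discussion the two cases were indistinguishable, which is the whole problem.

```python
# Hypothetical classification of a 503 response. The "rejected" marker
# is an assumption for illustration, not a real Elasticsearch field.

class NodeUnavailableError(Exception):
    """503 meaning: node can't serve requests; retry on another node."""

class RequestRejectedError(Exception):
    """503 meaning: node is overloaded; surface to user code, don't retry."""

def classify_503(response_body):
    if response_body.get("rejected"):      # hypothetical marker
        raise RequestRejectedError("thread pool full; caller should back off")
    raise NodeUnavailableError("node not part of cluster; retry elsewhere")

try:
    classify_503({"rejected": True})
except RequestRejectedError as e:
    print("surfaced to user code:", e)
```

The design point is that only the first kind of 503 should feed the retry loop; the second must reach user code unchanged so the caller can back off.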

@uboness
Contributor

uboness commented Dec 2, 2013

Maybe we can use the Retry-After header, which would indicate high load (and try to come up with a default value for it :))
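The Retry-After idea could look like this on the client side: a 503 carrying the header would mean "overloaded, back off", while a bare 503 would keep its "retry on another node" meaning. The 2-second value is an arbitrary placeholder, which is exactly the objection raised in the next comments.

```python
# Sketch of interpreting a 503 via the standard HTTP Retry-After header.
# The header value here is a placeholder; picking a sensible default is
# the unsolved part of this proposal.

def interpret_503(headers):
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return ("back_off", int(retry_after))   # overloaded node
    return ("retry_other_node", 0)              # node out of the cluster

print(interpret_503({"Retry-After": "2"}))   # ('back_off', 2)
print(interpret_503({}))                     # ('retry_other_node', 0)
```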

@kimchy
Member

kimchy commented Dec 2, 2013

@uboness we can't really put a value there... I think it's more geared towards a single "usage" type; we can't sensibly give any value there.

@uboness
Contributor

uboness commented Dec 2, 2013

Yeah, I know, hence the :) ... that's the closest "official" way I could think of to distinguish between the two scenarios.

@xyu
Contributor

xyu commented Jun 16, 2014

Maybe use a 502 for when the node is no longer part of the cluster? Not exact, but if one considers the elected master the "upstream" server (simply because it delivers the cluster state needed to route queries), it kind of works.

@clintongormley
Author

Fixed by #6627

@clintongormley clintongormley changed the title Ensure 503 signals: retry on another node Internal: Ensure 503 signals == retry on another node Jul 16, 2014
@clintongormley clintongormley changed the title Internal: Ensure 503 signals == retry on another node Rest API: Ensure 503 signals == retry on another node Jul 16, 2014
@clintongormley clintongormley added the :Core/Infra/REST API REST infrastructure and utilities label Aug 13, 2015
5 participants