
add readiness endpoints to processes having initialization delays #8841

Merged · merged 1 commit · Dec 11, 2019

Conversation

pjain1 (Member) commented Nov 7, 2019

Description

Expose a readiness endpoint for the Broker. Add this and the Historical readiness endpoint to the unsecured path list to help during rolling updates.


This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • been tested in a test Druid cluster.
Key changed/added classes in this PR
  • HistoricalResource
  • BrokerResource
  • QueryJettyServerInitializer

pjain1 added this to the 0.17.0 milestone Nov 7, 2019
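For context on what such an endpoint might look like, here is a minimal JAX-RS sketch. This is an illustration only, not the code in this PR: the `ReadinessChecker` interface and the wiring are hypothetical stand-ins for whatever internal state (for example, the Broker's view of segments) actually signals readiness, and the 200/503 convention is assumed because it is what HTTP readiness probes typically key off.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

// Hypothetical readiness resource; not the actual BrokerResource changed in this PR.
@Path("/druid/broker/v1")
public class BrokerReadinessResource
{
  // Hypothetical hook onto whatever state signals "ready to serve queries".
  public interface ReadinessChecker
  {
    boolean isReady();
  }

  private final ReadinessChecker checker;

  public BrokerReadinessResource(ReadinessChecker checker)
  {
    this.checker = checker;
  }

  @GET
  @Path("/readiness")
  public Response getReadiness()
  {
    // 200 when ready, 503 while still initializing; callers only need the status code.
    return checker.isReady()
           ? Response.ok().build()
           : Response.status(Response.Status.SERVICE_UNAVAILABLE).build();
  }
}
```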
- Returns a flag indicating if the Broker knows about all segments in Zookeeper. This can be used to know when a Broker process is ready to be queried after a restart.
+ Returns a flag indicating if the Broker knows about all segments in the cluster. This can be used to know when a Broker process is ready to be queried after a restart.

+ * `/druid/broker/v1/readiness`
Contributor:
I think we should have a more general status endpoint instead of having a specific endpoint for 'readiness'. The status endpoint should return a JSON object. Otherwise, this is taking us down a path where we will have API endpoints for everything we want returned from Druid services and things will get messy and complex fast.

Contributor:

I also think 'readiness' is the incorrect term. I think 'available' is stronger.

sascha-coenen:

readiness is a better term IMHO because it is an official health-checking term that also has well-defined semantics:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

More and more apps are exposing health check endpoints to allow easy integration with orchestration solutions like Kubernetes.

pjain1 (Member, Author):

/readiness already exists for Historicals, so I just extended it to Brokers as well. The main reason for its existence is to know when the node is ready to serve queries, as opposed to /status, which only tells whether the process is up. Because of this I think we need two endpoints: one for checking when the process is ready to serve and another for checking whether the process is up. Many monitoring systems rely on HTTP response codes to make decisions about service availability/readiness, which is why I think /readiness was added even though /loadstatus already existed. BTW, for health we also have two endpoints, /status and /status/health, introduced in #5087.
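To make the point about status codes concrete, the following self-contained snippet (not part of the PR) polls a readiness endpoint with the JDK 11+ HTTP client and keys only off the response code. The host, the port, and the assumption that the endpoint answers 200 when ready and a non-200 code otherwise are illustrative.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReadinessProbe
{
  public static void main(String[] args) throws Exception
  {
    // Hypothetical Broker address; adjust to your deployment.
    URI uri = URI.create("http://localhost:8082/druid/broker/v1/readiness");
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder(uri).GET().build();

    while (true) {
      HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
      if (response.statusCode() == 200) {
        // The process is ready to serve queries.
        System.out.println("ready");
        return;
      }
      // Not ready yet; a monitoring system would keep the instance out of rotation.
      System.out.println("not ready (HTTP " + response.statusCode() + "), retrying");
      Thread.sleep(5_000L);
    }
  }
}
```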

nishantmonu51 (Member):

Agree with @sascha-coenen, readiness is a commonly used term for health checks.
One change, though: I would have a generic endpoint /readiness instead of node-type-specific endpoints, i.e. remove the /druid/<node_type>/v1 prefix.

pjain1 (Member, Author) commented Nov 12, 2019:

@nishantmonu51 Historical already has /druid/historical/v1/readiness, so I just extended it for the Broker as /druid/broker/v1/readiness to be consistent. I can change it to just /readiness if required; no preference from my side.

pjain1 (Member, Author):

@fjy I am not very concerned about the name of the endpoint; my opinion is that we need an endpoint that returns a meaningful HTTP status code so that integration with monitoring/orchestration solutions becomes easier. So we can either expose just /readiness for all node types that have an initialization delay, or just add /druid/broker/v1/readiness to the Broker since /druid/historical/v1/readiness already exists for Historicals. What do you think?

clintropolis (Member):

I think 'readiness' is the correct term for this, and putting it at /druid/broker/v1/readiness seems acceptable to me since there is precedent with Historicals. A universal /readiness might take a bigger refactor, since there isn't really a universal concept of readiness across all node types, or any sort of health-style interface that all node types implement, but it is worth considering in the future.

- private static List<String> UNSECURED_PATHS = Collections.singletonList("/status/health");
+ private static List<String> UNSECURED_PATHS = Lists.newArrayList(
+     "/status/health",
+     "/druid/historical/v1/readiness",
Contributor:
I think we should use a consistent scheme here. I'd prefer just /status

private static List<String> UNSECURED_PATHS = Lists.newArrayList(
"/status/health",
"/druid/historical/v1/readiness",
"/druid/broker/v1/readiness"
Contributor:
same here
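For context, adding a path to UNSECURED_PATHS means requests to it bypass authentication, which lets orchestrators and load balancers probe readiness before any credentials are in place. The sketch below is not Druid's actual wiring; it is a generic servlet filter with a hypothetical authenticate() step, shown only to illustrate what an exemption list of this kind does.

```java
import java.io.IOException;
import java.util.List;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Generic illustration of an "unsecured paths" exemption; not Druid's QueryJettyServerInitializer.
public class UnsecuredPathsFilter implements Filter
{
  private static final List<String> UNSECURED_PATHS = List.of(
      "/status/health",
      "/druid/historical/v1/readiness",
      "/druid/broker/v1/readiness"
  );

  @Override
  public void init(FilterConfig filterConfig)
  {
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
      throws IOException, ServletException
  {
    String path = ((HttpServletRequest) request).getRequestURI();
    if (UNSECURED_PATHS.contains(path)) {
      // Health and readiness probes skip authentication entirely.
      chain.doFilter(request, response);
      return;
    }
    authenticate(request, response, chain);
  }

  // Hypothetical placeholder for whatever authentication the server actually enforces.
  private void authenticate(ServletRequest request, ServletResponse response, FilterChain chain)
      throws IOException, ServletException
  {
    chain.doFilter(request, response);
  }

  @Override
  public void destroy()
  {
  }
}
```

The review comments above ask whether the list should use a consistent scheme (for example just /status); the sketch keeps the paths exactly as they appear in the diff.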

sascha-coenen commented:
BTW: readiness endpoints alone will not suffice to integrate Druid well with Kubernetes.
The reason is that Kubernetes only allows incoming requests to reach a service (e.g. a Historical) after its readiness check has passed. However, Druid uses ZooKeeper to discover that a new Historical has joined the cluster, so as soon as a Historical reports its existence to ZooKeeper (or to the Coordinator, if HTTP announcement is configured), the Brokers consider it ready for use and start sending queries to it. Those queries will not arrive at the Historical unless its readiness check has also passed. So there is an intermittent timespan during which a Broker cannot reach a Historical, and during this time all queries will fail because the Broker keeps contacting the same Historical, even if query retries are configured; on retry, the same Historical gets picked.

I was wondering how to improve this behaviour during rolling updates.

  • One improvement would be to have the broker graylist a failing historical during retries.

  • Another improvement would be to add a configurable delay "druid.server.http.announcePropagationDelay" such that after a zookeeper announcement of a newly joined service (e.g. historical), there would be some delay until other services (e.g. brokers) would start to interact with the joined service. This would be similar to the existing property "druid.server.http.unannouncePropagationDelay"
    The purpose of this would be that if Kubernetes is set up to execute a readiness check every 30 seconds, one could set the announcePropagationDelay to a larger time like 45 seconds, so that there would be no situation in which one Druid service tries to talk to another one at a time when Kubernetes is not (yet) allowing incoming traffic to the latter service.

I just meant to mention it here because when I saw the title of this PR, especially the "having initialization delays" part, I hoped that the PR would also be about adding this internal delay.

I have not found another workaround in Kubernetes yet. It is foremost a weakness in Kubernetes that causes this misbehaviour, because the readiness concept violates the single-responsibility principle. A readiness check is used for two things:
a) to decide whether incoming network connections are allowed for a given pod
b) to decide whether a rolling update can already continue updating the next pod

Unfortunately there seems to be no way to have separate timings for these two unrelated aspects in k8s.

pjain1 (Member, Author) commented Nov 10, 2019

@sascha-coenen Historicals announce themselves only after they have loaded segments from their local cache, so a Broker learns of a Historical's existence once it is ready to serve queries. I don't think there is a problem in that regard.

leventov (Member) commented Nov 23, 2019

I think this type of PR (exposing a new HTTP API) should come with an integration test. (This message should not be considered a blocking review comment on the PR, just a suggestion.)

leventov (Member):
@sascha-coenen I didn't really understand your message, but #6702 might be something relevant.

clintropolis (Member) left a comment:

lgtm 👍


jon-wei (Contributor) left a comment:

LGTM
