[Bug] RayService with GCS FT HA issue #1551
Conversation
cc @YQ-Wang
@@ -12,6 +12,11 @@ spec:
    runtime_env:
      working_dir: "https://github.com/ray-project/serve_config_examples/archive/b393e77bbd6aba0881e3d94c05f968f05a387b96.zip"
      pip: ["python-multipart==0.0.6"]
+   deployments:
+     - name: ImageClassifier
How did this YAML even work before without a deployments field?
Only one replica will be deployed, and it lands on the head Pod, because KubeRay submits the Serve application as soon as the head node is ready; in most cases, the worker nodes are not ready yet at that point.
The PR description and fix make sense to me.
By the way, in general, if the user specifies N > 1 replicas, do they need to do any additional work to prevent all N from being scheduled on the head node? My guess is no; Ray Serve should automatically spread them across the worker nodes.
In any case, we should probably state in the Serve HA documentation that num_replicas should be set to a value greater than 1. (It sounds basic, but it's still better to say it explicitly.)
If I understand correctly, unfortunately, users must set the relevant scheduling configurations (e.g., num_cpus) to ensure that more than one Pod contains a replica of the deployment. We are also tracking progress in #1492.
I will add a high-availability section to the RayService document.
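For illustration only (this is my sketch, not part of this PR's diff), these are the kinds of knobs being discussed, expressed as a Serve deployment. With num_cpus=1 per replica and a 1-CPU head Pod, at most one replica fits on the head, so the rest must land on worker Pods; the class body here is a placeholder:

```python
from ray import serve

# Sketch: num_replicas and ray_actor_options are the settings discussed
# above. Requesting 1 CPU per replica means a 1-CPU head Pod can host at
# most one replica; the second replica must be scheduled on a worker Pod,
# so both Pods end up with a replica and an HTTP proxy.
@serve.deployment(num_replicas=2, ray_actor_options={"num_cpus": 1})
class ImageClassifier:
    async def __call__(self, request):
        ...  # placeholder; the real handler comes from the example repo

app = ImageClassifier.bind()
```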
Why are these changes needed?
According to the comment in issue #1463, in a stable state only the Ray head and the workers that host Serve replicas will have an HTTPProxyActor. Without this PR, if the single replica is scheduled on the head, there is only one replica in the cluster and the worker has no HTTPProxyActor. Hence, the K8s serve service has only one endpoint, from the head, and that endpoint is removed when the head Pod is terminated, leaving no HTTPProxyActor or replica available.
This PR makes sure that both the head Pod and the worker Pod have one replica and one HTTPProxyActor each. Hence, when the head Pod is deleted, the K8s serve service still has an endpoint from the worker, achieving high availability.
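A hedged way to verify the resulting state (my sketch, not from this PR): Ray's state API can list the proxy actors. In the Ray versions this PR targets the proxy class is named HTTPProxyActor, but the class name has changed across releases, so treat it as an assumption:

```python
import ray
from ray.util.state import list_actors

ray.init(address="auto")

# Expect one ALIVE HTTPProxyActor on the head node and one on each
# worker node that hosts a Serve replica.
for actor in list_actors(filters=[("class_name", "=", "HTTPProxyActor")]):
    print(actor.class_name, actor.state, actor.node_id)
```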
Related issue number
Closes #1463
Checks
I made some changes based on @YQ-Wang's reproduction script.
Step 1: Create a GCS FT-enabled RayService with this gist.
Step 2: Create a Ray Pod using this gist, and run the following script to send requests to the RayService.
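The script itself is not shown on this page; the following is a minimal sketch of a request loop. The URL is hypothetical (KubeRay typically exposes Serve through a <rayservice-name>-serve-svc Service); substitute the service name and payload from the gist you used:

```python
import time
import requests

# Hypothetical service name and namespace; check your cluster.
URL = "http://rayservice-sample-serve-svc.default.svc.cluster.local:8000/"

while True:
    try:
        resp = requests.get(URL, timeout=5)
        print(resp.status_code)
    except requests.RequestException as exc:
        print("request failed:", exc)
    time.sleep(0.1)
```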
Step 3: Kill the head Pod. Typically, only the zero to two requests that were in flight on the Ray head at that moment will be dropped. Users should implement application-level retries to avoid failures.
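A sketch of such an application-level retry (the retry count and backoff values are illustrative, not prescribed by this PR):

```python
import time
import requests

def get_with_retry(url, retries=3, backoff=0.5):
    """Retry a request with exponential backoff so the handful of requests
    dropped during head-Pod failover surface as a delay, not an error."""
    for attempt in range(retries):
        try:
            return requests.get(url, timeout=5)
        except requests.RequestException:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * 2 ** attempt)
```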