
Ray dashboard should not exit when GCS exits or becomes unavailable #45940

Open

ruisearch42 opened this issue Jun 13, 2024 · 4 comments

Labels: bug (Something that is supposed to be working; but isn't), core (Issues that should be addressed in Ray Core), P0 (Issues that should be fixed in short order)

ruisearch42 commented Jun 13, 2024

What happened + What you expected to happen

There have been recent cases where the dashboard exits after RPCs to GCS fail:
(ses_7tcnunfk2da5qwby5t946m5tw7) (g-b7d2114c85b440001) [/tmp/ray/session_2024-06-09_14-01-52_504758_3738/logs/dashboard.log] 2024-06-10 06:41:53,037 ERROR head.py:198 -- Dashboard exiting because it received too many GCS RPC errors count: 41, threshold is 40.
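
For context, a minimal sketch of the kind of failure-counting loop this log line describes (the function name, the `check_alive()` RPC, and the check interval are illustrative assumptions, not Ray's actual `head.py` code; only the threshold of 40 is taken from the log):

```python
# Hypothetical sketch only: not Ray's actual dashboard code.
import asyncio
import logging

GCS_RPC_ERROR_THRESHOLD = 40  # assumed default, inferred from "threshold is 40" in the log


async def gcs_health_check_loop(gcs_client, interval_s: float = 1.0):
    errors = 0
    while True:
        try:
            await gcs_client.check_alive()  # hypothetical GCS liveness RPC
            errors = 0  # a successful check resets the counter
        except Exception:
            errors += 1
            if errors > GCS_RPC_ERROR_THRESHOLD:
                logging.error(
                    "Dashboard exiting because it received too many GCS RPC errors "
                    "count: %d, threshold is %d.", errors, GCS_RPC_ERROR_THRESHOLD)
                raise SystemExit(1)  # the exit this issue argues the dashboard should not do
        await asyncio.sleep(interval_s)
```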
Several previous tickets describe the same or a similar issue:
#39822
#31261
#16328

From an observability point of view, the dashboard should never exit when GCS exits or becomes unreachable. What was the reason for designing the current exit behavior, and can we change it so the dashboard does not exit?

Versions / Dependencies

ray:2.9.0

Reproduction script

N/A

Issue Severity

None

ruisearch42 added the bug, core, and triage (Needs triage: priority, bug/not-bug, and owning component) labels on Jun 13, 2024
@ruisearch42

cc: @alexeykudinkin

alexeykudinkin added the P0 (Issues that should be fixed in short order) label and removed the triage label on Jun 13, 2024
@ruisearch42

Per discussion, @rynewang has a related task and will address this along with it.

@rynewang

We can simply remove the dashboard's check on GCS liveness, or have the dashboard follow the config RAY_gcs_rpc_server_reconnect_timeout_s that the raylet and workers already use.
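
For illustration, one hedged way to apply that config through Ray's RAY_-prefixed environment-variable overrides; the variable name comes from the comment above, while the 600-second value and the idea that the dashboard would honor it are exactly what this issue proposes, not current behavior:

```python
# Sketch: set the reconnect timeout in the environment of the process that starts
# the head node (or before ray.init() for a local cluster), before Ray is imported.
import os

os.environ["RAY_gcs_rpc_server_reconnect_timeout_s"] = "600"  # assumed value: 10 minutes

import ray  # imported after setting the override so child processes inherit it

ray.init()  # raylet and workers honor this timeout today; the proposal is to treat the dashboard the same way
```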

However, we also need to update PythonGcsClient to do proper retrying. For example, when GCS is down, PythonGcsClient currently keeps all pending requests until GCS comes back alive; if there are many pending requests, the client may OOM. The C++ GcsClient, on the other hand, blocks the calling thread when the total size of pending requests exceeds RAY_gcs_grpc_max_request_queued_max_bytes. We need to bring this behavior to the Python side by removing PythonGcsClient and binding the C++ GcsClient to Python (see the sketch below).
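
To make that backpressure behavior concrete, a plain-Python sketch of the pattern described above (this is not Ray's C++ or Python client code; the class and method names are made up for illustration):

```python
# Illustrative sketch: block callers once queued request bytes exceed a cap,
# rather than letting the pending queue grow without bound while GCS is down.
import threading


class BoundedPendingQueue:
    """The cap plays the role of RAY_gcs_grpc_max_request_queued_max_bytes in the C++ client."""

    def __init__(self, max_queued_bytes: int):
        self._max_queued_bytes = max_queued_bytes
        self._queued_bytes = 0
        self._pending: list[bytes] = []
        self._cv = threading.Condition()

    def submit(self, request: bytes) -> None:
        # Block the calling thread until there is room, mirroring the C++ GcsClient,
        # instead of accumulating requests without bound as PythonGcsClient does today.
        with self._cv:
            while self._queued_bytes + len(request) > self._max_queued_bytes:
                self._cv.wait()
            self._pending.append(request)
            self._queued_bytes += len(request)

    def pop_for_send(self) -> bytes:
        # Called once the connection to GCS is healthy again.
        with self._cv:
            request = self._pending.pop(0)
            self._queued_bytes -= len(request)
            self._cv.notify_all()
            return request
```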

This involves the steps listed in the next comment, in order.

@rynewang

Process:

  1. [core] Refactor how PythonGcsClient treats errors #45817
  2. [core] The New GcsClient binding #46186
  3. Resolve this issue here: remove the "number of failed checks" logic. Since the dashboard then uses GcsClient (GcsRpcClient), it inherits the "kill self on gcs_rpc_server_reconnect_timeout_s timeout" behavior and we don't need to do anything on the Python side (see the sketch after this list).
  4. Reimplement GcsAioClient on top of the async GcsClient binding.
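
As a rough sketch of what step 3 could look like on the dashboard side (the same hypothetical loop as the sketch near the top of this issue, with the failure counter removed; any give-up decision is delegated to the GcsClient's gcs_rpc_server_reconnect_timeout_s handling):

```python
# Hypothetical sketch of the dashboard's GCS check after step 3: no failure counter
# and no self-exit; the dashboard keeps retrying while GCS is down or restarting.
import asyncio
import logging


async def gcs_health_check_loop(gcs_client, interval_s: float = 1.0):
    while True:
        try:
            await gcs_client.check_alive()  # hypothetical GCS liveness RPC
        except Exception:
            # Log and keep going; any give-up timeout now lives inside the GcsClient.
            logging.warning("GCS unreachable; dashboard will keep retrying.")
        await asyncio.sleep(interval_s)
```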
