What happened + What you expected to happen

There are recent cases where the dashboard exits when RPCs to GCS fail:

(ses_7tcnunfk2da5qwby5t946m5tw7) (g-b7d2114c85b440001) [/tmp/ray/session_2024-06-09_14-01-52_504758_3738/logs/dashboard.log] 2024-06-10 06:41:53,037 ERROR head.py:198 -- Dashboard exiting because it received too many GCS RPC errors count: 41, threshold is 40.
Several previous tickets describe the same or a similar issue: #39822 #31261 #16328
From an observability point of view, the dashboard should never exit when GCS exits or is unreachable. What was the reason for designing the current dashboard exit behavior? Can we change it to not exit?
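For context, the log message above suggests the dashboard counts consecutive failed GCS health checks and exits once a fixed threshold (40) is exceeded. The following is a minimal, hypothetical sketch of that behavior; the class and method names are illustrative, not Ray's actual implementation in head.py:

```python
class DashboardHead:
    """Illustrative sketch of the exit-on-error-count behavior implied by the
    log: "Dashboard exiting because it received too many GCS RPC errors
    count: 41, threshold is 40." Not Ray's real code."""

    GCS_RPC_ERROR_THRESHOLD = 40  # threshold quoted in the log message

    def __init__(self):
        self.gcs_rpc_error_count = 0

    def record_health_check(self, gcs_is_healthy):
        """Count consecutive failed GCS checks; return False (i.e. "exit")
        once the count exceeds the threshold. A success resets the count."""
        if gcs_is_healthy:
            self.gcs_rpc_error_count = 0
            return True
        self.gcs_rpc_error_count += 1
        if self.gcs_rpc_error_count > self.GCS_RPC_ERROR_THRESHOLD:
            # The real dashboard logs an error and terminates the process here.
            return False
        return True
```

The problem with this scheme is that any sufficiently long GCS outage (or restart) inevitably pushes the counter past the threshold and kills the dashboard, which is what the tickets above report.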
Versions / Dependencies
ray:2.9.0
Reproduction script
N/A
Issue Severity
None
ruisearch42 added the bug, triage, and core labels on Jun 13, 2024.
alexeykudinkin added the P0 label and removed the triage label on Jun 13, 2024.
We could simply remove the dashboard's check on GCS liveness, or follow the RAY_gcs_rpc_server_reconnect_timeout_s config that the raylet and workers already use.
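The second option would replace the failure counter with a time budget: exit only after GCS has been continuously unreachable for longer than the reconnect timeout. A minimal sketch, assuming the config is read from the environment variable of the same name (the function name and 60-second fallback default are assumptions for illustration):

```python
import os
import time

# Assumed: the reconnect timeout is taken from the environment, with an
# illustrative fallback of 60 seconds.
RECONNECT_TIMEOUT_S = float(
    os.environ.get("RAY_gcs_rpc_server_reconnect_timeout_s", "60"))


def should_exit(first_failure_time, now=None):
    """Return True once GCS has been continuously unreachable for longer
    than the reconnect timeout.

    first_failure_time is the time.monotonic() of the first failed RPC in
    the current outage, or None while GCS is healthy (a successful RPC
    resets it to None)."""
    if first_failure_time is None:
        return False
    if now is None:
        now = time.monotonic()
    return (now - first_failure_time) > RECONNECT_TIMEOUT_S
```

Unlike a raw failure count, this gives GCS a bounded window to restart regardless of how frequently the dashboard polls.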
However, we also need to update PythonGcsClient to retry properly. For example, when GCS is down, PythonGcsClient currently keeps all pending requests until GCS is back alive; if there are many requests, the client may OOM. The C++ GcsClient, on the other hand, blocks the calling thread when the size of pending requests exceeds RAY_gcs_grpc_max_request_queued_max_bytes. We should adopt this behavior for the Python side by removing PythonGcsClient and binding the C++ GcsClient to Python.
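The C++ backpressure described above can be sketched as a byte-budget queue: callers block instead of enqueueing unboundedly while GCS is down. This is an illustrative Python sketch of the idea (the class and method names are hypothetical, not Ray's API):

```python
import threading


class BoundedRequestQueue:
    """Sketch of the backpressure the C++ GcsClient applies: block callers
    when queued request bytes exceed a budget (analogous to
    RAY_gcs_grpc_max_request_queued_max_bytes), instead of letting pending
    requests accumulate until the process OOMs."""

    def __init__(self, max_queued_bytes):
        self.max_queued_bytes = max_queued_bytes
        self.queued_bytes = 0
        self.cond = threading.Condition()

    def submit(self, request_bytes):
        """Block until there is budget, then account for the request."""
        with self.cond:
            while self.queued_bytes + request_bytes > self.max_queued_bytes:
                self.cond.wait()  # backpressure instead of unbounded growth
            self.queued_bytes += request_bytes

    def complete(self, request_bytes):
        """Release budget when a pending request finishes, e.g. after GCS
        comes back alive, and wake any blocked submitters."""
        with self.cond:
            self.queued_bytes -= request_bytes
            self.cond.notify_all()
```

With this shape, a long GCS outage stalls the dashboard's GCS calls rather than growing memory without bound.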
We can resolve this issue by removing the "number of failed checks" logic. Since the dashboard now uses GcsClient (GcsRpcClient), it will inherit its "kill self after gcs_rpc_server_reconnect_timeout_s timeout" behavior, and we don't need to do anything on the Python side.