
[cherry-pick][doc][train] Clarify error message when trying to use local storage for multi-node distributed training and checkpointing #41844

Merged
1 commit merged into ray-project:releases/2.9.0 on Dec 14, 2023

Conversation

justinvyu (Contributor)

Why are these changes needed?

Related issue number

This is a cherry-pick of #41832. This PR has two commits because the second cherry-pick would have many conflicts if merged without the first docs-change PR. Both are primarily doc changes.

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

architkulkarni (Contributor)

Doc build failed, restarting tests

architkulkarni added the release-blocker (P0 Issue that blocks the release) and @author-action-required (The PR author is responsible for the next step. Remove tag to send back to the reviewer.) labels on Dec 14, 2023
…or multi-node distributed training and checkpointing (ray-project#41832)

Ray 2.7 removed support for using the head node as the persistent storage for checkpoints and artifacts in multi-node distributed training. The recommended alternative is to use cloud storage or a shared filesystem instead, configured via `RunConfig(storage_path)`, as sketched below.
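As a minimal sketch of that configuration (the bucket name, experiment name, and trainer choice here are illustrative assumptions, not part of this PR):

```python
# Sketch only: bucket name, experiment name, and trainer are assumptions.
from ray.train import RunConfig, ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker(config):
    ...  # per-worker training code


trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=4),
    # Point every worker node at shared persistent storage (cloud storage or
    # a network filesystem) instead of the head node's local disk.
    run_config=RunConfig(
        storage_path="s3://my-bucket/ray-results",  # assumed bucket
        name="my_experiment",
    ),
)
result = trainer.fit()
```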

Ray Train/Tune errors if the user attempts to report a checkpoint via `ray.train.report(..., checkpoint=...)` from a worker on a remote node. This is because the new assumption is that all worker nodes can read from and write to the same persistent storage, and the head node's local storage is not accessible to all nodes. A sketch of the reporting pattern in question follows.
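For context, a minimal sketch of that reporting pattern (the metric names, epoch count, and saved contents are illustrative assumptions, not taken from this PR):

```python
# Sketch only: metric names and checkpoint contents are assumptions.
import tempfile

import ray.train
from ray.train import Checkpoint


def train_loop_per_worker(config):
    for epoch in range(2):
        loss = 0.0  # placeholder for the real per-epoch loss
        with tempfile.TemporaryDirectory() as tmpdir:
            # Save model state into tmpdir here, then report it as a checkpoint.
            # Reporting a checkpoint from a remote worker requires that
            # RunConfig(storage_path) points at storage all nodes can access;
            # otherwise Ray raises the error this PR clarifies.
            ray.train.report(
                {"loss": loss, "epoch": epoch},
                checkpoint=Checkpoint.from_directory(tmpdir),
            )
```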

However, the error message that shows up is confusing: all nodes can technically access the local path in the message; the problem is that not all nodes can access the same local path. This PR improves the error message to make this clearer and to suggest an actionable fix. It also updates most of the getting-started user guides to mention the multi-node storage requirement and link to the storage user guide.

---------

Signed-off-by: Justin Yu <justinvyu@anyscale.com>
architkulkarni merged commit 0c5a3ec into ray-project:releases/2.9.0 on Dec 14, 2023
15 of 16 checks passed
justinvyu removed the @author-action-required (The PR author is responsible for the next step. Remove tag to send back to the reviewer.) label on Dec 14, 2023
justinvyu deleted the cp-41832 branch on December 14, 2023 at 20:44
Labels: release-blocker (P0 Issue that blocks the release)

4 participants