[RayTrain] Checkpoint API to recover from checkpoints of previous runs #45516
Labels
- enhancement: request for new feature and/or capability
- train: Ray Train related issue
- triage: needs triage (e.g. priority, bug/not-bug, and owning component)
Description
Steps to reproduce
There are examples that illustrate checkpointing and recovering from checkpoints in the Ray training frameworks. One such example shows how to configure checkpointing for a PyTorch training job (a minimal sketch follows the steps below).
1. Trigger the training RayJob.
2. Let the training job write a couple of checkpoints, then kill the head pod.
3. Observe that the new driver ignores the checkpoints: the current driver pod errors out and a new driver pod is created, but the new driver pod runs the training job again from scratch, ignoring the checkpoints produced in the previous run.
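For reference, here is a minimal sketch of the kind of checkpoint-enabled PyTorch training function the documentation describes, assuming the Ray Train 2.x `ray.train` API (`get_checkpoint`, `report`, `Checkpoint`); the model, data, and epoch count are placeholders:

```python
import os
import tempfile

import torch
from ray.train import Checkpoint, ScalingConfig, get_checkpoint, report
from ray.train.torch import TorchTrainer


def train_func(config):
    model = torch.nn.Linear(8, 1)  # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

    # Resume from the latest checkpoint of *this* run, if one exists.
    start_epoch = 0
    ckpt = get_checkpoint()
    if ckpt:
        with ckpt.as_directory() as ckpt_dir:
            state = torch.load(os.path.join(ckpt_dir, "state.pt"))
            model.load_state_dict(state["model"])
            start_epoch = state["epoch"] + 1

    for epoch in range(start_epoch, config["epochs"]):
        # ... one epoch of training on placeholder data ...
        with tempfile.TemporaryDirectory() as tmp:
            torch.save(
                {"model": model.state_dict(), "epoch": epoch},
                os.path.join(tmp, "state.pt"),
            )
            # Persist a checkpoint after every epoch.
            report({"epoch": epoch}, checkpoint=Checkpoint.from_directory(tmp))


trainer = TorchTrainer(
    train_func,
    train_loop_config={"epochs": 10},
    scaling_config=ScalingConfig(num_workers=2),
)
```

The problem surfaced in steps 2 and 3 is that `get_checkpoint()` only returns a checkpoint when the same run is restored; a freshly created driver pod that starts a new run gets `None` and trains from scratch.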
Hacky Fix
To overcome this problem, we have to write a function with logic tightly coupled to the checkpoint storage layout. For example, see the findLatestCheckpoint function in this job definition; the sketch below illustrates the idea.
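The workaround amounts to something like the following helper. This is only an illustration of the idea, since the referenced job definition is not reproduced here; it assumes the default `<storage_path>/<experiment_name>/.../checkpoint_*` directory layout on shared storage:

```python
import os
from glob import glob
from typing import Optional


def find_latest_checkpoint(storage_path: str, experiment_name: str) -> Optional[str]:
    """Return the newest checkpoint directory written by any previous run.

    Illustrative only: tightly coupled to Ray Train's on-disk layout,
    which is exactly the coupling this issue asks to avoid.
    """
    pattern = os.path.join(storage_path, experiment_name, "**", "checkpoint_*")
    candidates = [p for p in glob(pattern, recursive=True) if os.path.isdir(p)]
    if not candidates:
        return None
    # Fall back to modification time to decide which checkpoint is latest.
    return max(candidates, key=os.path.getmtime)
```

The returned path can then be wrapped with `Checkpoint.from_directory(...)` and passed to the trainer's `resume_from_checkpoint` argument before the job is resubmitted.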
Use case
It would be great to have an API that we could call to get the location of the latest checkpoint from the previous iteration of a given run, for example as sketched below.
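To make the request concrete, the API might look something like the call below. This is purely a hypothetical shape, not an existing Ray Train function; `train_func` is the training function from the earlier sketch and the storage path and experiment name are placeholders:

```python
import ray
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

# Hypothetical API, not available in Ray Train today: resolve the latest
# checkpoint of a previous run by storage path and experiment name.
latest = ray.train.get_latest_checkpoint(        # hypothetical function
    storage_path="s3://my-bucket/ray_results",   # placeholder bucket
    experiment_name="torch_checkpoint_example",  # placeholder name
)

trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(num_workers=2),
    resume_from_checkpoint=latest,  # existing TorchTrainer argument
)
```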