
Add serve ft nightly #19125

Merged (8 commits, Oct 12, 2021)

Conversation

@jiaodong jiaodong commented Oct 6, 2021

Create a Serve fault-tolerance (FT) test for both local testing and the product nightly.

It uses s3://serve-nightly-tests/fault-tolerant-test-checkpoint as the checkpoint path with a 7-day TTL; each checkpoint is 260 bytes in size.
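The 7-day TTL on that prefix would typically be enforced with an S3 lifecycle rule. A minimal sketch, assuming the bucket and prefix from the description above; the rule ID and the exact bucket configuration are assumptions, not taken from this PR:

```python
# Hypothetical S3 lifecycle rule expiring nightly-test checkpoints
# after 7 days. Only the bucket/prefix and the 7-day expiry come from
# the PR description; the rule ID is illustrative.
lifecycle_rule = {
    "Rules": [
        {
            "ID": "expire-serve-ft-checkpoints",
            "Filter": {"Prefix": "fault-tolerant-test-checkpoint"},
            "Status": "Enabled",
            "Expiration": {"Days": 7},
        }
    ]
}

# With boto3 this would be applied to the bucket via
# s3_client.put_bucket_lifecycle_configuration(
#     Bucket="serve-nightly-tests", LifecycleConfiguration=lifecycle_rule)
```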

Each test run executes in a fresh uuid4 namespace with a unique storage key, going through deploy() -> kill controller & Ray cluster -> resume in the same namespace -> check endpoint availability, then attempts to clean up the checkpoint file and exits.

Works for both a local disk path and an S3 path. Since we don't have sufficient infra for multi-cluster setups yet, both the local and product tests use the same cluster_utils to run a local cluster, just with different storage paths.
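The per-run isolation described above (a fresh uuid4 namespace plus a unique storage key under the shared checkpoint path) can be sketched as follows. The helper name is hypothetical, not the PR's actual code; only the use of uuid4 and the shared S3 path come from the description:

```python
import uuid

def make_run_identifiers(base_path: str):
    # A fresh uuid4 namespace per run means controller state from a
    # prior or concurrent run cannot leak in; a unique checkpoint key
    # keeps runs from clobbering each other's checkpoints.
    namespace = f"serve-ft-{uuid.uuid4()}"
    checkpoint_key = f"{base_path}/checkpoint-{uuid.uuid4().hex}"
    return namespace, checkpoint_key

ns, key = make_run_identifiers(
    "s3://serve-nightly-tests/fault-tolerant-test-checkpoint")
```

Resuming in the same namespace is what lets the new Serve controller find the deployments it needs to recover.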

Checks

  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@jiaodong jiaodong added the serve Ray Serve Related Issue label Oct 6, 2021
python/ray/serve/storage/checkpoint_path.py (resolved)
Diff snippet under discussion:

    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
    aws_session_token=aws_session_token)
    aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID", None),
jiaodong (Member Author):
I think previously we weren't really getting these tokens in make_kv_store, so I'm adding them at the client level with the actual S3 integration.

Contributor:

boto should fall back to the environment if we pass in None, so this is not necessary.
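The reviewer's point is that boto3 resolves credentials itself when given None (environment variables, shared config files, instance metadata, in order). A simplified illustration of just the environment-variable step; this helper is not boto3's actual code, only a sketch of the fallback behavior:

```python
import os

def resolve_credential(explicit_value, env_var):
    # Mimics one step of boto3's credential chain: an explicitly
    # passed value wins, otherwise fall back to the environment.
    # boto3 itself additionally checks ~/.aws/credentials, config
    # files, and instance metadata before giving up.
    if explicit_value is not None:
        return explicit_value
    return os.environ.get(env_var)
```

This is why passing `os.environ.get("AWS_ACCESS_KEY_ID", None)` explicitly is redundant: passing None reaches the same environment lookup inside boto3.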

Contributor:

bump on this

python/ray/serve/storage/checkpoint_path.py (resolved)
Comment on lines +5 to +7:

    For product testing, we skip the part of actually starting new cluster as
    it's Job Manager's responsibility, and only re-deploy to the same cluster
    with remote checkpoint.
Contributor:

Hmm, I think we should still fully test the path where the entire cluster goes down. If we just test the controller being killed, there may be some hidden assumptions about state in the GCS or something.

Can you just use cluster_utils to blow away the whole cluster and create a new one?
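The suggested full-cluster restart could be sketched roughly as below. This is a hedged sketch only: ray.cluster_utils.Cluster is an internal Ray testing helper, and the constructor arguments, helper names, and callback structure here are assumptions that may differ from the PR's actual test and across Ray versions:

```python
def run_with_full_cluster_restart(deploy_fn, verify_fn, namespace):
    # Tear down every Ray process between the two phases (not just
    # the Serve controller), so no GCS or process-level state can
    # survive; recovery must come entirely from the checkpoint path.
    import ray
    from ray.cluster_utils import Cluster

    cluster = Cluster(initialize_head=True)
    ray.init(address=cluster.address, namespace=namespace)
    deploy_fn()            # initial deploy() writes checkpoints
    ray.shutdown()
    cluster.shutdown()     # blow away the whole cluster

    cluster = Cluster(initialize_head=True)
    ray.init(address=cluster.address, namespace=namespace)
    verify_fn()            # resume from checkpoint, check endpoints
    ray.shutdown()
    cluster.shutdown()
```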

jiaodong (Member Author):

I would also like to completely start a new Ray cluster if possible, but I think cluster_utils is for local tests. Is there an equivalent on product to blow away the current cluster and start a new one?

Contributor:

Hmm, I'm not sure. cluster_utils would be a good first step; it at least clears all process-level state.

Contributor:

Ask internally on Slack if there's a best way to do this? Worst case, you just use the SDK to kill the cluster and start a new one.

Contributor:

cluster_utils will run much faster, though, and it'll be nice to be able to run it locally for testing. If possible, maybe make one test with both "backends" and start with just cluster_utils?

jiaodong (Member Author):

I asked a few product folks; our SDK has start/terminate cluster APIs, but the issue is that the current releaser/e2e.py assumes it runs a single script, as if on a laptop, in a single Ray cluster. Doing this in our test script might not work well; the most feasible approach is probably to extend the existing test script's YAML field to support a multi-cluster setup.

For now, I'm trying to work with multiple local clusters on product, which should terminate quickly but also give us more coverage compared to the current revision.

@simon-mo simon-mo requested a review from edoakes October 11, 2021 16:52
@simon-mo simon-mo (Contributor) left a comment:

@jiaodong please make sure to add these tests to the preset dashboard once they are running!

@simon-mo simon-mo merged commit 85b8a6d into ray-project:master Oct 12, 2021
Labels: serve (Ray Serve Related Issue)

3 participants