Support in-tree autoscaler in ray operator #28
Comments
It would be great to see support for in-tree autoscaling! Are there any API changes to the in-tree autoscaler or proto APIs that might make this easier to implement/maintain? (I'm happy to work together on this issue)
@ericl We did some analysis and noticed it's kind of hard to start the monitor and keep the exact same pattern as in ray/core. I do think we need some changes to provide a smooth and pluggable experience. Let us add more details in the issue and we can have the discussion.
Cc @DmitriGekhtman, who maintains the in-tree operator.
@Jeffwan could you say more about why having the autoscaler run in the head pod is preferable for the use cases you are considering? If I understand right, you'd also prefer the autoscaler to interact directly with the K8s API server, rather than acting on a custom resource and delegating pod management to the operator. Just curious whether there are particular reasons this way of doing things works best for you, besides the fact that the Ray autoscaler is currently set up to favor this deployment strategy.
I guess "in-tree autoscaler" mostly means "monitor.py" from the main Ray project. |
@DmitriGekhtman I missed your last comment. Scoping the autoscaler at the cluster level matches our expectation. Since the autoscaler may have different policies etc. in the future, this gives us enough flexibility to customize the autoscaler for each cluster and for different Ray versions. (We are not end users; version upgrades take time, and it's common to have multiple Ray versions running in the cluster at the same time.)
I actually prefer to have the autoscaler update the Kubernetes CRD, so there's always one owner of the pods and the responsibility is clear.
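To make the "one owner of the pods" idea concrete, here is a minimal sketch of what an autoscaler-side scaling request could look like if it only edits the RayCluster custom resource and leaves all pod creation/deletion to the operator. The API group/version and field names (ray.io, workerGroupSpecs, replicas) are assumptions about the CRD layout, not taken from this repo's actual code:

```python
# Minimal sketch (not the actual autoscaler implementation): express a scaling
# decision by updating the RayCluster custom resource and let the operator
# reconcile pods to match. Group/version/plural and field names are assumptions.
from kubernetes import client, config


def request_worker_count(cluster: str, namespace: str, group_index: int, replicas: int) -> None:
    """Set the desired replica count on one worker group of a RayCluster CR."""
    config.load_incluster_config()  # assumes this runs inside the cluster, e.g. in the head pod
    api = client.CustomObjectsApi()

    # Read-modify-write the custom resource. The operator remains the only
    # component that actually creates or deletes worker pods; this code only
    # declares the desired state.
    cr = api.get_namespaced_custom_object(
        group="ray.io", version="v1alpha1", namespace=namespace,
        plural="rayclusters", name=cluster)
    cr["spec"]["workerGroupSpecs"][group_index]["replicas"] = replicas
    api.replace_namespaced_custom_object(
        group="ray.io", version="v1alpha1", namespace=namespace,
        plural="rayclusters", name=cluster, body=cr)
```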
That's correct. We did a POC like the one below to verify the functionality, but feel there are some upstream changes to make. Currently, we are not using autoscaling in our environments yet.
All of this makes sense. Mounting a config map works. Another option is to have the autoscaler read the custom resource and do the translation to a suitable format itself, once per autoscaler iteration. This has the advantage that changes to the CR propagate faster to the autoscaler -- mounted config maps take a while to update.
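To illustrate the second option, here is a rough sketch of a per-iteration translation step, assuming the kubernetes Python client; the CRD field names (workerGroupSpecs, groupName, minReplicas, maxReplicas, template) and the autoscaler config keys are illustrative assumptions rather than the agreed-upon mapping:

```python
# A rough sketch of "read the CR and translate it each autoscaler iteration".
# All field names and config keys here are assumptions for illustration.
from kubernetes import client, config


def raycluster_to_autoscaler_config(cluster: str, namespace: str) -> dict:
    """Fetch the RayCluster CR and translate it into an autoscaler config dict."""
    config.load_incluster_config()
    api = client.CustomObjectsApi()
    cr = api.get_namespaced_custom_object(
        group="ray.io", version="v1alpha1", namespace=namespace,
        plural="rayclusters", name=cluster)

    node_types = {}
    max_workers = 0
    for group in cr["spec"].get("workerGroupSpecs", []):
        node_types[group["groupName"]] = {
            "min_workers": group.get("minReplicas", 0),
            "max_workers": group.get("maxReplicas", 0),
            # Pass the pod template through so autoscaler-requested nodes match
            # what the operator would create.
            "node_config": group.get("template", {}),
            "resources": {},  # would be derived from container resource requests
        }
        max_workers += group.get("maxReplicas", 0)

    return {
        "cluster_name": cluster,
        "provider": {"type": "kubernetes", "namespace": namespace},
        "max_workers": max_workers,
        "available_node_types": node_types,
        "idle_timeout_minutes": 5,
    }
```

Reading the CR directly avoids the config-map propagation delay, at the cost of one extra API server read per autoscaler iteration.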
I wrote a design doc fleshing out the above proposals a bit more: https://docs.google.com/document/d/1I2CYu2-hTQUJ29wPonMvCZgEiRPs1-KeqT1mzrC6LXY Please let us know about the direction and any suggestions or improvements you might have :)
ray-project/ray#21086 - Ray upstream already has the support. Under the current implementation, the kuberay operator's work becomes easier: the operator should take action on this field to orchestrate the autoscaler. The entire process should be transparent to users. (See kuberay/ray-operator/api/raycluster/v1alpha1/raycluster_types.go, lines 21 to 22 at ffa7e60.)
However, version management is still tricky. We should not support the autoscaler for earlier Ray versions.
Yep, I agree that we don't need to support the Ray autoscaler with earlier Ray versions. |
Major implementation is done. Let's create separate issues to track future improvements. |
Controller support to scale in arbitrary pods via the following API. This is extremely helpful for users who use an out-of-tree autoscaler.
https://github.com/ray-project/ray-contrib/blob/f4076b4ec5bfae4cea6d9b66a1ec4e63680ca366/ray-operator/api/v1alpha1/raycluster_types.go#L56-L60
In our case, we would still like to use the in-tree autoscaler. The major differences are:
- The head pod will start the monitor process, so the head start command needs --autoscaling-config (see the sketch after this list).
- The RayCluster custom resource needs to be translated into a config file which can be used by the in-tree autoscaler. Example here.
- A new field has been reserved in the API to support this change.
https://github.com/ray-project/ray-contrib/pull/22/files#diff-edc3be4feb67012c143a57fcaefafb4c95e4cd6e661a67bb2ad1da340255bc00R21-R22
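For illustration, here is a minimal sketch of how the head pod could consume such a generated config file via ray start's --autoscaling-config flag, which is what launches monitor.py on the head node. The helper and the file path are hypothetical; the real entrypoint is whatever the head pod spec defines:

```python
# Minimal sketch, assuming the config has already been translated from the
# RayCluster CR (e.g. by a helper like raycluster_to_autoscaler_config above).
# The file path here is arbitrary and only for illustration.
import subprocess
from pathlib import Path

import yaml  # PyYAML


def start_head_with_autoscaler(autoscaler_config: dict) -> None:
    """Write the translated config to disk and start the Ray head with it."""
    config_path = Path("/tmp/ray_autoscaling_config.yaml")
    config_path.write_text(yaml.safe_dump(autoscaler_config))

    # --autoscaling-config makes the head node start the monitor process
    # (monitor.py), i.e. the in-tree autoscaler discussed in this issue.
    subprocess.run(
        ["ray", "start", "--head", f"--autoscaling-config={config_path}"],
        check=True,
    )
```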