[serve] Add replica placement group support #37830
Conversation
cc @Yard1
Signed-off-by: Edward Oakes <ed.nmi.oakes@gmail.com>
@jjyao this is ready for review (note there is a little bit of cleanup pending as noted in the description).
overall lgtm
```python
# Placement group bundles and strategy *for this replica*.
# These are optional: by default replicas do not have a placement group.
placement_group_bundles: Optional[List[Dict[str, float]]] = None
placement_group_strategy: Optional[str] = None
```
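To make the semantics of these two fields concrete, here is a small sketch of the kind of validation that could apply to them. The function name and exact rules are illustrative assumptions, not the PR's actual code; the four strategy names are Ray's real placement group strategies.

```python
from typing import Dict, List, Optional

# The four placement group strategies Ray supports.
VALID_STRATEGIES = {"PACK", "SPREAD", "STRICT_PACK", "STRICT_SPREAD"}


def validate_placement_group_options(
    bundles: Optional[List[Dict[str, float]]],
    strategy: Optional[str],
) -> None:
    """Illustrative checks for the two per-replica options (not Serve's code)."""
    if strategy is not None and bundles is None:
        raise ValueError(
            "placement_group_strategy requires placement_group_bundles to be set."
        )
    if bundles is not None:
        if len(bundles) == 0:
            raise ValueError("placement_group_bundles must be non-empty.")
        if strategy is not None and strategy not in VALID_STRATEGIES:
            raise ValueError(f"Invalid placement group strategy: {strategy!r}")
```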
I think we can create a `PlacementGroupDeploymentSchedulingPolicy` that contains `placement_group_bundles` and `placement_group_strategy`, and register this policy during `on_deployment_created`.
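Sketching the reviewer's suggestion as a minimal dataclass. The class name and fields come from the comment above, but this is a hypothetical shape, not the actual Serve scheduler API:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass(frozen=True)
class PlacementGroupDeploymentSchedulingPolicy:
    """Hypothetical policy object bundling the two new per-replica options.

    Under the suggestion, the scheduler would register an instance of this
    in its on_deployment_created hook instead of threading the two fields
    through separately.
    """

    placement_group_bundles: Optional[List[Dict[str, float]]] = None
    placement_group_strategy: Optional[str] = None
```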
I thought more about this and actually I don't think it changes anything related to the `DeploymentSchedulingPolicy`. The placement group is only relevant to each replica itself. We probably still want to `SPREAD` the different placement groups among each other like the existing policy (and maintain things like `max_replicas_per_node`).
Yeah, makes sense. Currently there is no way to spread PGs.
If you think it's a valid use case, would you mind filing an enhancement issue so I can track it on my side?
Nice work so far! Most of my suggestions are nits to improve docstrings.
I left a few questions about co-scheduling Serve replicas with Ray actors/tasks using PGs, but based on the unit tests those don't seem as relevant. Feel free to resolve them if they're not.
```diff
@@ -76,9 +82,18 @@ def requires_long_poll_broadcast(self, new_version):
         )

     def compute_hashes(self):
         # If this changes, the controller will directly restart all existing replicas.
         # If these change, the controller will perform a rolling upgrade of existing replicas.
```
For my knowledge: is the rolling upgrade after a placement group change required by Ray Core, or is it driven by other considerations?
You can't modify a placement group in place; you need to remove it and create a new one (similar to changing an actor's resource requirements).
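A minimal sketch of that update path. Here `remove_fn` and `create_fn` stand in for `ray.util.remove_placement_group` and `ray.util.placement_group`, injected so the shape is clear without a running cluster; this is illustrative, not the controller's actual code:

```python
from typing import Callable, Dict, List


def replace_placement_group(
    old_pg,
    new_bundles: List[Dict[str, float]],
    new_strategy: str,
    *,
    remove_fn: Callable,
    create_fn: Callable,
):
    """Placement groups can't be mutated in place: tear down, then recreate.

    This mirrors the rolling-upgrade behavior described above -- the old
    group is removed and a fresh one is created for the new replica.
    """
    remove_fn(old_pg)
    return create_fn(new_bundles, new_strategy)
```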
```python
    ["serve", "status", "-a", "http://localhost:52365/"]
)
status = yaml.safe_load(cli_output)["applications"]
# TODO(zcin): fix error handling in the application state manager for
```
@zcin here is the issue I discussed with you on Slack. I plan to merge this as-is, and you can take it as a follow-up.
Nice work! This change looks good to me.
lgtm for deployment state and scheduler parts.
Nice work! Two questions (non-blocking):
- Do we expose the placement group info in serve status?
- Should we add a test spawning a pure Ray actor or Ray task from a Serve replica, to make sure they are under the same PG? (If one already exists, ignore this.)
@sihanwang41 yes we should; I can file a follow-up issue for that. And yeah, I'm verifying we get spawned in the right PG but not actually spawning a task/actor -- I'll add that.
```python
serve.run(Infeasible.bind())


def test_coschedule_actors_and_tasks(serve_instance):
```
@sihanwang41 added this test you asked about
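Roughly, the check such a test performs, with the placement group lookup stubbed out here (the real test would call `ray.util.get_current_placement_group()` inside the replica and inside the spawned actor/task; this stub-based sketch is an assumption, not the PR's test code):

```python
from typing import Callable, Optional


def coscheduled(pg_id_of: Callable[[str], Optional[str]]) -> bool:
    """True if the replica and a child it spawned share one placement group.

    pg_id_of stands in for querying the current placement group ID from
    within each process.
    """
    replica_pg = pg_id_of("replica")
    child_pg = pg_id_of("child")
    return replica_pg is not None and replica_pg == child_pg
```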
Why are these changes needed?
Adds support for `placement_group_bundles` and `placement_group_strategy` in the deployment config. This enables creating a placement group per replica of a deployment, which is a feature request from users orchestrating multiple actors within a replica (e.g., to perform model-parallel inference). The replica actor will be created in the bundle with index `0` (following the precedent set in Ray Train and Ray Tune).
TODO before merging:
- `test_deployment_scheduler`.
Related issue number
Checks
- I've signed off every commit (`git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I've added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.
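For orientation, a per-replica placement group as described above might appear in a Serve config file roughly like this. This is an illustrative fragment only: the application name and import path are made up, and the exact schema and field placement should be checked against the Serve docs for your Ray version.

```yaml
applications:
  - name: app
    import_path: my_module:app   # hypothetical import path
    deployments:
      - name: ModelParallelDeployment
        num_replicas: 2
        # One placement group is created per replica; the replica actor
        # itself lands in bundle 0, following Ray Train / Ray Tune.
        placement_group_bundles:
          - {"CPU": 1}   # bundle 0: the replica actor
          - {"GPU": 1}   # bundle 1: e.g. a model-shard worker actor
        placement_group_strategy: STRICT_PACK
```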