
Placement Groups

Placement groups allow users to atomically reserve groups of resources across multiple nodes (i.e., gang scheduling). They can then be used to schedule Ray tasks and actors packed as close together as possible for locality (PACK), or spread apart (SPREAD).

The Java demo code in this documentation can be found at https://github.com/ray-project/ray/blob/master/java/test/src/main/java/io/ray/docdemo/PlacementGroupDemo.java.

Here are some use cases:

  • Gang Scheduling: Your application requires all tasks/actors to be scheduled and start at the same time.
  • Maximizing data locality: You'd like to place or schedule your tasks and actors close to your data to avoid object transfer overheads.
  • Load balancing: To improve application availability and avoid resource overload, you'd like to place your actors or tasks on different physical machines as much as possible.

To learn more about production use cases, check out the examples <ray-placement-group-examples-ref>.

Key Concepts

A bundle is a collection of "resources", e.g., {"GPU": 4}.

  • A bundle must be able to fit on a single node on the Ray cluster.
  • Bundles are then placed according to the "placement group strategy" across nodes on the cluster.

A placement group is a collection of bundles.

  • Each bundle is given an "index" within the placement group.
  • Bundles are then placed according to the "placement group strategy" across nodes on the cluster.
  • After the placement group is created, tasks or actors can then be scheduled according to the placement group, and even on individual bundles.

A placement group strategy is an algorithm for selecting nodes for bundle placement. Read more about placement strategies <pgroup-strategy>.
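Putting these concepts together, here is a minimal Python sketch (the resource amounts, actor class, and bundle index are illustrative): a two-bundle placement group is created with the PACK strategy, and an actor is then pinned to the bundle at index 1.

import ray
from ray.util.placement_group import placement_group

ray.init(num_cpus=2, num_gpus=1)

# Two bundles: index 0 reserves a GPU, index 1 reserves a CPU.
pg = placement_group([{"GPU": 1}, {"CPU": 1}], strategy="PACK")
ray.get(pg.ready())

@ray.remote(num_cpus=1)
class Worker:
    def hello(self):
        return "hello"

# Schedule the actor on the bundle at index 1 of the placement group.
worker = Worker.options(
    placement_group=pg,
    placement_group_bundle_index=1).remote()
print(ray.get(worker.hello.remote()))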

Starting a placement group

A Ray placement group can be created via the ray.util.placement_group (Python) or PlacementGroups.createPlacementGroup (Java) API. Placement groups take in a list of bundles and a placement strategy <pgroup-strategy>:

Python

# Import placement group APIs.
from ray.util.placement_group import (
    placement_group,
    placement_group_table,
    remove_placement_group
)

# Initialize Ray.
import ray
ray.init(num_gpus=2, resources={"extra_resource": 2})

bundle1 = {"GPU": 2}
bundle2 = {"extra_resource": 2}

pg = placement_group([bundle1, bundle2], strategy="STRICT_PACK")

Java

// Initialize Ray.
Ray.init();

// Construct a list of bundles.
Map<String, Double> bundle = ImmutableMap.of("CPU", 1.0);
List<Map<String, Double>> bundles = ImmutableList.of(bundle);

// Make a creation option with bundles and strategy.
PlacementGroupCreationOptions options =
  new PlacementGroupCreationOptions.Builder()
    .setBundles(bundles)
    .setStrategy(PlacementStrategy.STRICT_SPREAD)
    .build();

PlacementGroup pg = PlacementGroups.createPlacementGroup(options);

Important

Each bundle must be able to fit on a single node on the Ray cluster.

Placement groups are created atomically: if any bundle cannot fit on any of the current nodes, the entire placement group will not be ready.

Python

# Wait until placement group is created.
ray.get(pg.ready())

# You can also use ray.wait.
ready, unready = ray.wait([pg.ready()], timeout=0)

# You can look at placement group states using this API.
print(placement_group_table(pg))

Java

// Wait for the placement group to be ready within the specified timeout (in seconds).
boolean ready = pg.wait(60);
Assert.assertTrue(ready);

// You can look at placement group states using this API.
List<PlacementGroup> allPlacementGroup = PlacementGroups.getAllPlacementGroups();
for (PlacementGroup group: allPlacementGroup) {
  System.out.println(group);
}

Infeasible placement groups are pending until resources are available. The Ray Autoscaler is aware of placement groups and auto-scales the cluster to ensure pending groups can be placed as needed.
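If you want to check on a possibly pending group without blocking, you can poll its ready() reference with a timeout. Below is a minimal Python sketch; the 10-GPU bundle is illustrative and intentionally infeasible on a 2-GPU cluster.

import ray
from ray.util.placement_group import placement_group

# Standalone sketch: a small local cluster with only 2 GPUs.
ray.init(num_gpus=2)

# This bundle cannot fit on any current node, so the group stays pending
# (or triggers scale-up on an autoscaling cluster).
pending_pg = placement_group([{"GPU": 10}], strategy="PACK")

# Poll readiness without blocking forever.
ready, unready = ray.wait([pending_pg.ready()], timeout=5)
if not ready:
    print("Placement group is still pending; waiting for resources...")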

Strategy types

Ray currently supports the following placement group strategies:

STRICT_PACK

All bundles must be placed into a single node on the cluster.

PACK

All provided bundles are packed onto a single node on a best-effort basis. If strict packing is not feasible (i.e., some bundles do not fit on the node), bundles can be placed onto other nodes.

STRICT_SPREAD

Each bundle must be scheduled on a separate node.

SPREAD

Each bundle will be spread onto separate nodes on a best-effort basis. If strict spreading is not feasible, bundles can be placed on overlapping nodes.
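For reference, here is a short Python sketch of how each strategy is requested. Creation itself is non-blocking, and the bundle sizes below are illustrative; groups that cannot be satisfied simply stay pending.

import ray
from ray.util.placement_group import placement_group

ray.init(num_cpus=2)

bundles = [{"CPU": 1}, {"CPU": 1}]

# All bundles on one node, or the group never becomes ready.
strict_pack_pg = placement_group(bundles, strategy="STRICT_PACK")

# Prefer one node, but spill over to other nodes if needed.
pack_pg = placement_group(bundles, strategy="PACK")

# Every bundle on a different node, or the group never becomes ready.
strict_spread_pg = placement_group(bundles, strategy="STRICT_SPREAD")

# Prefer different nodes, but allow bundles to share a node if needed.
spread_pg = placement_group(bundles, strategy="SPREAD")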

Quick Start

Let's see an example of using a placement group. Note that this example runs within a single node.

import ray
from pprint import pprint

# Import placement group APIs.
from ray.util.placement_group import (
    placement_group,
    placement_group_table,
    remove_placement_group
)

ray.init(num_gpus=2, resources={"extra_resource": 2})

Let's create a placement group. Recall that each bundle is a collection of resources, and tasks or actors can be scheduled on each bundle.

Note

When specifying bundles,

  • "CPU" will correspond with num_cpus as used in ray.remote
  • "GPU" will correspond with num_gpus as used in ray.remote
  • Other resources will correspond with resources as used in ray.remote (see the sketch after this list).
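For example, a bundle of {"CPU": 1, "GPU": 1, "extra_resource": 1} could host a task declared with the matching ray.remote arguments. The function name below is a hypothetical illustration:

# This task fits into a bundle of {"CPU": 1, "GPU": 1, "extra_resource": 1}.
@ray.remote(num_cpus=1, num_gpus=1, resources={"extra_resource": 1})
def uses_bundle_resources():
    return "scheduled inside the bundle"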

Once the placement group reserves resources, the reserved resources become unavailable to other tasks and actors until the placement group is removed. For example:

Python

# Two "CPU"s are available.
ray.init(num_cpus=2)

# Create a placement group.
pg = placement_group([{"CPU": 2}])
ray.get(pg.ready())

# Now, 2 CPUs are not available anymore because they are pre-reserved by the placement group.
@ray.remote(num_cpus=2)
def f():
    return True

# Won't be scheduled because 2 CPUs are no longer available.
f.remote()

# Will be scheduled because 2 CPUs are reserved by the placement group.
f.options(placement_group=pg).remote()

Java

System.setProperty("ray.head-args.0", "--num-cpus=2");
Ray.init();

public static class Counter {
  public static String ping() {
    return "pong";
  }
}

// Construct a list of bundles.
Map<String, Double> bundle = ImmutableMap.of("CPU", 2.0);
List<Map<String, Double>> bundles = ImmutableList.of(bundle);

// Create a placement group and make sure its creation is successful.
PlacementGroupCreationOptions options =
  new PlacementGroupCreationOptions.Builder()
    .setBundles(bundles)
    .setStrategy(PlacementStrategy.STRICT_SPREAD)
    .build();

PlacementGroup pg = PlacementGroups.createPlacementGroup(options);
boolean isCreated = pg.wait(60);
Assert.assertTrue(isCreated);

// Won't be scheduled because 2 CPUs are no longer available.
ObjectRef<String> obj = Ray.task(Counter::ping)
  .setResource("CPU", 2.0)
  .remote();

List<ObjectRef<String>> waitList = ImmutableList.of(obj);
WaitResult<String> waitResult = Ray.wait(waitList, 1, 5 * 1000);
Assert.assertEquals(1, waitResult.getUnready().size());

// Will be scheduled because 2 CPUs are reserved by the placement group.
obj = Ray.task(Counter::ping)
  .setPlacementGroup(pg, 0)
  .setResource("CPU", 2.0)
  .remote();
Assert.assertEquals(obj.get(), "pong");


Note

When using placement groups, it is recommended to verify that the placement group is ready (by calling ray.get(pg.ready())) and that it has the proper resources. Ray assumes that the placement group will be properly created and does not print a warning about infeasible tasks.
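One way to do this, sketched below in Python, is to wait on pg.ready() with a timeout and inspect the group's state if it does not become ready. The 10-second timeout and one-CPU bundle are illustrative.

import ray
from ray.util.placement_group import placement_group, placement_group_table

pg = placement_group([{"CPU": 1}])
try:
    # Raises GetTimeoutError if the group is not ready within 10 seconds.
    ray.get(pg.ready(), timeout=10)
except ray.exceptions.GetTimeoutError:
    print("Placement group was not created in time; current state:")
    print(placement_group_table(pg))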

Python

gpu_bundle = {"GPU": 2}
extra_resource_bundle = {"extra_resource": 2}

# Reserve bundles with strict pack strategy.
# It means Ray will reserve 2 "GPU" and 2 "extra_resource" on the same node (strict pack) within a Ray cluster.
# Using this placement group for scheduling actors or tasks will guarantee that they will
# be colocated on the same node.
pg = placement_group([gpu_bundle, extra_resource_bundle], strategy="STRICT_PACK")

# Wait until placement group is created.
ray.get(pg.ready())

Java

Map<String, Double> bundle1 = ImmutableMap.of("GPU", 2.0);
Map<String, Double> bundle2 = ImmutableMap.of("extra_resource", 2.0);
List<Map<String, Double>> bundles = ImmutableList.of(bundle1, bundle2);

/**
 * Reserve bundles with strict pack strategy.
 * It means Ray will reserve 2 "GPU" and 2 "extra_resource" on the same node (strict pack) within a Ray cluster.
 * Using this placement group for scheduling actors or tasks will guarantee that they will
 * be colocated on the same node.
 */
PlacementGroupCreationOptions options =
  new PlacementGroupCreationOptions.Builder()
    .setBundles(bundles)
    .setStrategy(PlacementStrategy.STRICT_PACK)
    .build();

PlacementGroup pg = PlacementGroups.createPlacementGroup(options);
boolean isCreated = pg.wait(60);
Assert.assertTrue(isCreated);

Now let's define an actor that uses a GPU. We'll also define a task that uses extra_resource.

Python

@ray.remote(num_gpus=1)
class GPUActor:
    def __init__(self):
        pass

@ray.remote(resources={"extra_resource": 1})
def extra_resource_task():
    import time
    # Simulate a long-running task.
    time.sleep(10)

# Create GPU actors on the gpu bundle.
gpu_actors = [GPUActor.options(
    placement_group=pg,
    # This is the index from the original bundle list.
    # This index is set to -1 by default, which means any available bundle.
    placement_group_bundle_index=0)  # Index of gpu_bundle is 0.
.remote() for _ in range(2)]

# Launch extra_resource tasks on the extra_resource bundle.
extra_resource_tasks = [extra_resource_task.options(
    placement_group=pg,
    # This is the index from the original bundle list.
    # This index is set to -1 by default, which means any available bundle.
    placement_group_bundle_index=1)  # Index of extra_resource_bundle is 1.
.remote() for _ in range(2)]

Java

public static class Counter {
  private int value;

  public Counter(int initValue) {
    this.value = initValue;
  }

  public int getValue() {
    return value;
  }

  public static String ping() {
    return "pong";
  }
}

// Create GPU actors on a gpu bundle.
for (int index = 0; index < 2; index++) {
  Ray.actor(Counter::new, 1)
    .setResource("GPU", 1.0)
    .setPlacementGroup(pg, 0)
    .remote();
}

// Launch extra_resource tasks on the extra_resource bundle.
for (int index = 0; index < 2; index++) {
  Ray.task(Counter::ping)
    .setPlacementGroup(pg, 1)
    .setResource("extra_resource", 1.0)
    .remote().get();
}

Now, you can guarantee that all GPU actors and extra_resource tasks are located on the same node, because they are scheduled on a placement group with the STRICT_PACK strategy.

Note

To fully utilize the resources pre-reserved by a placement group, Ray automatically schedules child tasks/actors to the same placement group as their parent.

Python

# Create a placement group with the STRICT_SPREAD strategy.
pg = placement_group([{"CPU": 2}, {"CPU": 2}], strategy="STRICT_SPREAD")
ray.get(pg.ready())

@ray.remote
def child():
    pass

@ray.remote
def parent():
    # The child task is scheduled with the same placement group as its parent
    # although child.options(placement_group=pg).remote() wasn't called.
    ray.get(child.remote())

ray.get(parent.options(placement_group=pg).remote())

To avoid this, specify placement_group=None in the options of the child task/actor's remote call:

@ray.remote
def parent():
    # In this case, the child task won't be
    # scheduled with the parent's placement group.
    ray.get(child.options(placement_group=None).remote())

Java

This is not implemented in the Java API yet.

You can remove a placement group at any time to free its allocated resources.

Python

# This API is asynchronous.
remove_placement_group(pg)

# Wait until the placement group is removed (removal is asynchronous).
import time
time.sleep(1)
# Check that the placement group has been removed.
pprint(placement_group_table(pg))

"""
{'bundles': {0: {'GPU': 2.0}, 1: {'extra_resource': 2.0}},
'name': 'unnamed_group',
'placement_group_id': '40816b6ad474a6942b0edb45809b39c3',
'state': 'REMOVED',
'strategy': 'STRICT_PACK'}
"""

ray.shutdown()

Java

PlacementGroups.removePlacementGroup(placementGroup.getId());

PlacementGroup removedPlacementGroup = PlacementGroups.getPlacementGroup(placementGroup.getId());
Assert.assertEquals(removedPlacementGroup.getState(), PlacementGroupState.REMOVED);

Named Placement Groups

A placement group can be given a globally unique name. This allows you to retrieve the placement group from any job in the Ray cluster. This can be useful if you cannot directly pass the placement group handle to the actor or task that needs it, or if you are trying to access a placement group launched by another driver. Note that the placement group will still be destroyed if its lifetime isn't detached. See placement-group-lifetimes for more details.

Python

# first_driver.py
# Create a placement group with a global name.
pg = placement_group([{"CPU": 2}, {"CPU": 2}], strategy="STRICT_SPREAD", lifetime="detached", name="global_name")
ray.get(pg.ready())

Then, we can retrieve the placement group later from a different driver.

# second_driver.py
# Retrieve a placement group with a global name.
pg = ray.util.get_placement_group("global_name")

Java

// Create a placement group with a unique name.
Map<String, Double> bundle = ImmutableMap.of("CPU", 1.0);
List<Map<String, Double>> bundles = ImmutableList.of(bundle);

PlacementGroupCreationOptions options =
  new PlacementGroupCreationOptions.Builder()
    .setBundles(bundles)
    .setStrategy(PlacementStrategy.STRICT_SPREAD)
    .setName("global_name")
    .build();

PlacementGroup pg = PlacementGroups.createPlacementGroup(options);
pg.wait(60);

...

// Retrieve the placement group later somewhere.
PlacementGroup group = PlacementGroups.getPlacementGroup("global_name");
Assert.assertNotNull(group);

C++

We also support non-global named placement groups in C++, which means that the placement group name is only valid within the job and cannot be accessed from another job.

Placement Group Lifetimes

Python

By default, the lifetime of a placement group is not detached: the placement group is destroyed when the driver that created it terminates (or, if it was created by a detached actor, when the detached actor is killed). If you'd like to keep the placement group alive regardless of its job or detached actor, specify lifetime="detached". For example:

# first_driver.py
pg = placement_group([{"CPU": 2}, {"CPU": 2}], strategy="STRICT_SPREAD", lifetime="detached")
ray.get(pg.ready())

The placement group's lifetime is now independent of the driver. This means it is possible to retrieve the placement group from other drivers even after the current driver exits. Let's see an example:

# second_driver.py
table = ray.util.placement_group_table()
print(len(table))

Note that the lifetime option is decoupled from the name. If we only specified the name without specifying lifetime="detached", then the placement group can only be retrieved as long as the original driver is still running.
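A small Python sketch of the two combinations (the names are illustrative):

# Retrievable by name only while the creating driver is alive.
named_pg = placement_group([{"CPU": 1}], name="named_only")

# Retrievable by name even after the creating driver exits.
detached_pg = placement_group(
    [{"CPU": 1}], name="named_and_detached", lifetime="detached")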

Java

The lifetime argument is not implemented in the Java API yet.

Tips for Using Placement Groups

  • Learn the lifecycle <ray-placement-group-lifecycle-ref> of placement groups.
  • Learn the fault tolerance <ray-placement-group-ft-ref> of placement groups.
  • See more examples <ray-placement-group-examples-ref> to learn real world use cases of placement group APIs.

Lifecycle

Creation: When a placement group is first created, the request is sent to the GCS. The GCS then sends resource reservation requests to nodes based on the placement group's scheduling strategy. Ray guarantees placement groups are placed atomically.

Autoscaling: Placement groups are pending creation if there are no nodes that can satisfy the resource requirements for a given strategy. The Ray Autoscaler is aware of placement groups and auto-scales the cluster to ensure pending groups can be placed as needed.

Cleanup: Placement groups are automatically removed when the job that created them finishes. The only exception is placement groups created by detached actors; in that case, they fate-share with the detached actor.

Fault Tolerance

If a node that contains some bundles of a placement group dies, the bundles on that node will be rescheduled onto different nodes by the GCS. This means that the initial creation of a placement group is "atomic", but once it is created, there can be partially fulfilled placement groups.

Placement groups are tolerant of worker node failures (bundles on dead nodes are rescheduled). However, placement groups currently cannot tolerate head node failures (GCS failures), as the GCS is a single point of failure in Ray.

API Reference

Placement Group API reference <ray-placement-group-ref>