From 7c3d1830a894ccb3c7a92a1acf87f8d4fc053c00 Mon Sep 17 00:00:00 2001
From: Zach Mueller
Date: Tue, 6 Feb 2024 12:49:01 -0500
Subject: [PATCH] Try spacing maybe?

---
 docs/source/usage_guides/distributed_inference.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/docs/source/usage_guides/distributed_inference.md b/docs/source/usage_guides/distributed_inference.md
index 7ced2e9fddc..8ed896c8fb5 100644
--- a/docs/source/usage_guides/distributed_inference.md
+++ b/docs/source/usage_guides/distributed_inference.md
@@ -196,8 +196,11 @@ model = prepare_pippy(model, example_args=(input,))
 
 There are a variety of parameters you can pass through to `prepare_pippy`:
 
-* `split_points` lets you determine what layers to split the model at. By default we use wherever `device_map="auto" declares, such as `fc` or `conv1`.
+
+* `split_points` lets you determine what layers to split the model at. By default we use wherever `device_map="auto" declares, such as `fc` or `conv1`.
+
 * `num_chunks` determines how the batch will be split and sent to the model itself (so `num_chunks=1` with four split points/four GPUs will have a naive MP where a single input gets passed between the four layer split points)
+
 From here, all that's left is to actually perform the distributed inference!
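
For context on the two parameters touched by this hunk, here is a minimal sketch of passing `split_points` and `num_chunks` to `prepare_pippy` explicitly. The ResNet model, the `layer3` split point, and the batch shape are illustrative assumptions, not part of the patch; the sketch assumes `prepare_pippy` is imported from `accelerate.inference` and that the script is launched with `accelerate launch` across multiple GPUs.

```python
# Illustrative sketch only: the model, split point name, and batch size are assumptions.
import torch
from torchvision.models import resnet50

from accelerate.inference import prepare_pippy

model = resnet50()
# Example input used to trace the model and shape the pipeline stages.
input = torch.randn(8, 3, 224, 224)

model = prepare_pippy(
    model,
    example_args=(input,),
    split_points=["layer3"],  # split at this named submodule instead of the automatic choice
    num_chunks=4,             # run the batch of 8 through the pipeline as 4 microbatches
)

# Perform the distributed inference; with the defaults, the output lands on the last process.
with torch.no_grad():
    output = model(input)
```

Leaving `split_points="auto"` keeps the default behavior described in the doc (split wherever `device_map="auto"` would place boundaries), while `num_chunks` trades pipeline overlap against per-microbatch overhead.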