From 3401987a6a99982796ed303d309bf6ab1d1a76cd Mon Sep 17 00:00:00 2001
From: kwonmha
Date: Fri, 10 Nov 2023 15:34:26 +0900
Subject: [PATCH 1/2] specify config file path

---
 examples/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/examples/README.md b/examples/README.md
index 06e06db0f8c..d5dff54b8dd 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -150,7 +150,7 @@ To run it in each of these various modes, use the following commands:
   * With Accelerate config and launcher
     ```bash
     accelerate config  # This will create a config file on your server
-    accelerate launch ./cv_example.py --data_dir path_to_data  # This will run the script on your server
+    accelerate launch --config_file generated_config_file ./cv_example.py --data_dir path_to_data  # This will run the script on your server
     ```
   * With traditional PyTorch launcher (`torch.distributed.launch` can be used with older versions of PyTorch)
     ```bash
@@ -160,7 +160,7 @@ To run it in each of these various modes, use the following commands:
   * With Accelerate config and launcher, on each machine:
     ```bash
     accelerate config  # This will create a config file on each server
-    accelerate launch ./cv_example.py --data_dir path_to_data  # This will run the script on each server
+    accelerate launch --config_file generated_config_file ./cv_example.py --data_dir path_to_data  # This will run the script on each server
     ```
   * With PyTorch launcher only (`torch.distributed.launch` can be used with older versions of PyTorch)
     ```bash
@@ -179,7 +179,7 @@ To run it in each of these various modes, use the following commands:
   * With Accelerate config and launcher
     ```bash
     accelerate config  # This will create a config file on your TPU server
-    accelerate launch ./cv_example.py --data_dir path_to_data  # This will run the script on each server
+    accelerate launch --config_file generated_config_file ./cv_example.py --data_dir path_to_data  # This will run the script on each server
     ```
   * In PyTorch:
     Add an `xmp.spawn` line in your script as you usually do.
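A note on the `generated_config_file` placeholder above: when `accelerate config` is run without arguments, it writes the answers to Accelerate's default cache location, and `accelerate launch` falls back to that file when no `--config_file` is passed. A minimal sketch for inspecting it, assuming the default `HF_HOME` cache path (this is context, not part of the patch):

```bash
# Show the config written by a bare `accelerate config` run
# (default location, assuming HF_HOME has not been overridden):
cat ~/.cache/huggingface/accelerate/default_config.yaml

# `accelerate env` also prints the configuration currently in effect.
accelerate env
```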
From 30a50e1e304b331e516ccbc660a9e6489ecc0b6a Mon Sep 17 00:00:00 2001
From: kwonmha
Date: Mon, 13 Nov 2023 13:25:26 +0900
Subject: [PATCH 2/2] set the path of the generated config file in the config
 and launch commands

---
 examples/README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/examples/README.md b/examples/README.md
index d5dff54b8dd..f525607aad3 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -149,8 +149,8 @@ To run it in each of these various modes, use the following commands:
 - multi GPUs (using PyTorch distributed mode)
   * With Accelerate config and launcher
     ```bash
-    accelerate config  # This will create a config file on your server
-    accelerate launch --config_file generated_config_file ./cv_example.py --data_dir path_to_data  # This will run the script on your server
+    accelerate config --config_file config.yaml  # This will save the config file to `config.yaml` on your server
+    accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data  # This will run the script on your server
     ```
   * With traditional PyTorch launcher (`torch.distributed.launch` can be used with older versions of PyTorch)
     ```bash
@@ -159,8 +159,8 @@ To run it in each of these various modes, use the following commands:
 - multi GPUs, multi node (several machines, using PyTorch distributed mode)
   * With Accelerate config and launcher, on each machine:
     ```bash
-    accelerate config  # This will create a config file on each server
-    accelerate launch --config_file generated_config_file ./cv_example.py --data_dir path_to_data  # This will run the script on each server
+    accelerate config --config_file config.yaml  # This will save the config file to `config.yaml` on each server
+    accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data  # This will run the script on each server
     ```
   * With PyTorch launcher only (`torch.distributed.launch` can be used with older versions of PyTorch)
     ```bash
@@ -178,8 +178,8 @@ To run it in each of these various modes, use the following commands:
 - (multi) TPUs
   * With Accelerate config and launcher
     ```bash
-    accelerate config  # This will create a config file on your TPU server
-    accelerate launch --config_file generated_config_file ./cv_example.py --data_dir path_to_data  # This will run the script on each server
+    accelerate config --config_file config.yaml  # This will save the config file to `config.yaml` on your TPU server
+    accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data  # This will run the script on each server
     ```
   * In PyTorch:
     Add an `xmp.spawn` line in your script as you usually do.
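To make the revised workflow concrete, here is an illustrative end-to-end session; the `config.yaml` contents shown are one possible result of the interactive prompts for a single-node, 2-GPU setup, not output guaranteed by this patch:

```bash
# Generate the config once, answering the interactive prompts:
accelerate config --config_file config.yaml

# One possible result for a single-node, 2-GPU setup (illustrative values):
cat config.yaml
# compute_environment: LOCAL_MACHINE
# distributed_type: MULTI_GPU
# num_machines: 1
# num_processes: 2
# mixed_precision: 'no'

# Launch against that exact file, so the run is reproducible:
accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data
```

In the multi-node case the same two commands are repeated on every machine; the prompts cover the machine rank and the main node's address, so each node's `config.yaml` differs only in those fields.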