Commit

[examples] polish AutoParallel readme (#3270)
YuliangLiu0306 committed Mar 28, 2023
1 parent 02b0580 commit fd6add5
Showing 2 changed files with 3 additions and 2 deletions.
1 change: 1 addition & 0 deletions examples/tutorial/auto_parallel/README.md
@@ -45,6 +45,7 @@ colossalai run --nproc_per_node 4 auto_parallel_with_resnet.py
You should expect to see a log like this. The log shows the edge cost on the computation graph as well as the sharding strategy for each operation. For example, `layer1_0_conv1 S01R = S01R X RR` means that the first dimension (batch) of the input and output is sharded while the weight is not sharded (S means sharded, R means replicated), which is simply equivalent to data-parallel training.
![](https://raw.githubusercontent.com/hpcaitech/public_assets/main/examples/tutorial/auto-parallel%20demo.png)

+**Note: This experimental feature has been tested on torch 1.12.1 and transformers 4.22.2. If you are using other versions, you may need to modify the code to make it work.**

### Auto-Checkpoint Tutorial

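The `S`/`R` notation quoted in the README diff above is easiest to see with a concrete example. Below is a minimal, framework-free sketch of what `layer1_0_conv1 S01R = S01R X RR` means for a conv layer: activations sharded along the batch dimension across both axes of the device mesh, weights replicated on every device. The 2x2 mesh, the tensor shapes, and the use of plain `torch.chunk`/`conv2d` are illustrative assumptions, not ColossalAI's auto-parallel API.

```python
import torch

mesh_size = 4                      # a 2 x 2 device mesh, flattened for illustration
x = torch.randn(8, 64, 56, 56)     # a full batch of conv1 input activations (assumed shape)
w = torch.randn(64, 64, 3, 3)      # conv1 weight (assumed shape)

# S01R: dim 0 (batch) is split into mesh_size shards, one per device.
x_shards = torch.chunk(x, mesh_size, dim=0)

# RR: every device holds an identical, full copy of the weight.
w_replicas = [w for _ in range(mesh_size)]

# Each device convolves its own batch shard with the replicated weight;
# for this layer that is exactly data-parallel training.
y_shards = [torch.nn.functional.conv2d(xs, wr, padding=1)
            for xs, wr in zip(x_shards, w_replicas)]
print([tuple(ys.shape) for ys in y_shards])  # 4 shards of (2, 64, 56, 56)
```
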
4 changes: 2 additions & 2 deletions examples/tutorial/auto_parallel/requirements.txt
@@ -1,7 +1,7 @@
-torch
+torch==1.12.1
colossalai
titans
pulp
datasets
matplotlib
-transformers
+transformers==4.22.1
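
Since the tutorial is only verified against the pinned versions, a small runtime check can save debugging time. This is a hypothetical helper, not part of the commit; it assumes Python 3.8+ (`importlib.metadata`), and the version strings simply mirror the pins in this diff.

```python
from importlib.metadata import version

# Versions the tutorial was tested with (taken from requirements.txt above).
PINNED = {"torch": "1.12.1", "transformers": "4.22.1"}

for package, expected in PINNED.items():
    installed = version(package)
    if installed != expected:
        print(f"Warning: found {package}=={installed}, "
              f"but the tutorial was tested with {package}=={expected}.")
```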
