Conversation

@qihqi (Collaborator) commented Aug 6, 2024

Output on v5litepod-8:

Number of devices: 8

dtype     benchmark              time (ms)   operand sizes
bfloat16  Matmul replicated      438.17      2048.0 MiB, 2048.0 MiB
bfloat16  Matmul sharded colrow  108.891     2048.0 MiB, 2048.0 MiB
bfloat16  Matmul sharded rowcol   76.6386    2048.0 MiB, 2048.0 MiB
bfloat16  all_gather              68.3381    2048.0 MiB
bfloat16  all_reduce               8.25386   2048.0 MiB
bfloat16  Llama 3xffn shardmap     0.611614  0.0625 MiB, 86.0 MiB, 86.0 MiB, 86.0 MiB
bfloat16  Llama 3xffn gspmd        0.596578  0.0625 MiB, 86.0 MiB, 86.0 MiB, 86.0 MiB
int8      Matmul replicated      186.436     1024.0 MiB, 1024.0 MiB
int8      Matmul sharded colrow   54.9044    1024.0 MiB, 1024.0 MiB
int8      Matmul sharded rowcol   38.6539    1024.0 MiB, 1024.0 MiB
int8      all_gather              34.4571    1024.0 MiB
int8      all_reduce               4.34715   1024.0 MiB
int8      Llama 3xffn shardmap     0.483992  0.03125 MiB, 43.0 MiB, 43.0 MiB, 43.0 MiB
int8      Llama 3xffn gspmd        0.503814  0.03125 MiB, 43.0 MiB, 43.0 MiB, 43.0 MiB
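For readers unfamiliar with the "colrow" and "rowcol" labels above, here is one plausible reading of the two sharding strategies, sketched with NumPy on a single host rather than the PR's actual TPU benchmark code. The device count, sizes, and function names below are illustrative assumptions, not the PR's implementation: "colrow" shards the contraction dimension (each device holds a column shard of A and a row shard of B, producing partial products that must be summed, the job of an all_reduce on real hardware), while "rowcol" is shown here with A row-sharded so each device produces complete output rows that only need concatenating (an all_gather).

```python
# Hedged sketch (NOT the PR's benchmark code): emulating two matmul sharding
# strategies on one host with NumPy. NUM_DEVICES, sizes, and helper names are
# illustrative assumptions.
import numpy as np

NUM_DEVICES = 8
M = K = N = 64  # tiny sizes for illustration; the benchmark used ~2 GiB operands

rng = np.random.default_rng(0)
A = rng.standard_normal((M, K)).astype(np.float32)
B = rng.standard_normal((K, N)).astype(np.float32)

def matmul_colrow(A, B, ndev):
    """A sharded along columns, B along rows: each 'device' holds A[:, ks]
    and B[ks, :], computes a partial product over its slice of the
    contraction dimension, and the partials are summed -- the summation is
    what an all_reduce performs across real devices."""
    K = A.shape[1]
    partials = []
    for d in range(ndev):
        ks = slice(d * K // ndev, (d + 1) * K // ndev)
        partials.append(A[:, ks] @ B[ks, :])
    return np.sum(partials, axis=0)  # stands in for all_reduce

def matmul_rowcol(A, B, ndev):
    """A sharded along rows (B replicated here for simplicity): each
    'device' computes its own complete output rows, and the row blocks are
    concatenated -- the concatenation is what an all_gather performs."""
    M = A.shape[0]
    blocks = []
    for d in range(ndev):
        ms = slice(d * M // ndev, (d + 1) * M // ndev)
        blocks.append(A[ms, :] @ B)
    return np.concatenate(blocks, axis=0)  # stands in for all_gather

expected = A @ B
assert np.allclose(matmul_colrow(A, B, NUM_DEVICES), expected, atol=1e-4)
assert np.allclose(matmul_rowcol(A, B, NUM_DEVICES), expected, atol=1e-4)
```

Under this reading, each strategy's cost is its local block matmul plus one collective, which is consistent with the standalone all_gather and all_reduce rows being timed separately in the output above.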

@qihqi requested review from FanhaiLu1, lsy323, and wang2yn84 on Aug 6, 2024 at 22:24



Review comment (on the line `allcases = [`):

Great! Thanks for adding performance measurement for the ops!

@qihqi qihqi merged commit 1e08833 into main Aug 6, 2024