Conversion rules No.155-158 #265
Conversation
Thanks for your contribution!
tests/test_ calls are not used, so paconvert/api_matcher.py will fail the coverage check in PR-CI-Coverage.
paconvert/api_matcher.py (Outdated)
@@ -682,6 +682,18 @@ def generate_code(self, kwargs):
        return code

class ReduceOpMatcher(BaseMatcher):
Could this reuse the Func2Attribute matcher?
Fixed.
if [ $# -gt 0 ] ; then
    item=$1
    torchrun --nproc_per_node=2 ${item}
What is this torchrun?
The original invocation was python -m torch.distributed.launch; torchrun is the launcher used by newer PyTorch versions: https://pytorch.org/docs/stable/elastic/run.html
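For reference, a minimal side-by-side of the two launch styles (the script name is illustrative; both start 2 processes per node):

    # Older launcher style mentioned above:
    python -m torch.distributed.launch --nproc_per_node=2 scatter.py
    # torchrun equivalent (PyTorch >= 1.10):
    torchrun --nproc_per_node=2 scatter.py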
test_list="scatter.py reduce_scatter.py scatter_object_list.py all_to_all.py ReduceOp.py"
for i in $test_list; do
    torchrun --nproc_per_node=2 ${i}
What is this torchrun?
The original invocation was python -m torch.distributed.launch; torchrun is the launcher used by newer PyTorch versions: https://pytorch.org/docs/stable/elastic/run.html
Coverage can be exempted for these files.
export CUDA_VISIBLE_DEVICES=0,1

if [ $# -gt 0 ] ; then
Suggestion: run.sh could later be integrated directly into convert.sh.
Fixed in PR #270.
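Pieced together from the fragments quoted in this thread, the run.sh under discussion looks roughly like the following; only the quoted lines are confirmed by the diff, and the surrounding else/fi structure is an assumption:

    export CUDA_VISIBLE_DEVICES=0,1
    if [ $# -gt 0 ] ; then
        item=$1
        torchrun --nproc_per_node=2 ${item}
    else
        test_list="scatter.py reduce_scatter.py scatter_object_list.py all_to_all.py ReduceOp.py"
        for i in $test_list; do
            torchrun --nproc_per_node=2 ${i}
        done
    fi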
@@ -0,0 +1,39 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
These file names could be standardized later: all_to_all.py -> test_all_to_all.py, and so on.
Fixed in PR #270.
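One way to apply the suggested renaming, as a hedged sketch only; the actual change landed in PR #270 and may differ:

    # Prefix each distributed test script with test_ (illustrative only):
    for f in scatter.py reduce_scatter.py scatter_object_list.py all_to_all.py ReduceOp.py; do
        git mv "$f" "test_$f"
    done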
@co63oc Merging this first; naming and similar issues can be addressed in a follow-up PR.
PR Docs
#112
The torch.distributed.* APIs require 2 or more GPUs and cannot be exercised in the online test environment, so this PR adds a tests/distributed directory instead. Locally, run convert.sh in tests/distributed to perform the conversion, then run run.sh to execute the tests; see the sketch below.
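A minimal sketch of that local workflow, assuming the scripts are invoked with bash from inside the repository (the exact invocation is not shown in this PR):

    cd tests/distributed
    bash convert.sh   # convert the torch.distributed test scripts
    bash run.sh       # launch the tests on 2 GPUs via torchrun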
155 torch.distributed.scatter: existing docs, verified correct
156 torch.distributed.scatter_object_list: PaddlePaddle/docs#6046
157 torch.distributed.reduce_scatter: PaddlePaddle/docs#6046
158 torch.distributed.all_to_all: existing docs, verified correct
PR APIs