Tensor-parallel communication overlap with userbuffer backend #6362
Conversation
Force-pushed from b81c170 to 609d0b1
for more information, see https://pre-commit.ci
```yaml
ub_tp_comm_overlap_cfg: null
# A yaml file with userbuffer communicator configurations. This file should provide `method`, `dtype`, `num_sm`,
# `num_splits`, `cga_size`, `set_sm_margin`, and `aggregate` for the communicators to use custom settings.
# If the configuration file is not provided, a default setting is used for all communicators.
```
Why is this a yaml file? Should it just be

```yaml
ub_tp_comm_overlap_cfg:
  method: blah
  dtype: blah
  ...
```

?
The two choices were either a config object or a config file, since there are too many UB-related args.
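For illustration, a config file of the kind described above might look like the sketch below. The field names (`method`, `dtype`, `num_sm`, `num_splits`, `cga_size`, `set_sm_margin`, `aggregate`) come from the diff comment; the per-communicator section name and all values are hypothetical.

```yaml
# Hypothetical ub_tp_comm_overlap_cfg file; the section name and all values are
# illustrative assumptions. Only the field names come from the diff comment.
qkv_fprop:
  method: ring_exchange
  dtype: bf16
  num_sm: 4
  cga_size: 2
  num_splits: 4
  set_sm_margin: 0
  aggregate: 0
```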
```python
logging.info(
    "Userbuffer tensor-parallel communication overlap is available with both Transformer Engine and sequence-parallelism."
)
```
This should be `.warning`, and maybe add "... only available with ... Setting ub_tp_comm_overlap to True".
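A sketch of the change being requested; the exact message text is an assumption, since the reviewer elided it:

```python
# Hypothetical rewording per the review comment; the message wording is assumed.
logging.warning(
    "Userbuffer tensor-parallel communication overlap is only available with "
    "Transformer Engine and sequence-parallelism. Setting ub_tp_comm_overlap to True."
)
```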
```python
# Initialize userbuffer communicators. Initialization is done only once at the
# beginning of the first training step.
if self.initialize_ub:
    input_shape = [
        self.cfg.get('encoder_seq_length') * self.cfg.get('micro_batch_size'),
        self.cfg.get('hidden_size'),
    ]
    ub_cfg_file_name = self.cfg.get('ub_tp_comm_overlap_cfg', None)
    if ub_cfg_file_name is not None:
        try:
            import yaml

            with open(ub_cfg_file_name, 'r') as ub_cfg_file:
                ub_cfgs = yaml.safe_load(ub_cfg_file)
        except (ImportError, TypeError):
            print("Failed to read the ub_tp_comm_overlap config file.")
            ub_cfgs = None  # fall back to the default communicator settings
    else:
        ub_cfgs = None
    te_module.initialize_ub(
        shape=input_shape,
        tp_size=self.cfg.get('tensor_model_parallel_size'),
        use_fp8=self.cfg.get('fp8'),
        ub_cfgs=ub_cfgs,
    )
    self.initialize_ub = False
```
Can this go in the `.setup` method then? (since it is only called once at the beginning of training)
Also, can we make it a private method and then call it?
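A minimal sketch of the refactor the two comments above suggest, assuming a Lightning-style `setup` hook; the method name `_init_ub_communicators` and the call site are hypothetical:

```python
# Hypothetical refactor sketch; the method name and placement are assumptions.
def _init_ub_communicators(self):
    """Initialize userbuffer communicators once, before training starts."""
    input_shape = [
        self.cfg.get('encoder_seq_length') * self.cfg.get('micro_batch_size'),
        self.cfg.get('hidden_size'),
    ]
    ub_cfgs = None
    ub_cfg_file_name = self.cfg.get('ub_tp_comm_overlap_cfg', None)
    if ub_cfg_file_name is not None:
        try:
            import yaml

            with open(ub_cfg_file_name, 'r') as ub_cfg_file:
                ub_cfgs = yaml.safe_load(ub_cfg_file)
        except (ImportError, TypeError):
            logging.warning("Failed to read the ub_tp_comm_overlap config file; using defaults.")
    te_module.initialize_ub(
        shape=input_shape,
        tp_size=self.cfg.get('tensor_model_parallel_size'),
        use_fp8=self.cfg.get('fp8'),
        ub_cfgs=ub_cfgs,
    )

def setup(self, stage=None):
    super().setup(stage)
    if self.cfg.get('ub_tp_comm_overlap', False):
        self._init_ub_communicators()
```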
To add: I'm not a fan of how we're importing the whole file from TE (which isn't a part of the API).
Overall looks good, just a few minor comments.
Also, do we need new apex and te commits for this PR?
Signed-off-by: Sangkug Lym <slym@nvidia.com>
LGTM. Thanks!
* add interfaces for tp_communication overlap
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Interface to provide custom userbuffer communicator settings by yaml file
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Jenkinsfile

Signed-off-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
What does this PR do?
Add (1) interfaces to TE and (2) process-group initialization to support tensor-parallel communication overlap with the userbuffer backend.
Changelog
Usage
Set `ub_tp_comm_overlap` to `True`.
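For example, in a hypothetical model config (the key placement and the `sequence_parallel` requirement are inferred from the log message earlier in this thread, so treat them as assumptions):

```yaml
# Hypothetical NeMo config snippet; exact key placement is assumed.
model:
  sequence_parallel: true        # the overlap path is tied to sequence parallelism
  ub_tp_comm_overlap: true
  ub_tp_comm_overlap_cfg: null   # optionally, a path to a userbuffer config yaml
```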
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.
Additional Information