[fix][OSS] adding an assert for empty shards + corresponding unit test #406
Changes from 2 commits
```diff
@@ -234,8 +234,7 @@ def test_add_param_group():
     if torch.cuda.is_available() and torch.cuda.device_count() < world_size:
         world_size = min(world_size, torch.cuda.device_count())
 
-    temp_file_name = tempfile.mkstemp()[1]
-    mp.spawn(run_test_add_param_group, args=(world_size, temp_file_name), nprocs=world_size, join=True)
+    mp.spawn(run_test_add_param_group, args=(world_size, tempfile.mkstemp()[1]), nprocs=world_size, join=True)
 
 
 def run_test_zero_grad(rank, world_size, tempfile_name):
```
```diff
@@ -263,6 +262,21 @@ def test_zero_grad():
     mp.spawn(run_test_zero_grad, args=(world_size, temp_file_name), nprocs=world_size, join=True)
 
 
+def run_test_catch_empty_shard(rank, world_size, tempfile_name):
+    dist_init(rank, world_size, tempfile_name, backend="gloo")
+    m = torch.nn.Linear(1, 1)
+    with pytest.raises(AssertionError):
+        _ = optim.OSS(m.parameters(), lr=0.1)
+
+    dist.destroy_process_group()
+
+
+def test_empty_shard():
+    world_size = 4
+
+    mp.spawn(run_test_catch_empty_shard, args=(world_size, tempfile.mkstemp()[1]), nprocs=world_size, join=True)
+
+
 def run_test_step(rank, world_size, tempfile_name):
     dist_init(rank, world_size, tempfile_name, backend="gloo")
     x = torch.tensor([float(rank + 1)], device=rank)
```

Reviewer comment on the new test: nice test
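For context on why `Linear(1, 1)` with four processes trips the new assert: OSS shards parameter tensors across ranks, so a model with fewer parameter tensors than ranks necessarily leaves some shards empty. Below is a minimal sketch of that failure mode; the greedy size-based `partition_params` helper is hypothetical and only mimics the idea of OSS's partitioning, not its actual implementation.

```python
import torch

def partition_params(params, world_size):
    """Greedily assign each tensor to the lightest shard (illustrative only)."""
    shards = [[] for _ in range(world_size)]
    sizes = [0] * world_size
    for p in params:
        lightest = sizes.index(min(sizes))  # shard with the fewest elements so far
        shards[lightest].append(p)
        sizes[lightest] += p.numel()
    return shards

m = torch.nn.Linear(1, 1)  # exposes only two tensors: weight and bias
shards = partition_params(list(m.parameters()), world_size=4)
print([len(s) for s in shards])  # [1, 1, 0, 0] -> two empty shards
# An assert like the one this PR adds would fire here:
assert all(len(s) > 0 for s in shards), "Empty shard: more ranks than parameter tensors"
```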
In the second changed file:

```diff
@@ -37,7 +37,7 @@ def _test_basic_func(rank, world_size, tempfile_name, test_case, oss, model=None
     _dist_init(rank, world_size, tempfile_name, backend="nccl")
 
     if model is None:
-        model = Linear(2, 2, bias=False)
+        model = Linear(2, 2)
 
     model.to("cuda")
     model = DDP(model, device_ids=[rank])
```

Reviewer: you need the bias or otherwise there isn't enough params?
Author: yes, exactly
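A quick way to see the parameter-count issue (my own check, assuming the two-rank setup used elsewhere in these tests): without a bias the model exposes a single parameter tensor, so sharding it across two ranks would leave one shard empty and trip the assert this PR introduces.

```python
import torch

# bias=False -> one parameter tensor (weight only); default -> weight + bias
print(len(list(torch.nn.Linear(2, 2, bias=False).parameters())))  # 1
print(len(list(torch.nn.Linear(2, 2).parameters())))              # 2
```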
```diff
@@ -65,7 +65,9 @@ def _test_basic_func(rank, world_size, tempfile_name, test_case, oss, model=None
         optim.zero_grad()
 
     if "expected_gain" in test_case:
-        assert np.allclose(optim.gain(), test_case["expected_gain"]), optim.gain()
+        assert np.allclose(optim.gain(), test_case["expected_gain"]), "{} vs {}".format(
+            optim.gain(), test_case["expected_gain"]
+        )
 
     if "expected_mean_weight" in test_case:
         mean_weight = mean([model.module[i].weight.data.mean().item() for i in range(4)])
```
Review thread on the updated `expected_gain` value of 4.0/3:

Author: @min-xu-ai @mikerabbat checking with you that this is ok. Since AdaScale is not changed by this PR, I assumed that the current state was correct.

Reviewer: Why is 4.0/3 the new value? Maybe if you init the bias to 0, the value here won't change? The original value of 2 is because we have two grads from two ranks that are completely independent. It must be that the grads from the bias point in the same direction now, hence 4 grads but 3 directions. So it is fine.

Author: I tried setting the bias to zero, but the returned expected gain is still 1.3333.

Reviewer: That's fine. I think the new value makes sense. Keeping it at 4.0/3 would be good.
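To make the 4.0/3 arithmetic concrete, here is a stylized numeric check. It is my own sketch, not AdaScale's implementation: it uses the simplified ratio E[||g_i||^2] / ||mean_i g_i||^2 in place of AdaScale's smoothed estimator, and assumes the shared (bias-like) component carries the same energy as each rank's independent component.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # large dimension so sample averages approximate expectations

def gain(g):
    """Simplified gain: mean squared per-rank norm over squared norm of the mean."""
    return (g ** 2).sum(axis=1).mean() / (g.mean(axis=0) ** 2).sum()

# Two ranks with fully independent gradients: 2 grads, 2 directions -> gain ~ 2
independent = rng.standard_normal((2, n))
print(round(gain(independent), 2))  # ~2.0

# Add a component shared by both ranks (the reviewer's "grads from the bias
# point in the same direction"): 4 grads but 3 directions -> (1+1)/(1/2+1) = 4/3
shared = rng.standard_normal(n)
print(round(gain(independent + shared), 2))  # ~1.33
```

Under these assumptions the shared direction, not the bias initialization, sets the value, which is consistent with the author's observation that zero-initializing the bias still yields 1.3333.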