
Conversation

jeejakp12
Contributor

NumPy array is chosen as the rebuild component for
HPU, so add it to the backend list.

Signed-off-by: Ayman Yousef <ayousef@habana.ai>
Signed-off-by: Jeeja <jeejakp@habana.ai>

Fixes #ISSUE_NUMBER

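For context, the change described above amounts to adding 'hpu' to the set of device types whose tensors are serialized by round-tripping through a NumPy array. A minimal sketch of that idea follows; the list name, helper function, and exact placement are assumptions for illustration, not taken from this PR's diff:

import torch

# Hypothetical list of backends whose tensors are pickled by first moving
# the data to a CPU NumPy array, then rebuilt on the original device at
# load time. The PR adds 'hpu' to such a list.
_NUMPY_REBUILD_DEVICE_TYPES = ['xla', 'hpu']  # contents are assumed

def reduce_tensor_for_pickle(tensor):
    # For these backends, torch.save() goes through NumPy because their
    # storages cannot be pickled directly.
    if tensor.device.type in _NUMPY_REBUILD_DEVICE_TYPES:
        return (torch._utils._rebuild_device_tensor_from_numpy,
                (tensor.cpu().numpy(), tensor.dtype,
                 str(tensor.device), tensor.requires_grad))
    # Other devices use the regular rebuild path (omitted in this sketch).
    raise NotImplementedError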
@facebook-github-bot
Contributor

facebook-github-bot commented Mar 25, 2022


💊 CI failures summary and remediations

As of commit f72a888 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI (expand for details).

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@mrshenli mrshenli requested a review from albanD March 27, 2022 19:08
@mrshenli mrshenli added the triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) and module: backend (non-standard backend support) labels Mar 27, 2022
@albanD
Collaborator

albanD commented Mar 29, 2022

Would it be possible to have a test for this? To make sure it works as expected and doesn't break in the future?

@jeejakp12
Contributor Author

Would it be possible to have a test for this? To make sure it works as expected and doesn't break in the future?

@albanD the backend tests are not in the fork, and for CPU or GPU this test is not valid. Any suggestion on where to add it? Below is the test that reproduces the issue for HPU backend tensors (torch.save causes the problem):
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP
from habana_frameworks.torch.utils.library_loader import load_habana_module
load_habana_module()
import habana_frameworks.torch.core.hccl

def example(rank, world_size, use_zero):
    torch.manual_seed(0)

    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29500'
    # create default process group
    dist.init_process_group("hccl", rank=rank, world_size=world_size)
    rank = torch.distributed.get_rank()
    # create local model
    model = nn.Sequential(*[nn.Linear(2000, 2000).to('hpu') for _ in range(2)])

    # construct DDP model
    ddp_model = DDP(model)

    # define loss function and optimizer
    loss_fn = nn.MSELoss()
    if use_zero:
        optimizer = ZeroRedundancyOptimizer(
            ddp_model.parameters(),
            optimizer_class=torch.optim.Adam,
            lr=0.01
        )
    else:
        # plain optimizer, so the script also runs with use_zero=False
        optimizer = optim.Adam(ddp_model.parameters(), lr=0.01)

    # forward pass
    outputs = ddp_model(torch.randn(20, 2000).to('hpu'))
    labels = torch.randn(20, 2000).to('hpu')
    # backward pass
    loss_fn(outputs, labels).backward()

    # update parameters
    optimizer.step()
    if use_zero:
        optimizer.consolidate_state_dict()
    print(f"params sum is: {sum(model.parameters()).sum()}")

def main():
    world_size = 2
    print("=== Using ZeroRedundancyOptimizer ===")
    mp.spawn(example,
             args=(world_size, True),
             nprocs=world_size,
             join=True)

if __name__ == "__main__":
    main()
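For reference, the failing operation can also be isolated outside of ZeroRedundancyOptimizer. A minimal sketch that exercises the same torch.save path, assuming an HPU-enabled PyTorch build and the same habana_frameworks module load as in the script above:

import io
import torch
from habana_frameworks.torch.utils.library_loader import load_habana_module
load_habana_module()

# torch.save pickles the tensor; without HPU in the NumPy-rebuild backend
# list, this serialization step is what fails.
t = torch.randn(4, 4).to('hpu')
buf = io.BytesIO()
torch.save(t, buf)

# Round-trip back to verify the tensor is rebuilt on the HPU device.
buf.seek(0)
loaded = torch.load(buf)
print(loaded.device, torch.allclose(loaded.cpu(), t.cpu()))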

@albanD
Collaborator

albanD commented Mar 31, 2022

Well, if there is no CI config that supports this, then this is OK, and your testing will handle it.

@albanD
Collaborator

albanD commented Mar 31, 2022

@pytorchbot merge this please

@github-actions
Contributor

Hey @jeejakp12.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Apr 2, 2022
Summary:
NumPy array is chosen as the rebuild component for
HPU, so add it to the backend list.

Signed-off-by: Ayman Yousef<ayousef@habana.ai>
Signed-off-by: Jeeja <jeejakp@habana.ai>

Fixes #ISSUE_NUMBER

Pull Request resolved: #74738
Approved by: https://github.com/albanD

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/88096253ef3f04f9a6211ce958d2401e925e5d2f

Reviewed By: malfet, atalman

Differential Revision: D35317405

fbshipit-source-id: daa87627ab0028b672591368715e22f06a942aa5
