fix mlu device longTensor bugs (#2887)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful!

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the AcceleratorState

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead

* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* fix mlu longtensor.to() bugs.

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
huismiling and muellerzr committed Jul 3, 2024
1 parent eac206f commit fec1170
Showing 1 changed file with 0 additions and 3 deletions.
3 changes: 0 additions & 3 deletions src/accelerate/utils/operations.py
@@ -151,9 +151,6 @@ def send_to_device(tensor, device, non_blocking=False, skip_keys=None):
             device = "npu:0"
         if device == "xpu":
             device = "xpu:0"
-        # TODO: torch_mlu LongTensor.to(<int num>) has bugs, we will fix this later.
-        if is_torch_tensor(tensor) and tensor.device.type in ["mlu"] and tensor.dtype in [torch.int64]:
-            tensor = tensor.cpu()
         try:
             return tensor.to(device, non_blocking=non_blocking)
         except TypeError:  # .to() doesn't accept non_blocking as kwarg
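For context, the three deleted lines were a workaround that detoured torch.int64 (LongTensor) tensors through the CPU before moving them to an MLU device, because earlier torch_mlu builds mishandled LongTensor.to(). With that bug fixed, send_to_device can move every dtype directly. Below is a minimal sketch of the resulting fast path; the helper name send_to_device_sketch is hypothetical, and targeting "mlu" assumes a PyTorch build with Cambricon MLU support (torch_mlu):

import torch

def send_to_device_sketch(tensor, device, non_blocking=False):
    # Hypothetical, simplified version of the fast path after this commit:
    # int64 tensors on MLU no longer take a detour through the CPU.
    if not hasattr(tensor, "to"):
        return tensor  # non-tensor inputs pass through unchanged
    try:
        return tensor.to(device, non_blocking=non_blocking)
    except TypeError:  # some tensor-like .to() implementations lack non_blocking
        return tensor.to(device)

# Usage (device "mlu:0" only exists on an MLU-enabled build; use "cpu" elsewhere):
# ids = torch.arange(8, dtype=torch.int64)
# ids = send_to_device_sketch(ids, "mlu:0")

The try/except fallback mirrors the real function's tolerance for tensor-like objects whose .to() does not accept a non_blocking keyword.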
