
Request to adapt to Ascend NPU #76

Open
qyliuAI opened this issue Mar 25, 2024 · 0 comments


Feature Request

Thanks for your great work. I have tried running this model on an Ascend NPU device and found that it runs perfectly once the following two lines of code are added.

import torch_npu  # registers the Ascend NPU backend with PyTorch
from torch_npu.contrib import transfer_to_npu  # transparently redirects CUDA calls to the NPU

Considering that more and more users may have NPU devices, I hope this model can natively support selecting an NPU device to run on. I can submit the corresponding code after thorough self-testing, so I would like to know whether you are willing to accept this adaptation. Looking forward to your reply.
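To illustrate the kind of change being requested, here is a minimal sketch of automatic device selection that prefers an Ascend NPU when torch_npu is importable and falls back to CUDA and then CPU. The function name and fallback order are my own assumptions for illustration, not part of this project's API:

```python
def select_device():
    """Return a device string, preferring NPU, then CUDA, then CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"  # PyTorch not installed; nothing to select
    try:
        # torch_npu is Huawei's PyTorch adapter; importing it registers
        # the NPU backend, and transfer_to_npu redirects CUDA calls.
        import torch_npu  # noqa: F401
        from torch_npu.contrib import transfer_to_npu  # noqa: F401
        if torch.npu.is_available():
            return "npu"
    except ImportError:
        pass  # torch_npu not installed; fall through to CUDA/CPU
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"
```

Callers could then write `model.to(select_device())` without any hardware-specific branching at the call site.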

Background

Ascend is a full-stack AI computing infrastructure for industry applications and services based on Huawei Ascend processors and software. For more information about Ascend, see Ascend Community.

CANN (Compute Architecture for Neural Networks), developed by Huawei, is a heterogeneous computing architecture for AI.

PyTorch has officially announced support for the Ascend NPU (through the PrivateUse1 dispatch key); please see the PrivateUse1 tutorial here. We also have our own PyTorch adapter, called torch_npu.
