
How to use torch-directml with Transformers' AutoModel and AutoTokenizer? #464

@fenixlam

Description


As far as I know, some PyTorch operations can be used with torch-directml.
But I haven't found any examples of calling Transformers' AutoModel and AutoTokenizer with torch-directml.
I want to ask whether they can run on a DirectML GPU.
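Below is a minimal sketch of the pattern I have in mind, assuming torch_directml.device() returns the DirectML device and that the model's operators are supported by the backend; the model name is only a placeholder:

```python
import torch
import torch_directml
from transformers import AutoModel, AutoTokenizer

dml = torch_directml.device()  # default DirectML device

model_name = "bert-base-uncased"  # placeholder model, not specific to this issue
tokenizer = AutoTokenizer.from_pretrained(model_name)  # tokenizer runs on CPU
model = AutoModel.from_pretrained(model_name).to(dml)  # move weights to DirectML

inputs = tokenizer("Hello from DirectML!", return_tensors="pt")
inputs = {k: v.to(dml) for k, v in inputs.items()}  # move input tensors too

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)
```

Is this the expected way to do it, or does AutoModel need anything special to target a DirectML device?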

Labels: pytorch-directml (Issues in PyTorch when using its DirectML backend)