
[Feature] Possibility to pass non_blocking attribute to convert_tensor #231

@vfdev-5

Description


Following the torch docs on CUDA memory management, if the user configures the DataLoader with pin_memory=True, then
asynchronous GPU copies can be used simply by passing an additional non_blocking=True argument to a cuda() call or to the tensor method .to().

In ignite, device placement is done with the utils method convert_tensor, which accepts a device argument (cpu, cuda or cuda:N). It would be great to also be able to pass non_blocking as a keyword argument.
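
A minimal sketch of how the signature could look, assuming the flag is simply forwarded to Tensor.to and that the helper recurses over lists, tuples and dicts of tensors (the recursion details here are illustrative, not the exact current implementation):

```python
import torch


def convert_tensor(input_, device=None, non_blocking=False):
    """Move a tensor (or a collection of tensors) to the given device.

    Sketch: ``non_blocking`` is forwarded to ``Tensor.to`` so that
    pinned-memory batches can be copied to the GPU asynchronously.
    """
    if torch.is_tensor(input_):
        return input_.to(device=device, non_blocking=non_blocking)
    elif isinstance(input_, (list, tuple)):
        return type(input_)(convert_tensor(x, device, non_blocking) for x in input_)
    elif isinstance(input_, dict):
        return {k: convert_tensor(v, device, non_blocking) for k, v in input_.items()}
    return input_
```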

In addition, this argument should also be propagated up to create_supervised_trainer and _prepare_batch.

PS: it is okay to call x.to('cpu', non_blocking=True); no error is raised.
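
A sketch of how the flag could be threaded through _prepare_batch into create_supervised_trainer; the update-function body is only the usual supervised training step and is meant as an illustration of the propagation, not as the final implementation:

```python
from ignite.engine import Engine


def _prepare_batch(batch, device=None, non_blocking=False):
    """Prepare a (x, y) batch: move both tensors to the target device."""
    x, y = batch
    return (convert_tensor(x, device=device, non_blocking=non_blocking),
            convert_tensor(y, device=device, non_blocking=non_blocking))


def create_supervised_trainer(model, optimizer, loss_fn, device=None, non_blocking=False):
    """Factory for a supervised trainer; ``non_blocking`` is forwarded to ``_prepare_batch``."""

    def _update(engine, batch):
        model.train()
        optimizer.zero_grad()
        x, y = _prepare_batch(batch, device=device, non_blocking=non_blocking)
        y_pred = model(x)
        loss = loss_fn(y_pred, y)
        loss.backward()
        optimizer.step()
        return loss.item()

    return Engine(_update)
```

The flag only pays off when the DataLoader is created with pin_memory=True, so the copy can overlap with computation; with non-pinned memory the call silently falls back to a synchronous copy.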
