This repository was archived by the owner on Mar 30, 2022. It is now read-only.

Eager tensors always report being on CPU Device despite documentation #524


Description

@garymm

I'm playing with https://www.tensorflow.org/swift/tutorials/introducing_x10. Both locally and on Colab, the eager tensor shows up on the CPU. The text says, "If you are running this notebook on a GPU-enabled instance, you should see that hardware reflected in the device description above."
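For reference, this is roughly the shape of the tutorial snippet I'm running (a minimal sketch, not the exact notebook code):

import TensorFlow

// Create an eager tensor without specifying a device, as the tutorial does.
let eagerTensor = Tensor([0.0, 1.0, 2.0])
print(eagerTensor.device)
// Both locally and on a GPU-enabled Colab instance this reports a
// .CPU / .TF_EAGER device, not the GPU the documentation says to expect.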

Even if I try to force it to the GPU, it seems to stay on the CPU:

import TensorFlow

// Explicitly request the eager GPU device.
let eagerGPU = Device(kind: .GPU, ordinal: 0, backend: .TF_EAGER)
let eagerTensor1 = Tensor([0.0, 1.0, 2.0], on: eagerGPU)
let eagerTensor2 = Tensor([1.5, 2.5, 3.5], on: eagerGPU)
let eagerTensorSum = eagerTensor1 + eagerTensor2
// Check where the tensor actually ended up.
eagerTensor1.device

Output:

▿ Device(kind: .CPU, ordinal: 0, backend: .TF_EAGER)
  - kind : TensorFlow.Device.Kind.CPU
  - ordinal : 0
  - backend : TensorFlow.Device.Backend.TF_EAGER

So I'd say there may be two bugs here:

  1. Either the documentation is wrong and eager tensors are only supposed to run on the CPU, or the documentation is right and the code is buggy and never uses the GPU; and
  2. If the documentation is wrong, creating a tensor on an eager GPU device should fail rather than silently running on the CPU (see the sketch after this list).
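For what it's worth, even a runtime check like the one below would at least make the mismatch visible instead of letting it pass silently. This is a rough sketch, not a proposed fix; it assumes Device.Kind can be compared with ==, which holds for a simple enum.

import TensorFlow

let requested = Device(kind: .GPU, ordinal: 0, backend: .TF_EAGER)
let t = Tensor([0.0, 1.0, 2.0], on: requested)

// Fail loudly instead of silently running on the CPU.
precondition(t.device.kind == requested.kind,
             "Tensor placed on \(t.device), not the requested \(requested)")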
