[TFTRT]: Revert "Merge pull request #55655" #55922

Conversation

@meena-at-work (Contributor) commented May 4, 2022

[TFTRT]: Revert the device placement changes, as they cause
issues on graphs with nodes that are not runnable on GPUs.

This reverts commit 0842ad1, reversing changes made to dd89837.

CC: @bixia1, @DEKHTIARJonathan, @nluehr
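For context, a minimal sketch (not from this PR) of the failure mode the revert addresses: TF-TRT conversion of a SavedModel whose graph contains an Assert node, whose string Const message inputs have no GPU kernel. The toy model, paths, and FP32 precision setting are illustrative assumptions; running it requires a TensorRT-enabled TensorFlow build with a visible GPU.

```python
# Hypothetical repro sketch: a graph containing ops with no GPU kernel
# (the string Const inputs feeding an Assert node).
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

class ToyModel(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        # The batch dimension is dynamic, so this emits an Assert op whose
        # message inputs are string Const ops (CPU-only kernels).
        tf.debugging.assert_positive(tf.shape(x)[0], message="empty batch")
        return tf.nn.relu(x)

model = ToyModel()
tf.saved_model.save(
    model, "/tmp/toy_saved_model",
    signatures=model.__call__.get_concrete_function())

# With the reverted change, every node was pinned to GPU:0 by default and the
# string Const nodes failed placement during conversion (see the log further
# down). With this revert, the conversion is expected to succeed again.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="/tmp/toy_saved_model",
    precision_mode=trt.TrtPrecisionMode.FP32)
converter.convert()
```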

google-ml-butler bot added the size:M (CL Change Size: Medium) label on May 4, 2022
@DEKHTIARJonathan (Contributor) commented:
@bixia1 please merge this PR ASAP to correct the bug previously mentioned over email.
Thanks

google-ml-butler bot added the kokoro:force-run (Tests on submitted change) and ready to pull (PR ready for merge process) labels on May 4, 2022
kokoro-team removed the kokoro:force-run (Tests on submitted change) label on May 4, 2022
@meena-at-work (Contributor, Author) commented:
this PR "assigns GPU:0" to be default device of the whole graph, which is perfectly fine as long as all OPs are compatible on GPU. In case the graph has ops that don't run on GPUs, tftrt conversion fails. This needs to be addressed.

2022-05-04 14:46:05.313250: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'Const' OpKernel for GPU devices compatible with node {{node MultipleGridAnchorGenerator/assert_equal/Assert/Assert/data_4}}
     (OpKernel was found, but attributes didn't match) Requested Attributes: dtype=DT_STRING, value=Tensor<type: string shape: [] values: y (MultipleGridAnchorGenerator/add_23:0) = >, _device="/job:localhost/replica:0/task:0/device:GPU:0"
   Registered:  device='XLA_CPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 2189089607411394171, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
  device='XLA_GPU_JIT'; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 2189089607411394171, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_STRING]
  device='DEFAULT'; dtype in [DT_VARIANT]
  device='DEFAULT'; dtype in [DT_BOOL]
  device='DEFAULT'; dtype in [DT_QUINT16]
  device='DEFAULT'; dtype in [DT_QINT16]
  device='DEFAULT'; dtype in [DT_QINT32]
  device='DEFAULT'; dtype in [DT_QUINT8]
  device='DEFAULT'; dtype in [DT_QINT8]
  device='DEFAULT'; dtype in [DT_COMPLEX128]
  device='DEFAULT'; dtype in [DT_COMPLEX64]
  device='DEFAULT'; dtype in [DT_INT8]
  device='DEFAULT'; dtype in [DT_UINT8]
  device='DEFAULT'; dtype in [DT_INT16]
  device='DEFAULT'; dtype in [DT_UINT16]
  device='DEFAULT'; dtype in [DT_UINT32]
  device='DEFAULT'; dtype in [DT_INT64]
  device='DEFAULT'; dtype in [DT_UINT64]
  device='DEFAULT'; dtype in [DT_DOUBLE]
  device='DEFAULT'; dtype in [DT_FLOAT]
  device='DEFAULT'; dtype in [DT_BFLOAT16]
  device='DEFAULT'; dtype in [DT_HALF]
  device='DEFAULT'; dtype in [DT_INT32]
  device='XLA_CPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, 1081042544580029293, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
  device='XLA_GPU'; dtype in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, 1081042544580029293, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
  device='GPU'; dtype in [DT_VARIANT]
  device='GPU'; dtype in [DT_BOOL]
  device='GPU'; dtype in [DT_COMPLEX128]
  device='GPU'; dtype in [DT_COMPLEX64]
  device='GPU'; dtype in [DT_UINT64]
  device='GPU'; dtype in [DT_INT64]
  device='GPU'; dtype in [DT_QINT32]
  device='GPU'; dtype in [DT_UINT32]
  device='GPU'; dtype in [DT_QUINT16]
  device='GPU'; dtype in [DT_QINT16]
  device='GPU'; dtype in [DT_INT16]
  device='GPU'; dtype in [DT_UINT16]
  device='GPU'; dtype in [DT_QINT8]
  device='GPU'; dtype in [DT_INT8]
  device='GPU'; dtype in [DT_UINT8]
  device='GPU'; dtype in [DT_DOUBLE]
  device='GPU'; dtype in [DT_FLOAT]
  device='GPU'; dtype in [DT_BFLOAT16]
  device='GPU'; dtype in [DT_HALF]
  device='CPU'
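The underlying placement behavior can be reproduced outside of TF-TRT. Below is a minimal sketch (not from this PR) that forces a string Const onto GPU:0 with soft device placement disabled, which triggers the same "no registered OpKernel for GPU devices" class of failure; the exact exception type may vary across TF versions.

```python
# Hypothetical sketch: string Const ops have only CPU kernels, so pinning one
# to GPU:0 without soft placement fails, like the Assert data nodes above.
import tensorflow as tf

tf.config.set_soft_device_placement(False)  # do not silently fall back to CPU

@tf.function
def string_const_on_gpu():
    with tf.device("/device:GPU:0"):
        return tf.constant("string Const has CPU-only kernels")

try:
    string_const_on_gpu()
except tf.errors.OpError as e:  # typically InvalidArgumentError from the placer
    print("Placement failed:", e)
```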

copybara-service bot merged commit fb4c2b9 into tensorflow:master on May 4, 2022