Optimizing scatter_nd_* for complex tensors #40672
Labels: comp:ops (OPs related issues), stat:awaiting tensorflower (Status - Awaiting response from tensorflower), type:feature (Feature requests)
System information
Describe the feature and the current behavior/state.
Currently, as noted in #40605, the scatter_nd_* functions are extremely slow. This is even more pronounced when they are used on complex tensors, as this Colab notebook illustrates.
A simple hack, consisting of treating the real and imaginary parts separately, yields roughly a 20x speedup.
At the very least, this hack could be implemented at the Python level of the op, a part to which I am definitely willing to contribute.
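The hack described above could be sketched as follows. This is a minimal illustration, not the proposed implementation; the wrapper name `scatter_nd_complex` is hypothetical:

```python
import tensorflow as tf

def scatter_nd_complex(indices, updates, shape):
    # Hypothetical wrapper: scatter the real and imaginary parts
    # separately (both are plain float tensors, so they hit the
    # fast path), then recombine them into a complex tensor.
    real = tf.scatter_nd(indices, tf.math.real(updates), shape)
    imag = tf.scatter_nd(indices, tf.math.imag(updates), shape)
    return tf.complex(real, imag)

indices = tf.constant([[0], [2]])
updates = tf.constant([1 + 2j, 3 - 1j], dtype=tf.complex64)
result = scatter_nd_complex(indices, updates, tf.constant([4]))
```

Since `tf.math.real` and `tf.math.imag` are cheap views of the data, the overhead of the two extra ops is small compared to the gain on the scatter itself.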
Will this change the current api? How?
No
Who will benefit with this feature?
Everyone using complex tensors: for example, people working in sound processing and MRI, though this list is surely not exhaustive.
Any Other info.
I don't know whether this is due to eager execution, and I am not sure how best to profile these kinds of issues in graph mode.
Even if this is eager-related, I think it still deserves a fix.
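One way to check whether the slowdown is eager-specific is to time the same op both eagerly and wrapped in `tf.function`. A rough micro-benchmark sketch (sizes and iteration counts are arbitrary assumptions):

```python
import timeit
import tensorflow as tf

# Random complex updates scattered into a length-10000 vector.
indices = tf.random.uniform([1000, 1], maxval=10000, dtype=tf.int32)
updates = tf.complex(tf.random.normal([1000]), tf.random.normal([1000]))
shape = tf.constant([10000])

def eager_scatter():
    return tf.scatter_nd(indices, updates, shape)

# Same computation compiled into a graph.
graph_scatter = tf.function(eager_scatter)
graph_scatter()  # trace once so compilation isn't included in the timing

t_eager = timeit.timeit(eager_scatter, number=100)
t_graph = timeit.timeit(lambda: graph_scatter(), number=100)
print(f"eager: {t_eager:.3f}s  graph: {t_graph:.3f}s")
```

If the graph-mode timing is comparable to the eager one, the cost is in the kernel itself rather than in eager dispatch overhead.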