tf.linalg.pinv also for complex matrices #44658
Comments
So it's possible to define the pseudo-inverse for complex matrices too. You can check this in this colab, with a test against the `numpy` implementation. However, I am willing to just submit this as a PR if you think it's enough to just change the existing `pinv`. For reference, here is the implementation:

```python
import numpy as np
import tensorflow as tf


def pinv(a, rcond=None):
  """Adapted from
  https://github.com/tensorflow/tensorflow/blob/v2.3.1/tensorflow/python/ops/linalg/linalg_impl.py
  """
  dtype = a.dtype.as_numpy_dtype
  if rcond is None:

    def get_dim_size(dim):
      dim_val = a.shape[dim]
      if dim_val is not None:
        return dim_val
      return tf.shape(a)[dim]

    num_rows = get_dim_size(-2)
    num_cols = get_dim_size(-1)
    if isinstance(num_rows, int) and isinstance(num_cols, int):
      max_rows_cols = float(max(num_rows, num_cols))
    else:
      max_rows_cols = tf.cast(tf.maximum(num_rows, num_cols), dtype)
    rcond = 10. * max_rows_cols * np.finfo(dtype).eps
  rcond = tf.convert_to_tensor(rcond, dtype=dtype, name='rcond')
  # Calculate pseudo inverse via SVD.
  # Note: if a is Hermitian then u == v. (We might observe additional
  # performance by explicitly setting `v = u` in such cases.)
  [
      singular_values,         # Sigma
      left_singular_vectors,   # U
      right_singular_vectors,  # V
  ] = tf.linalg.svd(a, full_matrices=False, compute_uv=True)
  # Saturate small singular values to inf. This has the effect of making
  # `1. / s = 0.` while not resulting in `NaN` gradients.
  cutoff = tf.cast(rcond, dtype=singular_values.dtype) * tf.reduce_max(
      singular_values, axis=-1)
  # For complex `a` the singular values are still real, so the `inf`
  # sentinel must use their (real) dtype rather than `dtype`.
  singular_values = tf.where(
      singular_values > cutoff[..., None], singular_values,
      np.array(np.inf, singular_values.dtype.as_numpy_dtype))
  # By the definition of the SVD, `a == u @ s @ v^H`, and the pseudo-inverse
  # is defined as `pinv(a) == v @ inv(s) @ u^H`.
  a_pinv = tf.matmul(
      right_singular_vectors / tf.cast(
          singular_values[..., None, :], dtype=dtype),
      left_singular_vectors,
      adjoint_b=True)
  if a.shape is not None and a.shape.rank is not None:
    a_pinv.set_shape(a.shape[:-2].concatenate([a.shape[-1], a.shape[-2]]))
  return a_pinv
```
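As a sanity check of the SVD-based formula `pinv(a) == v @ inv(s) @ u^H` on complex inputs, here is a minimal numpy-only sketch (no TensorFlow required, and no `rcond` cutoff, which a well-conditioned random matrix doesn't need):

```python
import numpy as np

rng = np.random.default_rng(0)
# A random complex 4x3 matrix.
a = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# SVD-based pseudo-inverse: pinv(a) = v @ diag(1/s) @ u^H.
# Dividing the columns of v by s implements v @ diag(1/s).
u, s, vh = np.linalg.svd(a, full_matrices=False)
a_pinv = (vh.conj().T / s) @ u.conj().T

# Agrees with numpy's reference implementation...
assert np.allclose(a_pinv, np.linalg.pinv(a))
# ...and satisfies the defining Moore-Penrose condition a @ pinv(a) @ a == a.
assert np.allclose(a @ a_pinv @ a, a)
```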
I see this has been waiting for two years... any chance of an update? Since it was first raised, there are now even more complex-valued datasets that could take advantage of TF, and the documentation even suggests that it should work exactly like the numpy implementation, yet it does not support complex dtypes at the moment. Honestly not sure why this was never adopted or at least commented on.
I went looking for this functionality and lo and behold, I was the last one to comment on it over a year ago, yet it's still missing. Is there any way to get this raised to a priority fast enough to get it solved in under 4 years? @jvishnuvardhan
Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.
System information
Describe the feature and the current behavior/state.
Right now, as stated in the docs, the `pinv` operator accepts only float-like matrices. It would be nice to have it accept complex matrices as well, like `numpy` does.
Will this change the current api? How?
No
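One reason the SVD route extends cleanly to complex inputs: the singular values of a complex matrix are always real and non-negative, so the float-based `rcond` cutoff logic carries over unchanged. A minimal numpy illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# A random complex 3x3 matrix.
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
s = np.linalg.svd(a, compute_uv=False)

# Singular values of a complex matrix are real and non-negative,
# so a real-valued cutoff like `rcond * max(s)` still applies.
assert s.dtype == np.float64
assert np.all(s >= 0)
```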
Who will benefit with this feature?
People working with complex matrices.
I typically work with MRI data which is naturally complex, but I guess this can be applied to other domains.
Any Other info.
I checked in the latest `tf.experimental.numpy`, and it's absent. But maybe the complex-supporting `pinv` would fit better there, I don't know.