
tf.linalg.pinv also for complex matrices #44658

Open

zaccharieramzi opened this issue Nov 6, 2020 · 3 comments
Labels: comp:ops (OPs related issues) · stat:awaiting tensorflower (Status - Awaiting response from tensorflower) · type:feature (Feature requests)

Comments

@zaccharieramzi (Contributor)


System information

  • TensorFlow version (you are using): 2.3
  • Are you willing to contribute it (Yes/No): Yes

Describe the feature and the current behavior/state.

Right now, as stated in the docs, the pinv operator accepts only float-like matrices.
It would be nice to have it accept complex matrices as well, like numpy does.
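For illustration, a minimal sketch of the gap (a hypothetical snippet; the exact error may differ across TF versions):

import numpy as np
import tensorflow as tf

a = np.array([[1 + 1j, 2 - 1j], [3 + 0j, 4 + 2j]], dtype=np.complex64)

np.linalg.pinv(a)  # numpy handles complex matrices

# tf.linalg.pinv applies a float-only dtype check, so on TF 2.3 the
# following raises a TypeError:
# tf.linalg.pinv(tf.constant(a))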

Will this change the current api? How?

No

Who will benefit from this feature?

People working with complex matrices.
I typically work with MRI data which is naturally complex, but I guess this can be applied to other domains.

Any Other info.

I checked the latest tf.experimental.numpy, and pinv is absent there. Maybe the complex-capable pinv would be a better fit there; I don't know.

@zaccharieramzi zaccharieramzi added the type:feature Feature requests label Nov 6, 2020
@zaccharieramzi zaccharieramzi changed the title tf.linalg.pinv also complex matrices tf.linalg.pinv also for complex matrices Nov 6, 2020
@zaccharieramzi (Contributor, Author) commented Nov 6, 2020

So it's possible to define pinv for complex matrices using the same structure as the currently implemented pinv, simply by getting rid of the dtype check.

You can check this in this colab, with a test against the numpy implementation.
I am not sure it works in all edge cases, so I do not guarantee anything.

I am willing to submit this as a PR if you think changing the dtype check is enough.

For reference, here is the implementation:

import numpy as np
import tensorflow as tf


def pinv(a, rcond=None):
    """Moore-Penrose pseudo-inverse that also accepts complex matrices.

    Adapted from
    https://github.com/tensorflow/tensorflow/blob/v2.3.1/tensorflow/python/ops/linalg/linalg_impl.py
    with the float-only dtype check removed.
    """
    dtype = a.dtype.as_numpy_dtype

    if rcond is None:
        def get_dim_size(dim):
            # Use the static dimension if known, else the dynamic shape.
            dim_val = a.shape[dim]
            if dim_val is not None:
                return dim_val
            return tf.shape(a)[dim]

        num_rows = get_dim_size(-2)
        num_cols = get_dim_size(-1)
        if isinstance(num_rows, int) and isinstance(num_cols, int):
            max_rows_cols = float(max(num_rows, num_cols))
        else:
            max_rows_cols = tf.cast(tf.maximum(num_rows, num_cols), dtype)
        # For complex dtypes, np.finfo reports the epsilon of the
        # corresponding real type, which is what we want here.
        rcond = 10. * max_rows_cols * np.finfo(dtype).eps

    rcond = tf.convert_to_tensor(rcond, dtype=dtype, name='rcond')

    # Calculate the pseudo-inverse via SVD.
    # Note: if `a` is Hermitian then u == v. (We might gain additional
    # performance by explicitly setting `v = u` in such cases.)
    [
        singular_values,  # Sigma (always real-valued, even for complex `a`)
        left_singular_vectors,  # U
        right_singular_vectors,  # V
    ] = tf.linalg.svd(a, full_matrices=False, compute_uv=True)

    # Saturate small singular values to inf. This has the effect of making
    # `1. / s = 0.` while not producing `NaN` gradients. The cast drops the
    # (zero) imaginary part of `rcond` for complex inputs, since the
    # singular values are real. Likewise, the `inf` fill value must use the
    # real dtype of the singular values; `np.array(np.inf, dtype)` would be
    # complex for complex `a` and make `tf.where` fail on mismatched dtypes.
    cutoff = (tf.cast(rcond, dtype=singular_values.dtype)
              * tf.reduce_max(singular_values, axis=-1))
    singular_values = tf.where(
        singular_values > cutoff[..., None], singular_values,
        np.array(np.inf, singular_values.dtype.as_numpy_dtype))

    # By the definition of the SVD, `a == u @ s @ v^H`, so the
    # pseudo-inverse is `pinv(a) == v @ inv(s) @ u^H`.
    a_pinv = tf.matmul(
        right_singular_vectors
        / tf.cast(singular_values[..., None, :], dtype=dtype),
        left_singular_vectors,
        adjoint_b=True)

    if a.shape is not None and a.shape.rank is not None:
        a_pinv.set_shape(a.shape[:-2].concatenate([a.shape[-1], a.shape[-2]]))

    return a_pinv
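
As a quick sanity check, one can compare against numpy's implementation, in the spirit of the colab test mentioned above (a hypothetical snippet, not from the original comment):

rng = np.random.default_rng(0)
a = (rng.standard_normal((4, 3))
     + 1j * rng.standard_normal((4, 3))).astype(np.complex64)

# Tolerances chosen loosely for complex64 precision.
np.testing.assert_allclose(
    pinv(tf.constant(a)).numpy(),
    np.linalg.pinv(a),
    rtol=1e-4, atol=1e-5)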

@Saduf2019 Saduf2019 added the comp:ops OPs related issues label Nov 9, 2020
@Saduf2019 Saduf2019 assigned jvishnuvardhan and unassigned Saduf2019 Nov 9, 2020
@jvishnuvardhan jvishnuvardhan added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Nov 9, 2020
@Ryandry1st

I see this has been waiting for two years... any chance of an update? Since it was first raised, even more complex-valued datasets can take advantage of TF, and the documentation suggests pinv should work exactly like the numpy implementation, yet it still does not support complex dtypes.

Honestly not sure why this was never adopted or at least commented on.

@Ryandry1st commented Feb 2, 2024

I went looking for this functionality and lo and behold, I was the last one to comment on it over a year ago, yet it's still missing. Is there any way to get this raised to a priority fast enough to get it solved in under 4 years? @jvishnuvardhan
