
[enhancement]: Bypassing uncomputability issues #127

Open
wlad-svennik opened this issue Mar 27, 2022 · 1 comment
Labels
enhancement New feature or request

Comments

@wlad-svennik

Feature description

I'm new to interval computing. I'm a PhD student at Imperial College London. I have a suggestion, and I'm wondering how you handle this.

Certain operations on matrices are uncomputable. For instance, eigendecomposition is not computable even for "nice" matrices like symmetric ones. What "uncomputable" means in practice is that forward numerical stability is not attainable. Backward numerical stability may still be attainable, and it can suffice for certain use cases. Unfortunately, interval arithmetic in the naive sense cannot handle backward stability, but this limitation can be bypassed.

My suggestion concerns eigendecomposition. The API I propose has three functions:

  • eigendecomp : Mat(Complex) -> (Mat(Complex), DiagMat(Complex))
  • inv_eigendecomp : (Mat(Complex), DiagMat(Complex)) -> Mat(Complex)
  • lift : (Complex -> Complex) -> ((Mat(Complex), DiagMat(Complex)) -> (Mat(Complex), DiagMat(Complex)))

eigendecomp(M) should produce a pair of matrices (P, D) such that P is exact (and not interval-valued!) and D is interval-valued. P and D should be chosen so that P D P^-1 contains M as snugly as possible. The actual eigenvalues and eigenvectors of M don't matter, because the eigenvectors of M in particular are not computable.

I also suggest that the purpose of eigendecomposition here is to lift functions of type Complex -> Complex to complex matrices. I suspect that the actual eigenvalues and eigenvectors are perhaps not entirely relevant in practice.
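To make the proposal concrete, here is a minimal Python sketch of eigendecomp and lift for the special case of a symmetric 2x2 real matrix. Everything here is an illustrative assumption, not part of the proposal itself: the toy Interval class, the eps padding (which stands in for a rigorous error bound), and the restriction of lift to monotone increasing scalar functions. A real implementation would need directed rounding to make the enclosure rigorous.

```python
import math

class Interval:
    """Toy closed interval; a real implementation would use
    directed (outward) rounding on every operation."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __contains__(self, x):
        return self.lo <= x <= self.hi
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def eigendecomp(M, eps=1e-9):
    """For a symmetric 2x2 matrix M = [[a, b], [b, c]], return (P, D):
    P an exact (non-interval) rotation matrix and D a list of interval
    diagonal entries, padded by eps so that P D P^-1 approximately
    encloses M."""
    (a, b), (_, c) = M
    theta = 0.5 * math.atan2(2 * b, a - c)   # Jacobi rotation angle
    ct, st = math.cos(theta), math.sin(theta)
    P = [[ct, -st], [st, ct]]
    # Approximate eigenvalues via P^T M P; pad to absorb rounding error.
    l1 = a * ct * ct + 2 * b * ct * st + c * st * st
    l2 = a * st * st - 2 * b * ct * st + c * ct * ct
    D = [Interval(l1 - eps, l1 + eps), Interval(l2 - eps, l2 + eps)]
    return P, D

def lift(f):
    """Lift a scalar function f to (P, D) pairs by mapping interval
    endpoints. Mapping endpoints gives an enclosure only for monotone
    increasing f; this is purely for illustration."""
    def lifted(P, D):
        return P, [Interval(f(d.lo), f(d.hi)) for d in D]
    return lifted

# Example: [[2, 1], [1, 2]] has eigenvalues 3 and 1.
P, D = eigendecomp([[2.0, 1.0], [1.0, 2.0]])
P2, E = lift(math.exp)(P, D)   # E encloses exp of each eigenvalue
```

Note that lift never touches P: applying a function to the matrix only rescales the (interval) diagonal, which is what makes the proposal attractive even though the individual eigenvectors are not computable.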

@lucaferranti
Member

Thank you for your interest in the project, and sorry for the long delay in answering; very busy week :D

I think your idea sounds very interesting, and it would make sense to include something like that in the package (maybe the names of the API can be negotiated :) ). I'll try to summarize your proposal here to make sure I understand correctly, and ask a few clarifying questions.

If I understand correctly, you are proposing that, given a matrix M, we find a non-interval matrix P and an interval diagonal matrix D so that M is contained in P*D*P^{-1}. I think this would be interesting to explore, and it might have some practical uses, e.g. bounding functions of matrices.

  • Do I understand correctly that the advantage of your proposed approach is that you are relaxing the problem, since you are not requiring P to bound the eigenvectors? Would computing this factorization be more efficient than verify_eigen or the approach proposed in the paper in [enhancement]: spectral decomposition of interval matrices #116? I have a couple of scenarios in mind where this could be useful.

  • I could not tell from your description whether the matrix M would be a matrix of floats (single points) or of intervals. Which one did you have in mind? These lead to slightly different problem formulations:

    • find P, D so that M ∈ PDP⁻¹ (starting from floating point)
    • find P, D so that M ⊆ PDP⁻¹ (starting from interval)
    • The reason I'm asking is that while the latter is more general (algorithms that work for it also work for the former), if you can assume that the starting point is a matrix of floats (or very small intervals), then you can generally find more performant algorithms. So I think it could be good to separate the two cases, and benchmarking only the former first might be an easier start.
  • In your proposal you mention computing P⁻¹. As you probably know, matrix inversion cannot be computed exactly in floating point, so if you want to work with intervals and rigorous computations, there are generally two options:

    • Compute P⁻¹ rigorously: in the factorization, P would be a matrix of floats, D an interval matrix, and P⁻¹ an interval matrix guaranteed to contain the true inverse of P.
    • Slightly modify the problem: find P, D, R so that M ∈ PDR, where D is an interval matrix and P, R are not. Then an approximate inverse of P would probably be good enough.
  • Did you have in mind an algorithm for determining P and D? Can you link some papers I should read to better understand this issue?
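The second option (M ∈ PDR with R an approximate, non-interval inverse of P) can be checked with ordinary interval arithmetic. Below is a hedged Python illustration under stated assumptions: the Interval class uses plain floating point rather than directed rounding, the pad value stands in for a rigorous error bound, and the example matrix is chosen so that R = P^T is a good approximate inverse. It is a sketch of the containment check, not a verified algorithm.

```python
import math

class Interval:
    """Toy interval arithmetic (no directed rounding)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    @staticmethod
    def _coerce(x):
        return x if isinstance(x, Interval) else Interval(x, x)
    def __add__(self, other):
        o = Interval._coerce(other)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    __radd__ = __add__
    def __mul__(self, other):
        o = Interval._coerce(other)
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))
    __rmul__ = __mul__
    def __contains__(self, x):
        return self.lo <= x <= self.hi

def matmul(A, B):
    """Matrix product that mixes floats and Intervals."""
    n = len(A)
    return [[sum((A[i][k] * B[k][j] for k in range(n)), Interval(0.0, 0.0))
             for j in range(n)] for i in range(n)]

# M = [[2, 1], [1, 2]] has eigenvalues 3 and 1 with an orthogonal
# eigenbasis, so the transpose of the rotation P is a good
# approximate (non-interval) inverse R.
M = [[2.0, 1.0], [1.0, 2.0]]
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
P = [[c, -s], [s, c]]                      # floats, not intervals
R = [[c, s], [-s, c]]                      # approximate inverse (P^T)
pad = 1e-8                                 # stand-in for a rigorous bound
D = [[Interval(3 - pad, 3 + pad), Interval(0.0, 0.0)],
     [Interval(0.0, 0.0), Interval(1 - pad, 1 + pad)]]

PDR = matmul(matmul(P, D), R)
# Elementwise containment check: is M inside the interval matrix PDR?
ok = all(M[i][j] in PDR[i][j] for i in range(2) for j in range(2))
```

The check succeeds here only because the pad on D is much wider than the floating-point error in P, R, and the products; making that bound rigorous is exactly where directed rounding (or a verified inverse of P) would come in.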

If you have further questions / comments, do not hesitate to ping me.
