
Eigenvalues derivatives and status of the project #191

Open
giacomo-b opened this issue Oct 29, 2021 · 11 comments
@giacomo-b

First of all, keep up the great work! I think this library is awesome and I'd like to contribute one day.

I have a matrix M whose entries depend on one or more input parameters, and I am computing the eigenvalues of M using spectra. What I would like to achieve using autodiff is the computation of the derivatives of these eigenvalues w.r.t. the input parameters (basically, their sensitivities). Since spectra supports custom types I thought it might be possible to have it work seamlessly with autodiff.

The problem I am facing is that I don't know:

  1. if this is possible using autodiff
  2. assuming it can be done, which types I should use to achieve this.

I went through the tutorials but didn't find anything similar.

The steps are:

  1. Choose a parameter p
  2. Compute a matrix M as a function of p
  3. Compute the eigenvalues eigs of M (done using spectra)
  4. Compute the derivatives of eigs w.r.t. p (which of course go through the computation of M and the eigensolver)
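For reference, step 4 also has a classical closed form (standard first-order eigenvalue perturbation theory, independent of autodiff) that can serve as a cross-check: for a simple (non-repeated) eigenvalue λ of M(p), with right eigenvector x and left eigenvector y,

```latex
% First-order sensitivity of a simple eigenvalue \lambda of M(p),
% with right eigenvector x and left eigenvector y:
\frac{\partial \lambda}{\partial p}
  \;=\;
  \frac{y^{H}\,\dfrac{\partial M}{\partial p}\,x}{\,y^{H} x\,}
```

Whatever autodiff produces should agree with this formula; it breaks down only for repeated eigenvalues, where λ(p) may not be differentiable.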

Simple example for one input parameter and a 2x2 matrix:

// Given a parameter p, compute M(p)
// Should probably return an autodiff type (MatrixXreal?) and take a const ref as argument
Eigen::MatrixXd M(double p)
{
    Eigen::MatrixXd output(2, 2); // dynamic matrices must be sized before operator<<
    output << 2 * p,  p * p,
              4 * p,    p;
    return output;
}

// Use spectra to compute the eigenvalues
Eigen::VectorXd eigs = ... // The eigenvalues should probably be saved in a VectorXreal

// Compute sensitivities using autodiff (d(eigs) / dp)
// ?

Do you think this is possible? If so, could you please show me how or point me to some related examples?

Also, quick question: what is the current status of the project? I noticed that the release version is not 1.0 yet, so I was wondering if there are any specific features in the pipeline that are currently not supported but you think are worth mentioning.

Thanks!

@allanleal
Member

Hi @giacomo-b ,

This should be possible. I've started creating an example here, but noticed that some things will need to be implemented (e.g., Eigen uses std::isfinite on the number type of the matrix when evaluating eigenvalues, and this is causing compilation errors).

Please try the idea below (it does not compile, but I think you should be able to implement missing parts to get it working):

// C++ includes
#include <iostream>
#include <complex>
using namespace std;

// Eigen includes
#include <Eigen/Eigenvalues>

// autodiff include
#include <autodiff/forward/dual.hpp>
#include <autodiff/forward/dual/eigen.hpp>
using namespace autodiff;

using cxdual = complex<dual>;

MatrixXdual assembleMatrix(dual p, dual q)
{
    return MatrixXdual({
        { 2.0*p, p*q },
        { 4.0*q, p*q }
    });
}

int main()
{
    dual p = 1.0;
    dual q = 1.0;

    MatrixXdual M;

    seed(p);
    M = assembleMatrix(p, q);
    unseed(p);

    auto lambdas_p = M.eigenvalues(); // for each complex a + ib in lambdas_p, a.grad and b.grad are ∂a/∂p and ∂b/∂p

    seed(q);
    M = assembleMatrix(p, q);
    unseed(q);

    auto lambdas_q = M.eigenvalues(); // for each complex a + ib in lambdas_q, a.grad and b.grad are ∂a/∂q and ∂b/∂q
}

As to v1.0, this should be happening soon!

@giacomo-b
Author

Hi @allanleal,

Thanks a lot for the detailed answer, I will try to implement this asap and will get back to you!

@allanleal
Member

allanleal commented Oct 29, 2021 via email

@mattarroz

mattarroz commented Nov 17, 2021

Hi @allanleal,

thumbs up for autodiff, great piece of software!

This should be possible. I've started creating an example here, but noticed that some things will need to be implemented (e.g., Eigen uses std::isfinite on the number type of the matrix when evaluating eigenvalues, and this is causing compilation errors).

I added a T cast operator (acting like the usual cast to double) to the Real class (see mattarroz@3a71122). With that operator, std::isfinite works on Real, and as a consequence eigenvalue/eigenvector (forward) derivative computation using Eigen also works. If you think it is a good idea, I can create a PR.

Edit: sorry, forgot to push the latest version

@allanleal
Member

Hi @mattarroz , a PR for this would be great! Thanks a lot for this.

Does the same change you made for Real also fix eigenvalue/eigenvector computations with Dual?

@mattarroz

mattarroz commented Nov 17, 2021

For the Dual version, I made this change. To get the program you posted working, I also had to change assembleMatrix to

MatrixXdual assembleMatrix(dual p, dual q)
{
  MatrixXdual ret(2,2);
  ret <<  2.0*p, p*q , 4.0*q, p*q;
  return ret;
}

It compiles and runs, but I haven't checked whether the numbers are correct. For the Real version, I'm already using the derivatives in a quasi-Newton algorithm, so it should be fine. Created #193.

Edit: Pushed the commit instead of just writing about it.

@giacomo-b
Author

Hey @mattarroz, I cloned your fork but this is still not working for me, even using Real.

The following doesn't compile:

#include <Eigen/Eigenvalues>
#include <autodiff/forward/real.hpp>
#include <autodiff/forward/real/eigen.hpp>

using namespace autodiff;

MatrixXreal assembleMatrix(real p) {
    return MatrixXreal({{2.0 * p, p * p},
                        {4.0 * p, p}});
}

int main() {
    real p = 1.0;

    MatrixXreal M;

    seed(p);
    M = assembleMatrix(p);
    unseed(p);

    auto lambdas_p = M.eigenvalues();
}

The error is due to the eigensolver: error: no type named ‘__type’ in ‘struct __gnu_cxx::__enable_if<false, bool>’.

Am I missing something? Thanks!

@mattarroz

mattarroz commented Nov 19, 2021

The error is due to the eigensolver: error: no type named ‘__type’ in ‘struct __gnu_cxx::__enable_if<false, bool>’.

Am I missing something? Thanks!

With the cast operator added and the assembleMatrix method changed as described in the previous post, it compiles for me for both real and dual. I am using Eigen 3.3.9 and have tried Clang 11.0.1-2 and GCC 10.2.1, both with the C++17 standard. Can you post the full compiler output? For your convenience, I pushed the changes needed to compile and run with dual to a new branch: https://github.com/mattarroz/autodiff/tree/feature/RealAndDualEigenvalues. Keep in mind that this is a hack; as said before, I am not sure it is correct for dual.

Here is the full program I compiled for real:

// C++ includes
#include <iostream>

#include <Eigen/Eigenvalues>
#include <autodiff/forward/real.hpp>
#include <autodiff/forward/real/eigen.hpp>

using namespace autodiff;

MatrixXreal assembleMatrix(real p) {
  MatrixXreal ret(2,2);
  ret << 2.0 * p, p * p, 4.0 * p, p;
  return ret;
}

MatrixXreal assembleMatrixDiagonal(real p) {
  MatrixXreal ret(2,2);
  ret << p, 0, 0, p;
  return ret;
}

int main() {
  real p = 1.0;

  MatrixXreal M;

  seed(p);
  M = assembleMatrix(p);
  unseed(p);

  auto lambdas_p = M.eigenvalues();
  std::cout << "Lambdas_p:\n" << lambdas_p << std::endl;


  real x = 3.141;
  real u;
  auto grad = ::gradient ([](const real &p_x) {
    const MatrixXreal M = assembleMatrix (p_x);
    return M.eigenvalues ()(0,0).real();
  }, wrt(x), at(x), u);

  std::cout << "Lambdas_p(0,0):\n" << u << std::endl;
  std::cout << "dLambdas_p(0,0)/dp: \n" << grad << std::endl;

  grad = ::gradient ([](const real &p_x) {
    const MatrixXreal M = assembleMatrixDiagonal(p_x);
    return M.eigenvalues ()(0,0).real();
  }, wrt(x), at(x), u);

  std::cout << "Lambdas_p(0,0):\n" << u << std::endl;
  std::cout << "dLambdas_p(0,0)/dp: \n" << grad << std::endl;
}

Output:

Lambdas_p:
  (3.56155,0)
(-0.561553,0)
Lambdas_p(0,0):
15.9552
dLambdas_p(0,0)/dp: 
6.83458
Lambdas_p(0,0):
3.141
dLambdas_p(0,0)/dp: 
1

Edit: Cleaned up the lambda expression a little

@giacomo-b
Author

@mattarroz thank you for taking the time, this is very helpful! It is now working, I am not sure what was going on before (did you change anything on the new branch?)

You mentioned that you haven't validated the results, right? If so, I can proceed and do that.

@mattarroz

@mattarroz thank you for taking the time, this is very helpful! It is now working, I am not sure what was going on before (did you change anything on the new branch?)

No big deal, you're welcome! I didn't change anything except for removing the explicit keyword from the cast operator in Dual.

You mentioned that you haven't validated the results, right? If so, I can proceed and do that.

That would be great!

@giacomo-b
Author

Awesome, I'll post an update here as soon as I get that done. Thanks again!
