Stable expm #352

Open · matthagan15 wants to merge 17 commits into master from stable_expm

Conversation

@matthagan15 commented Jan 24, 2023

Two new files are added: one is the source for the standard expm routine used by MATLAB and SciPy, and the other tests this function against random sparse and dense matrices. There is currently a noticeable per-entry error in the dense-matrix regime that scales with the dimension of the underlying matrix, and I have been unable to find its source. I have ruled out my implementation, since it is accurate to f64 precision on 1-sparse matrices, indicating the algorithm is correct but something goes wrong for dense matrices. I'm unsure whether it is simply my machine (an Apple M1, but I doubt it, since SciPy shows a similar error, though much smaller) or the BLAS/LAPACK library I am using (the built-in Apple Accelerate; I think NumPy uses OpenBLAS, but I couldn't get it configured on my machine). I think this error should be investigated by others: it is not crippling for matrices smaller than 100x100, but it becomes unusable for scientific (which is why I wrote this) or serious engineering purposes beyond 150x150 matrices. This caveat is mentioned directly in the expm docstring. I have more data on this error and can share it; I'm not sure what the best channel for that would be, though.

@ethanhs commented Feb 8, 2023

> the built-in Apple Accelerate; I think NumPy uses OpenBLAS, but I couldn't get it configured on my machine

FWIW accelerate seems to have a lot of bugs, which is why numpy doesn't use it anymore. I believe it uses an ancient version of lapack. If you upload a test I can check if it fails on Linux linking against openblas.

@matthagan15 (Author)

That would be much appreciated! I've been trying to get my system to build against OpenBLAS (installed via brew) or Intel MKL, but it doesn't seem to work, and the -src and -sys crates are hard to get working properly. If you clone my repo (git clone --branch stable_expm https://github.com/matthagan15/ndarray-linalg) and run the test expm::test_low_dimension_random_dense_matrix with --no-release --, it should print out a bunch of useful info. I also have a helper binary crate at https://github.com/matthagan15/expm_test; its build.rs and Cargo.toml are currently configured for Accelerate, I believe, so you'd have to change that. The expm_test binary also calls a Python script that relies on SciPy, in case you decide to run it. An alternative would be if anyone could help me figure out how to link against the Intel MKL library I have on my system, as I've been unable to get that working. Thanks for the info and any help!
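Spelled out, the repro I have in mind is roughly the following (the clone URL and test name are as above; the -- --nocapture flag is just my suggestion so the test's printed diagnostics show up, adjust as needed):

git clone --branch stable_expm https://github.com/matthagan15/ndarray-linalg
cd ndarray-linalg
# run just the dense-matrix test and show its printed output
cargo test test_low_dimension_random_dense_matrix -- --nocapture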

@ethanhs commented Feb 9, 2023

@matthagan15 I'm happy to try to help work through build errors. Ping me (@ethanhs:ocf.berkeley.edu) in the Rust science Matrix room and I can at least look at any build errors you are experiencing. I think I've had more luck with OpenBLAS, FWIW.

@ethanhs commented Feb 9, 2023

My results from running the expm_test:

<sparse output progress snipped>
dimensions: 1024
average diff norm per epsilon: 0.84546875
###########################################################################################################################
Random dense matrix test
collected 100 samples.
scaling factor: 1
dimensions: 200
diff norm per epsilon: 395469.5984029163 +- (321784.69794002804)
average entry error over epsilon: 1039.7824826961898

@matthagan15 (Author) commented Feb 9, 2023

Thank you so much! This is interesting. The "average entry error over epsilon" is the most important figure: it takes the matrix exponential computed via expm, subtracts the "known" value obtained from the eigenvalues, and then takes the average absolute value of this difference over all entries. Your run gives around 1039, while my test of SciPy with 200x200 matrices gives around 82.43558035907401, clearly a significant difference. However, this difference vanishes as the matrix becomes sparser: a 1-sparse matrix has an error of 0.845 per entry, which is about as good as you can get. My run with Accelerate gives around 1020, which is very similar to yours, so it does not seem to be a BLAS issue. I'm pretty stumped as to where the issue could be arising; I'll try to get more data on sparsity vs. one-norm (by scaling the matrix eigenvectors down) and post it here.
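For clarity, the metric is computed along these lines (a rough sketch; the function name and exact normalization here are just illustrative, not the test code verbatim):

use ndarray::Array2;
use num_complex::Complex64 as c64;

// Average |computed - reference| over all entries, reported in units of f64 machine epsilon.
// `reference` is the exponential reconstructed from the eigendecomposition of the input matrix.
fn avg_entry_error_over_epsilon(computed: &Array2<c64>, reference: &Array2<c64>) -> f64 {
    let diff_sum: f64 = computed
        .iter()
        .zip(reference.iter())
        .map(|(x, y)| (x - y).norm())
        .sum();
    (diff_sum / computed.len() as f64) / f64::EPSILON
}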

@matthagan15 (Author) commented Mar 27, 2023

Ok, sorry for the late follow-up. Testing dense matrices up to 1024x1024 reveals a one-norm difference (i.e., take the output of this expm, subtract the exact exponentiation via eigenvalues, and take the one-norm) of 3.5 × 10^-9, which is very good. For reference, this is only about 20x worse than SciPy at the same dimension. Given that the error is acceptable at this high a dimension, this should be sufficient for all but the most accuracy-critical computations, in which case I'd argue that using something better than 64-bit floats would be a smarter move anyway. If there is anything else the maintainers would like to see before merging, please let me know!
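The one-norm figure above is computed along these lines (a sketch, assuming the computed and reference matrices are already in hand; opnorm_one comes from this crate's OperationNorm trait, the helper name is mine):

use ndarray::Array2;
use ndarray_linalg::OperationNorm;
use num_complex::Complex64 as c64;

// One-norm of the difference between the expm output and the eigenvalue-based reference.
fn one_norm_diff(computed: &Array2<c64>, reference: &Array2<c64>) -> f64 {
    let diff: Array2<c64> = computed - reference;
    diff.opnorm_one().expect("one-norm computation failed")
}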

@AlistairKeiller commented Apr 14, 2023

@matthagan15, your fork does not build for me on the latest version of Rust, either as a dependency of a minimal Rust project or standalone, while the main ndarray-linalg does. Let me know if you cannot reproduce this.

@matthagan15 (Author)

I just tested on my device:

cargo new expm_test_build
cd expm_test_build
cargo add ndarray -F blas
cargo add ndarray-linalg
cargo add blas-src -F accelerate

Then, in Cargo.toml, I added git = "https://github.com/matthagan15/ndarray-linalg" and branch = "stable_expm" to the ndarray-linalg dependency line. It built just fine for me.
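Concretely, the ndarray-linalg entry in Cargo.toml ends up looking roughly like this (the other dependencies stay as cargo add wrote them):

ndarray-linalg = { git = "https://github.com/matthagan15/ndarray-linalg", branch = "stable_expm" }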

Could you share some of the build errors you get? It might help me debug if there's something wrong with the branch.

@AlistairKeiller

Odd, here is the error I get. Hopefully this is helpful:

error[E0428]: the name `expm` is defined multiple times
  --> ndarray-linalg/src/lib.rs:79:1
   |
59 | pub mod expm;
   | ------------- previous definition of the module `expm` here
...
79 | pub mod expm;
   | ^^^^^^^^^^^^^ `expm` redefined here
   |
   = note: `expm` must be defined only once in the type namespace of this module

error[E0603]: function `pade_approximation_13` is private
   --> ndarray-linalg/src/expm.rs:309:13
    |
309 |             pade_approximation_13, pade_approximation_3, pade_approximation_5,
    |             ^^^^^^^^^^^^^^^^^^^^^ private function
    |
note: the function `pade_approximation_13` is defined here
   --> ndarray-linalg/src/expm.rs:182:1
    |
182 | / fn pade_approximation_13<S: Scalar<Real = f64> + Lapack>(
183 | |     a_1: &Array2<S>,
184 | |     a_2: &Array2<S>,
185 | |     a_4: &Array2<S>,
...   |
215 | |     inverted.dot(&(odds + evens))
216 | | }
    | |_^

error[E0603]: function `pade_approximation_3` is private
   --> ndarray-linalg/src/expm.rs:309:36
    |
309 |             pade_approximation_13, pade_approximation_3, pade_approximation_5,
    |                                    ^^^^^^^^^^^^^^^^^^^^ private function
    |
note: the function `pade_approximation_3` is defined here
   --> ndarray-linalg/src/expm.rs:89:1
    |
89  | / fn pade_approximation_3<S: Scalar<Real = f64> + Lapack>(
90  | |     a_1: &Array2<S>,
91  | |     a_2: &Array2<S>,
92  | | ) -> Array2<S> {
...   |
104 | |     inverted.dot(&(odds + evens))
105 | | }
    | |_^

error[E0603]: function `pade_approximation_5` is private
   --> ndarray-linalg/src/expm.rs:309:58
    |
309 |             pade_approximation_13, pade_approximation_3, pade_approximation_5,
    |                                                          ^^^^^^^^^^^^^^^^^^^^ private function
    |
note: the function `pade_approximation_5` is defined here
   --> ndarray-linalg/src/expm.rs:107:1
    |
107 | / fn pade_approximation_5<S: Scalar<Real = f64> + Lapack>(
108 | |     a_1: &Array2<S>,
109 | |     a_2: &Array2<S>,
110 | |     a_4: &Array2<S>,
...   |
125 | |     inverted.dot(&(odds + evens))
126 | | }
    | |_^

error[E0603]: function `pade_approximation_7` is private
   --> ndarray-linalg/src/expm.rs:310:13
    |
310 |             pade_approximation_7, pade_approximation_9,
    |             ^^^^^^^^^^^^^^^^^^^^ private function
    |
note: the function `pade_approximation_7` is defined here
   --> ndarray-linalg/src/expm.rs:128:1
    |
128 | / fn pade_approximation_7<S: Scalar<Real = f64> + Lapack>(
129 | |     a_1: &Array2<S>,
130 | |     a_2: &Array2<S>,
131 | |     a_4: &Array2<S>,
...   |
150 | |     inverted.dot(&(odds + evens))
151 | | }
    | |_^

error[E0603]: function `pade_approximation_9` is private
   --> ndarray-linalg/src/expm.rs:310:35
    |
310 |             pade_approximation_7, pade_approximation_9,
    |                                   ^^^^^^^^^^^^^^^^^^^^ private function
    |
note: the function `pade_approximation_9` is defined here
   --> ndarray-linalg/src/expm.rs:153:1
    |
153 | / fn pade_approximation_9<S: Scalar<Real = f64> + Lapack>(
154 | |     a_1: &Array2<S>,
155 | |     a_2: &Array2<S>,
156 | |     a_4: &Array2<S>,
...   |
178 | |     inverted.dot(&(odds + evens))
179 | | }
    | |_^

warning: unused import: `std::ops::MulAssign`
 --> ndarray-linalg/src/expm.rs:1:5
  |
1 | use std::ops::MulAssign;
  |     ^^^^^^^^^^^^^^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: unused import: `types`
 --> ndarray-linalg/src/expm.rs:3:13
  |
3 | use crate::{types, Inverse, OperationNorm};
  |             ^^^^^

warning: unused import: `NormType`
 --> ndarray-linalg/src/expm.rs:5:19
  |
5 | use lax::{Lapack, NormType};
  |                   ^^^^^^^^

warning: unused imports: `Complex32 as c32`, `Complex64 as c64`, `Complex`
 --> ndarray-linalg/src/expm.rs:7:19
  |
7 | use num_complex::{Complex, Complex32 as c32, Complex64 as c64};
  |                   ^^^^^^^  ^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^

warning: unused imports: `Eig`, `OperationNorm`, `SVD`, `pade_approximation_13`, `pade_approximation_3`, `pade_approximation_5`, `pade_approximation_7`, `pade_approximation_9`
   --> ndarray-linalg/src/expm.rs:309:13
    |
309 |             pade_approximation_13, pade_approximation_3, pade_approximation_5,
    |             ^^^^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^^^
310 |             pade_approximation_7, pade_approximation_9,
    |             ^^^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^^^
311 |         },
312 |         Eig, OperationNorm, SVD,
    |         ^^^  ^^^^^^^^^^^^^  ^^^

warning: unused imports: `*`, `linalg::Dot`
   --> ndarray-linalg/src/expm.rs:314:19
    |
314 |     use ndarray::{linalg::Dot, *};
    |                   ^^^^^^^^^^^  ^

warning: unused imports: `Complex32 as c32`, `Complex64 as c64`, `ComplexFloat`, `Complex`
   --> ndarray-linalg/src/expm.rs:315:23
    |
315 |     use num_complex::{Complex, Complex32 as c32, Complex64 as c64, ComplexFloat};
    |                       ^^^^^^^  ^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^

warning: unused import: `rand::Rng`
   --> ndarray-linalg/src/expm.rs:316:9
    |
316 |     use rand::Rng;
    |         ^^^^^^^^^

warning: unused imports: `collections::HashMap`, `fs::File`, `io::Read`, `str::FromStr`
   --> ndarray-linalg/src/expm.rs:317:15
    |
317 |     use std::{collections::HashMap, fs::File, io::Read, str::FromStr};
    |               ^^^^^^^^^^^^^^^^^^^^  ^^^^^^^^  ^^^^^^^^  ^^^^^^^^^^^^

warning: unused import: `super::expm`
   --> ndarray-linalg/src/expm.rs:319:9
    |
319 |     use super::expm;
    |         ^^^^^^^^^^^

warning: unused import: `Zip`
 --> ndarray-linalg/src/normest1.rs:1:22
  |
1 | use std::iter::{zip, Zip};
  |                      ^^^

warning: unused imports: `Inverse`, `OperationNorm`, `ensure_no_parallel_columns`, `is_column_parallel`, `ndarray::ShapeBuilder`, `prepare_x_matrix`
   --> ndarray-linalg/src/normest1.rs:228:9
    |
228 |         ndarray::ShapeBuilder,
    |         ^^^^^^^^^^^^^^^^^^^^^
229 |         normest1::{ensure_no_parallel_columns, is_column_parallel, prepare_x_matrix},
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^
230 |         Inverse, OperationNorm,
    |         ^^^^^^^  ^^^^^^^^^^^^^

warning: unused import: `ndarray::Array2`
   --> ndarray-linalg/src/normest1.rs:232:9
    |
232 |     use ndarray::Array2;
    |         ^^^^^^^^^^^^^^^

warning: unused imports: `Rng`, `thread_rng`
   --> ndarray-linalg/src/normest1.rs:233:16
    |
233 |     use rand::{thread_rng, Rng};
    |                ^^^^^^^^^^  ^^^

warning: unused imports: `check_if_s_parallel_to_s_old`, `normest`
   --> ndarray-linalg/src/normest1.rs:235:17
    |
235 |     use super::{check_if_s_parallel_to_s_old, normest};
    |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^  ^^^^^^^

warning: unused imports: `Eig`, `OperationNorm`, `SVD`
   --> ndarray-linalg/src/expm.rs:312:9
    |
312 |         Eig, OperationNorm, SVD,
    |         ^^^  ^^^^^^^^^^^^^  ^^^

warning: unused import: `Pow`
 --> ndarray-linalg/src/expm.rs:8:30
  |
8 | use num_traits::{real::Real, Pow};
  |                              ^^^

warning: unused import: `statistics::Statistics`
  --> ndarray-linalg/src/expm.rs:12:5
   |
12 |     statistics::Statistics,
   |     ^^^^^^^^^^^^^^^^^^^^^^

warning: unused import: `real::Real`
 --> ndarray-linalg/src/expm.rs:8:18
  |
8 | use num_traits::{real::Real, Pow};
  |                  ^^^^^^^^^^

warning: unused import: `linalg::Dot`
 --> ndarray-linalg/src/expm.rs:6:15
  |
6 | use ndarray::{linalg::Dot, prelude::*};
  |               ^^^^^^^^^^^

warning: unused import: `num_complex::ComplexFloat`
 --> ndarray-linalg/src/normest1.rs:4:5
  |
4 | use num_complex::ComplexFloat;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^

Some errors have detailed explanations: E0428, E0603.
For more information about an error, try `rustc --explain E0428`.
warning: `ndarray-linalg` (lib) generated 21 warnings
error: could not compile `ndarray-linalg` due to 6 previous errors; 21 warnings emitted

@matthagan15 (Author)

Ah, you are building from the master branch, not the stable_expm branch that I put in the pull request. Did I set up the pull request incorrectly?

@AlistairKeiller

Yep, you are right. My bad. Ya, the stable_expm branch looks good to me.

@matthagan15 (Author)

Wonderful! If the maintainers (I'm not sure who that would be) have any concerns, please let me know what needs to be changed to get this merged.
