
pyarpack: python binding based on Boost.Python.Numpy exposing C++ API. #238

Merged
merged 2 commits on Feb 26, 2020

Conversation

fghoussen
Collaborator

Pull request purpose

pyarpack: python binding based on Boost.Python.Numpy exposing C++ API.

Detailed changes proposed in this pull request

  • allow testing, on demand, sparse or dense matrices
  • allow testing, on demand, an iterative or direct solver
  • needs boost.python
  • cmake only (I'm not an autotools guru: maybe @turboencabulator could help here)

@coveralls

coveralls commented Feb 13, 2020

Pull Request Test Coverage Report for Build 1121

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • 81 unchanged lines in 6 files lost coverage.
  • Overall coverage increased (+0.7%) to 74.08%

Files with coverage reduction (file, new missed lines, %):
PARPACK/SRC/MPI/pslamch10.f 2 80.0%
PARPACK/SRC/MPI/pcnaup2.f 3 66.86%
PARPACK/SRC/MPI/pznaup2.f 3 66.86%
PARPACK/SRC/MPI/pzgetv0.f 19 47.17%
PARPACK/SRC/MPI/pcnaitr.f 27 58.7%
PARPACK/SRC/MPI/pznaitr.f 27 58.7%
Totals Coverage Status
Change from base Build 1092: 0.7%
Covered Lines: 16462
Relevant Lines: 22222
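
As a quick sanity check (not part of the PR), the reported overall percentage follows directly from the covered/relevant line counts above:

```python
# Sanity check of the Coveralls totals quoted above: 16462 covered lines
# out of 22222 relevant lines should yield the reported 74.08%.
covered, relevant = 16462, 22222
coverage_pct = round(100.0 * covered / relevant, 2)
print(coverage_pct)  # → 74.08
```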

💛 - Coveralls

@fghoussen
Collaborator Author

BTW, I still have a few things to push on this PR by the weekend.

@sylvestre
Contributor

Wow, I wasn't expecting that. This is great!
Could you please add some tests too?
Remove Python2 support
And add license headers in the files

CMakeLists.txt Outdated
@@ -18,6 +18,9 @@ endif ()
option(MPI "Enable parallel support" OFF)
option(ICB "Enable support for *[ae]upd_c with ISO_C_BINDING" OFF)
option(ICBEXMM "Enable support for matrix market example based on ICB" OFF)
option(PYTHON2 "Enable python2 support" OFF)
Contributor

Please remove python2 support

.travis.yml Outdated
cd /tmp && \
cd arpack-ng && \
git status && \
git log -2 && \
sed -e 's/mpirun /mpirun --allow-run-as-root --oversubscribe /' -i CMakeLists.txt && \
mkdir -p build && cd build && \
cmake -DEXAMPLES=ON -DMPI=ON -DICB=ON .. && \
cmake -DEXAMPLES=ON -DMPI=ON -DICB=ON -DPYTHON2=ON .. && \
Contributor

Python 2 is already dead and will be removed from the next Debian and Ubuntu releases...
Not sure it is a good use of our time :(

@fghoussen
Collaborator Author

Wow, I wasn't expecting that. This is great!

I did this for my own use (like ICB!), so I actually "just" pushed it... :D

Good to hear it sounds good to you (I wasn't sure you'd buy it).

AFK; I'll likely get back to this tonight or this weekend.

I use python2 in CI because boost-python and boost-numpy are packaged for python2, so I can apt-get install them and use them out of the box.

It works with python3 too, but then boost-python must be built for python3, so in CI I would need to add a wget https\...\boost.tar.gz, unpack it and build from source: I remember you prefer to avoid that to keep arpack-ng easy to maintain (which I totally understand).

I can also likely add CI for both python2 and python3: you decide.

Let me know.

And add license headers in the files

Licensing has never been very clear to me!... Spell out what you want me to do and I'll do it.

@sylvestre
Contributor

I have no interest in Python 2 support :)

Don't bother about the license, it is missing everywhere!

@fghoussen
Collaborator Author

fghoussen commented Feb 14, 2020

I have no interest in Python 2 support :)

OK, no problem for me ! :D

But for CI, I'd say libboost-python-dev and libboost-numpy-dev are compiled for python2 only and will likely not work with python3. As far as I remember, no libboost-python3-dev / libboost-numpy3-dev packages are available (not yet provided for python3), so I'll need to compile boost from source (wget, unzip, build): fine for me if it's fine for you. Not sure though: I'll check this out by the weekend.

Could you please add some tests too?

I needed to test the code as I was writing it, so I kept the scripts as tests; they serve both as tests and as simple getting-started examples:

 >> ls EXAMPLES/PYARPACK/pyarpack*.py.in
EXAMPLES/PYARPACK/pyarpackDenseLDLT.py.in  EXAMPLES/PYARPACK/pyarpackDenseQRPP.py.in       EXAMPLES/PYARPACK/pyarpackSparseCGDiag.py.in  EXAMPLES/PYARPACK/pyarpackSparseLU.py.in
EXAMPLES/PYARPACK/pyarpackDenseLLT.py.in   EXAMPLES/PYARPACK/pyarpackDenseQRRR.py.in       EXAMPLES/PYARPACK/pyarpackSparseCGILU.py.in   EXAMPLES/PYARPACK/pyarpackSparseQR.py.in
EXAMPLES/PYARPACK/pyarpackDenseLUPP.py.in  EXAMPLES/PYARPACK/pyarpackSparseBiCGDiag.py.in  EXAMPLES/PYARPACK/pyarpackSparseLDLT.py.in
EXAMPLES/PYARPACK/pyarpackDenseLURR.py.in  EXAMPLES/PYARPACK/pyarpackSparseBiCGILU.py.in   EXAMPLES/PYARPACK/pyarpackSparseLLT.py.in

@fghoussen
Collaborator Author

Rebased + finishing a few things (with python2, as it should work): when all is OK, I'll kill python2!

@fghoussen
Collaborator Author

fghoussen commented Feb 16, 2020

OK... ILP64 fails: the python feature is supported only by cmake so far, and cmake is not easy to configure for very specific cases like ILP64 (I prefer autotools, which just does what it's told; cmake often does things behind your back: looking at the traces, it seems it does not take BLAS_LIBRARIES and LAPACK_LIBRARIES as-is...).

I'll try a few tricks to get ILP64 working with cmake: if I can't get that to work, I'll drop ILP64! I can't afford to tune cmake for ILP64 across the whole CI: it could be endless...

@fghoussen
Collaborator Author

@sylvestre: I just hit an architecture problem. F90 tries to compile C: no way this could work!?... :D I will test a fix here. If the fix works, I'll rebase it to the bottom of the branch and open a PR for that change only (it's a big problem, so it's likely better to identify it with a dedicated commit).

@fghoussen
Collaborator Author

It seems there is yet another problem with ILP64... Just tedious...

FAIL: icb_arpack_cpp

====================

ERROR: iparam[4] 0, nev 9, info -3

terminate called after throwing an instance of 'std::domain_error'

  what():  Error inside ARPACK routines

This sounds suspiciously like #230?!... So I'll try to fix it as suggested there...

@fghoussen
Collaborator Author

If 9a0698f works, I'd say it will fix #230 too.

I compared icb_parpack_cpp (which works) to icb_arpack_cpp (which breaks): the only "real" difference is:

diff --git a/TESTS/icb_arpack_cpp.cpp b/TESTS/icb_arpack_cpp.cpp
index f80956a..cbb4263 100644
--- a/TESTS/icb_arpack_cpp.cpp
+++ b/TESTS/icb_arpack_cpp.cpp
@@ -29,7 +29,7 @@ void diagonal_matrix_vector_product(Real const* const x, Real* const y) {
 template<typename Real>
 void real_symmetric_runner(double const & tol_check) {
   a_int const N = 1000;
-  a_int const nev = 9;
+  a_int const nev = 3;
 
   a_int ncv = 2 * nev + 1;
   ncv *= 3; /*ease CV: increase space dimension*/
@@ -111,7 +111,7 @@ void diagonal_matrix_vector_product(std::complex<Real> const* const x,
 template<typename Real>
 void complex_symmetric_runner(double const & tol_check) {
   a_int const N = 1000;
-  a_int const nev = 9;
+  a_int const nev = 1;
 
   a_int ncv = 2 * nev + 1;
   ncv *= 3; /*ease CV: increase space dimension*/

I'd say computing "too many" eigenvalues with ILP64 makes things more unstable!?... No idea why, but it's the only difference I saw... So the solution would be to compute fewer eigenvalues to get a stable test.
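
For context (a sketch, independent of ARPACK): the operator in icb_arpack_cpp.cpp is the diagonal matrix diag(1, ..., N), so the nev largest-magnitude eigenvalues the test checks for are known in closed form, which is exactly the acceptance criterion d[i] == N - (nev - 1) + i seen in the diff:

```python
import numpy as np

# Same operator as the icb_arpack_cpp test: A = diag(1, 2, ..., N).
N, nev = 1000, 3
A = np.diag(np.arange(1, N + 1, dtype=float))

# nev largest-magnitude eigenvalues, ascending, as the test expects them.
d = np.sort(np.linalg.eigvalsh(A))[-nev:]

# Acceptance criterion from the diff: d[i] == N - (nev - 1) + i.
expected = np.array([N - (nev - 1) + i for i in range(nev)], dtype=float)
assert np.allclose(d, expected, atol=1e-5)
```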

@fghoussen
Collaborator Author

Looking more closely at icb_parpack_cpp and icb_arpack_cpp, I guess I found a nest of bugs!... F77 is always right, so ICB, C, C++ and python must follow it: it turns out the sneaky/devious F77 code defines iparam and ipntr with different sizes depending on the case... grep makes this "obvious":

>> git grep "integer[ ]*iparam" SRC/*[ae]upd*.f
SRC/cnaupd.f:      integer    iparam(11), ipntr(14)
SRC/cneupd.f:      integer    iparam(11), ipntr(14)
SRC/dnaupd.f:      integer    iparam(11), ipntr(14)
SRC/dneupd.f:      integer    iparam(11), ipntr(14)
SRC/dsaupd.f:      integer    iparam(11), ipntr(11)
SRC/dseupd.f:      integer    iparam(7), ipntr(11)
SRC/snaupd.f:      integer    iparam(11), ipntr(14)
SRC/sneupd.f:      integer    iparam(11), ipntr(14)
SRC/ssaupd.f:      integer    iparam(11), ipntr(11)
SRC/sseupd.f:      integer    iparam(7), ipntr(11)
SRC/znaupd.f:      integer    iparam(11), ipntr(14)
SRC/zneupd.f:      integer    iparam(11), ipntr(14)

>> git grep "integer[ ]*iparam" PARPACK/SRC/MPI/*[ae]upd*.f
PARPACK/SRC/MPI/pcnaupd.f:      integer    iparam(11), ipntr(14)
PARPACK/SRC/MPI/pcneupd.f:      integer    iparam(11), ipntr(14)
PARPACK/SRC/MPI/pdnaupd.f:      integer    iparam(11), ipntr(14)
PARPACK/SRC/MPI/pdneupd.f:      integer    iparam(11), ipntr(14)
PARPACK/SRC/MPI/pdsaupd.f:      integer    iparam(11), ipntr(11)
PARPACK/SRC/MPI/pdseupd.f:      integer    iparam(7), ipntr(11)
PARPACK/SRC/MPI/psnaupd.f:      integer    iparam(11), ipntr(14)
PARPACK/SRC/MPI/psneupd.f:      integer    iparam(11), ipntr(14)
PARPACK/SRC/MPI/pssaupd.f:      integer    iparam(11), ipntr(11)
PARPACK/SRC/MPI/psseupd.f:      integer    iparam(7), ipntr(11)
PARPACK/SRC/MPI/pznaupd.f:      integer    iparam(11), ipntr(14)
PARPACK/SRC/MPI/pzneupd.f:      integer    iparam(11), ipntr(14)

icb_parpack_cpp and icb_arpack_cpp use the "ds" flavor, which has different sizes than the others.

I'm pushing a change for that, to check whether it fixes the ILP64 problem. If so, it could also fix lots of problems all over the place, including #230.

As this is tricky: a double-check / code review from anyone else is welcome.
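
To summarize the grep output in one place (the sizes are copied straight from the Fortran declarations above; the helper name is mine, for illustration only):

```python
# (iparam, ipntr) lengths declared by the ARPACK drivers, per the grep above.
# Only the symmetric "eupd" routines (dseupd/sseupd) declare iparam(7); the
# symmetric drivers also declare ipntr(11) instead of ipntr(14).
ARRAY_SIZES = {
    ("symmetric", "aupd"): (11, 11),     # dsaupd / ssaupd
    ("symmetric", "eupd"): (7, 11),      # dseupd / sseupd
    ("nonsymmetric", "aupd"): (11, 14),  # [cdsz]naupd
    ("nonsymmetric", "eupd"): (11, 14),  # [cdsz]neupd
}

def iparam_len(flavor: str, phase: str) -> int:
    """Declared iparam length for a given driver family and phase."""
    return ARRAY_SIZES[(flavor, phase)][0]

def ipntr_len(flavor: str, phase: str) -> int:
    """Declared ipntr length for a given driver family and phase."""
    return ARRAY_SIZES[(flavor, phase)][1]
```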

@fghoussen
Collaborator Author

Now the computed eigenvalues are wrong; it's always cumbersome to tune the cavernous, dated thing that arpack is... really, this library should be rewritten from scratch... If somebody has a clue on how to tune arpack [iparam/ipntr? other?...] in the ds flavor, speak up...

@fghoussen
Collaborator Author

Only icb_parpack_cpp and icb_arpack_cpp consistently fail!? They are the only ones to use arpack.hpp or parpack.hpp. @sylvestre: I will try to mute them to see whether "all the others" work... or not!?

@fghoussen
Collaborator Author

OK!?... More and more problems keep showing up and getting mixed together along the way: this blocks what's behind (pyarpack).

@caliarim: AFAIR, you have (or had) some experience with arpack tuning!... Could you have a quick look and tell me if you see anything wrong in the icb_arpack_cpp.cpp tuning/parameters? To me it looks like the computation runs (aupd is OK), but the results cannot be retrieved from eupd (I always get 0.?!). The diff (made only of cosmetic changes, changes to ease the run [drop the float run, increase ncv], and changes related to the size of iparam/ipntr; to me all of these should do the same thing as before, just in a slightly different way) is:

diff --git a/TESTS/icb_arpack_cpp.cpp b/TESTS/icb_arpack_cpp.cpp
index 0e669a6..0914c54 100644
--- a/TESTS/icb_arpack_cpp.cpp
+++ b/TESTS/icb_arpack_cpp.cpp
@@ -29,9 +29,11 @@ void diagonal_matrix_vector_product(Real const* const x, Real* const y) {
 template<typename Real>
 void real_symmetric_runner(double const & tol_check) {
   a_int const N = 1000;
-  a_int const nev = 9;
+  a_int const nev = 3;
+
+  a_int ncv = 2 * nev + 1;
+  ncv *= 3; /*ease CV: increase space dimension*/
 
-  a_int const ncv = 2 * nev + 1;
   a_int const ldv = N;
 
   a_int const ldz = N + 1;
@@ -50,22 +52,23 @@ void real_symmetric_runner(double const & tol_check) {
   std::vector<Real> d((nev + 1));
   std::vector<Real> z((N + 1) * (nev + 1));
 
-  std::array<a_int, 11> iparam{};
+  std::array<a_int, 11> iparam_aupd{};
+  std::array<a_int,  7> iparam_eupd{};
 
-  iparam[0] = 1;
-  iparam[2] = 10 * N;
-  iparam[3] = 1;
-  iparam[4] = 0;  // number of ev found by arpack.
-  iparam[6] = 1;
+  iparam_aupd[0] = iparam_eupd[0] = 1;
+  iparam_aupd[2] = iparam_eupd[2] = 10 * N;
+  iparam_aupd[3] = iparam_eupd[3] = 1;
+  iparam_aupd[4] = iparam_eupd[4] = 0;  // number of ev found by arpack.
+  iparam_aupd[6] = iparam_eupd[6] = 1;
 
-  std::array<a_int, 14> ipntr{};
+  std::array<a_int, 11> ipntr{}; // 11 ONLY for dsaupd / ssaupd
 
   a_int info = 0, ido = 0;
 
   while (ido != 99) {
     arpack::saupd(ido, arpack::bmat::identity, N,
                   arpack::which::largest_magnitude, nev, tol, resid.data(), ncv,
-                  V.data(), ldv, iparam.data(), ipntr.data(), workd.data(),
+                  V.data(), ldv, iparam_aupd.data(), ipntr.data(), workd.data(),
                   workl.data(), lworkl, info);
 
     diagonal_matrix_vector_product(&(workd[ipntr[0] - 1]),
@@ -73,9 +76,9 @@ void real_symmetric_runner(double const & tol_check) {
   }
 
   // check number of ev found by arpack.
-  if (iparam[4] < nev /*arpack may succeed to compute more EV than expected*/ || info != 0) {
-    std::cout << "ERROR: iparam[4] " << iparam[4] << ", nev " << nev << ", info " << info << std::endl;
-    throw std::domain_error("Error inside ARPACK routines");
+  if (iparam_aupd[4] < nev /*arpack may succeed to compute more EV than expected*/) {
+    std::cout << "ERROR: iparam[4] " << iparam_aupd[4] << ", nev " << nev << ", info " << info << std::endl;
+    throw std::domain_error("real_symmetric_runner - Error: inside ARPACK routines");
   }
 
   std::vector<a_int> select(ncv);
@@ -84,7 +87,7 @@ void real_symmetric_runner(double const & tol_check) {
   arpack::seupd(rvec, arpack::howmny::ritz_vectors, select.data(), d.data(),
                 z.data(), ldz, sigma, arpack::bmat::identity, N,
                 arpack::which::largest_magnitude, nev, tol, resid.data(), ncv,
-                V.data(), ldv, iparam.data(), ipntr.data(), workd.data(),
+                V.data(), ldv, iparam_eupd.data(), ipntr.data(), workd.data(),
                 workl.data(), lworkl, info);
 
   for (int i = 0; i < nev; ++i) {
@@ -92,7 +95,7 @@ void real_symmetric_runner(double const & tol_check) {
 
     /*eigen value order: smallest -> biggest*/
     if (std::abs(d[i] - static_cast<Real>(1000 - (nev - 1) + i)) > tol_check) {
-      throw std::domain_error("Correct eigenvalues not computed");
+      throw std::domain_error("real_symmetric_runner - Correct eigenvalues not computed");
     }
   }
   std::cout << "------\n";
@@ -109,9 +112,11 @@ void diagonal_matrix_vector_product(std::complex<Real> const* const x,
 template<typename Real>
 void complex_symmetric_runner(double const & tol_check) {
   a_int const N = 1000;
-  a_int const nev = 9;
+  a_int const nev = 1;
+
+  a_int ncv = 2 * nev + 1;
+  ncv *= 3; /*ease CV: increase space dimension*/
 
-  a_int const ncv = 2 * nev + 1;
   a_int const ldv = N;
 
   a_int const ldz = N + 1;
@@ -154,9 +159,9 @@ void complex_symmetric_runner(double const & tol_check) {
   }
 
   // check number of ev found by arpack.
-  if (iparam[4] < nev /*arpack may succeed to compute more EV than expected*/ || info != 0) {
+  if (iparam[4] < nev /*arpack may succeed to compute more EV than expected*/) {
     std::cout << "ERROR: iparam[4] " << iparam[4] << ", nev " << nev << ", info " << info << std::endl;
-    throw std::domain_error("Error inside ARPACK routines");
+    throw std::domain_error("complex_symmetric_runner - Error: inside ARPACK routines");
   }
 
   std::vector<a_int> select(ncv);
@@ -174,7 +179,7 @@ void complex_symmetric_runner(double const & tol_check) {
     /*eigen value order: smallest -> biggest*/
     if (std::abs(std::real(d[i]) - static_cast<Real>(1000 - (nev - 1) + i)) > tol_check ||
         std::abs(std::imag(d[i]) - static_cast<Real>(1000 - (nev - 1) + i)) > tol_check) {
-      throw std::domain_error("Correct eigenvalues not computed");
+      throw std::domain_error("complex_symmetric_runner - Correct eigenvalues not computed");
     }
   }
 }
@@ -183,7 +188,6 @@ int main() {
   sstats_c();
 
   // arpack without debug
-  real_symmetric_runner<float>(1);
   real_symmetric_runner<double>(1.e-05);
 
   a_int nopx_c, nbx_c, nrorth_c, nitref_c, nrstrt_c;
@@ -206,7 +210,6 @@ int main() {
           1);
 
   // arpack with debug
-  complex_symmetric_runner<float>(1);
   complex_symmetric_runner<double>(1.e-05);
 
   return 0;

My feeling (not sure) at this point is that the problem may be in arpack.hpp, where #include <complex> and #include <complex.h> must coexist for both std::complex and reinterpret_cast<_Complex float>: I'd say they likely step on each other's feet (different definitions of the same thing? or related stuff?), and these incompatibilities may have stayed hidden on LP64 (before the iparam/ipntr size fix was applied) but blow up on ILP64, and on LP64 once the iparam/ipntr size fix is applied. This is all mixed and messy: I need to split things apart...

I'm not sure here: I'm still trying to understand where the problem could come from.

@sylvestre: I can't and won't fix all of this mixed mess! What I propose is:

  • I set this PR aside (it mixes too many problems at once) as a "reminder"

  • from this PR, I will create several PRs (likely based on each other), in a step-by-step way, one per dedicated problem where a solution can be found (I won't fix everything: this must end somewhere...): you'll have to merge these PRs in the "right" step-by-step order.

OK with you?

@fghoussen
Collaborator Author

@sylvestre: I will try to start splitting the PR.

@fghoussen
Collaborator Author

I'm not paid for this. I can't afford to keep stepping back endlessly. I'm just worn out. Take it or leave it.

@sylvestre
Contributor

@fghoussen not paid either... ;)
As said, I don't want to see python 2 support. It is already a dead project.

@sylvestre sylvestre merged commit cfdb5f6 into opencollab:master Feb 26, 2020
@sylvestre
Contributor

bravo!

@@ -0,0 +1,125 @@
#!/usr/bin/env python
Contributor

I guess the .in could be renamed as we don't plan to run with py2?

Collaborator Author

As you like. I added .in because some *.py.in files cannot be used directly as python scripts (they need the PYINT variable substituted to handle the ILP64 case)... But the repo does not compile in ILP64: I discovered that afterwards, which led me to other problems, likely related to #230.

apt-get -y install python3-minimal python3-pip python3-numpy && \
pip3 install numpy && \
apt-get -y install wget && \
wget https://sourceforge.net/projects/boost/files/boost/1.67.0/boost_1_67_0.tar.gz && \
@fghoussen
Collaborator Author

fghoussen commented Feb 26, 2020

@sylvestre: since you want python3 only, we need to build boost from source: wget, tar, ...

Contributor

Why? Debian works without doing that

Contributor

Boost 1.67 exists in Debian testing:
https://tracker.debian.org/pkg/boost-defaults
why not run this part of the CI just on that distro?

Collaborator Author

Debian provides a libboost-python-dev package which is built for python2, not for python3; that's why boost must be built from source for python3. If you can replace the boost build-from-source with an apt-get install, that would be good.

Contributor

Are you sure?
https://packages.debian.org/sid/libboost-python1.67-dev

% apt-cache show libboost-python1.67-dev | grep python3
Depends: libboost1.67-dev (= 1.67.0-17), libboost-python1.67.0 (= 1.67.0-17), python-dev, python3-dev

Collaborator Author

AFAIR, it does not work on my debian box due to missing symbols: the build or the tests fail. #248 should fail.

cd /tmp && \
cd arpack-ng && \
git status && \
git log -2 && \
sed -e 's/mpirun /mpirun --allow-run-as-root --oversubscribe /' -i CMakeLists.txt && \
mkdir -p build && cd build && \
cmake -DEXAMPLES=ON -DMPI=ON -DICB=ON .. && \
cmake -DEXAMPLES=ON -DMPI=ON -DPYTHON3=ON -DBOOST_PYTHON_LIBSUFFIX='36' .. && \
Collaborator Author

@sylvestre: as you want python3 only, note that BOOST_PYTHON_LIBSUFFIX may change if an apt-get upgrade changes the default python version on bionic.

Contributor

I guess the suffix detection could be done automatically, no?

Collaborator Author

Nope...

Contributor

Why?

Collaborator Author

I don't know. If you don't set it, cmake is lost and finds nothing. I think the reason is that everything is designed so that you can have a boost lib for any python version X.Y, so automatic detection would not be so "obvious"...
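
For what it's worth, one could derive a candidate suffix from the interpreter itself. This is a hedged sketch, not what the PR's CMake does, and `boost_python_libsuffix` is a hypothetical helper: it assumes the distro follows the libboost_pythonXY naming scheme, which is exactly the assumption that breaks when the default python changes.

```python
import sys

def boost_python_libsuffix() -> str:
    """Candidate Boost.Python library suffix, e.g. "36" for
    libboost_python36, taken from the running interpreter.
    Only valid if the installed boost was built for this same python."""
    return f"{sys.version_info.major}{sys.version_info.minor}"
```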

@fghoussen
Collaborator Author

@turboencabulator: could you help add an --enable-python3 option on the autotools side? (i.e. doing the same as cmake -DPYTHON3=ON). It is impossible to get cmake working in CI for ILP64 (#244 (comment)): so it is impossible to test pyarpack on ILP64, where autotools does work...

@fghoussen fghoussen deleted the python branch March 5, 2020 12:59