
[EXTENDED] PR 26 Parallelized multiple right-hand side support, fully typed tools #27

Open · wants to merge 58 commits into `main`
Conversation

@MothNik commented Jun 9, 2024

⚠️ This pull request can fully replace #26 which can basically be closed ⚠️

Update June 10, 2024

I temporarily converted this to a draft because the decisions for #28 heavily affect this PR: they might involve another change in the Cython-level API.

Changes

This pull request is an improved version of #26, so please refer to the basic changes there. On top of those changes, this pull request

  • fully types the tools-module
  • parallelizes the Cython implementation, which required
    • restricting the Cython-level interface in solver.pxd so that the solvers only accept C-contiguous arrays. This will not be backwards compatible because users might already pass Fortran-contiguous arrays (maybe without even knowing).
    • refactoring the C-level functions to deal with pointers rather than memoryviews, which allows releasing the GIL and thus parallelizing over the right-hand sides in the first place
    • changing the setup.py to include OpenMP
  • adds the workers-argument to the Python interface

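Since the new Cython-level interface only accepts C-contiguous arrays, callers can ensure the right memory layout on the Python side, e.g.:

```python
import numpy as np

# a Fortran-ordered right-hand side, e.g. coming from a transposed array
rhs = np.asfortranarray(np.ones((100, 10)))
assert not rhs.flags["C_CONTIGUOUS"]

# np.ascontiguousarray copies only if the layout is not already C-ordered
rhs_c = np.ascontiguousarray(rhs)
assert rhs_c.flags["C_CONTIGUOUS"]
```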
Since all of this is a breaking change and the Cython interface is not backwards compatible, I suggest a major version jump from 1 to 2 (maybe as an $\alpha$-version of it?).
Again, I tried to update the changelog, but you might still need to look over it.

Tests

The installation and parallelisation were tested on Windows 11 and Ubuntu (WSL). ✅
Tests now cover serial and parallel solves, but they have to be run with

pytest --cov=pentapy .\tests --cov-report html -x 

now to prevent interference with the multithreading. ✅
⚠️ Note that `-x` aborts the run at the first failure, which is mandatory for this delicate topic where we deal with raw pointers ⚠️
Depending on the OS, the doctests of tests/util_funcs may fail because the array output includes the dtype on Windows while it does not on Linux. ❌
Given that the C implementation is already very fast in serial, the parallelization does not cause a quantum leap. With 8 threads on a relatively old laptop (there it is, the grain of salt 🧂), I observe a threefold speedup for huge systems (1,000 × 1,000 with 10,000 right-hand sides):

(benchmark plot: parallel speedup)

On massively parallel systems, this will give better speedups.
However, it does not negatively affect the serial solves when workers=1:

(benchmark plot: serial run with workers=1)

@MothNik left a comment

Guys, you really declared war on making code maintainable 😅 How should anybody infer that np.array is used both to convert lists to arrays and to make a copy at the same time?
I now added a comment and made the tests cover this.

MothNik commented Jun 10, 2024

The last commit does not change the internal logic at all.
However, it already forms the foundation for future validations, e.g., by means of the condition number of the matrix.
Right now, pentapy never assesses, or lets the user assess, the quality of the solves: the result not being all np.nan does not mean that the solve was meaningful.
If the user wants the results validated, one could introduce a validate argument to the Python high-level interface of solve.
The C-level interface was broken anyway, so I tried to keep it flexible for future scenarios. Right now, the validation logic does not trigger anything, so this buys future flexibility at no cost today.

The idea was taken from LAPACK, which returns an info-value together with the solution: it is 0 to indicate success, while all other values encode certain warnings or even errors.
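As a rough sketch of that convention (`solve_with_info` is illustrative only and not part of pentapy):

```python
import numpy as np


def solve_with_info(mat, rhs):
    """Illustrative sketch (not pentapy API): return the solution together
    with a LAPACK-style ``info`` flag where 0 means success."""
    try:
        return np.linalg.solve(mat, rhs), 0
    except np.linalg.LinAlgError:
        # a nonzero info encodes the failure instead of raising
        return np.full_like(rhs, np.nan), 1


x, info = solve_with_info(np.eye(2), np.ones(2))        # info == 0
_, info_bad = solve_with_info(np.zeros((2, 2)), np.ones(2))  # info_bad == 1
```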

…ers I and II:

- now, they share one common processing logic that calls the solvers it needs depending on which solver was called

Besides, a shape check was introduced and the `info`-variable was extended.
All of this is now reflected in `core`, where the solvers and errors are now structured a bit better.
… auxiliary functions in `core`; added NumPy-solver error case test
MothNik commented Jun 11, 2024

To be able to cope with #28 and #29 in the future without compromising anything on the solvers side, I rearranged the Cython core (not the interface) as follows:

  • enums were used to provide additional safety for value comparisons and also to make the code more concise and expressive

  • the c_penta_solver1 and c_penta_solver2 interfaces now call two common functions that

    • factorize the matrix A with the respective PTRANS-factorization (I for the first, II for the second)
    • run the (parallelized) loop to solve the transformed right-hand side with this factorization, again with the respective solver

    This allowed removing the redundant logic (factorize -> solve) that was implemented in both the c_penta_solver1 and c_penta_solver2 interfaces, thereby removing one critical potential point of failure.

  • these two common functions are now responsible for all memory allocation, and they return the freshly allocated memoryviews of the factorized matrix and the result, respectively. Before, memory allocation was handled via np.empty in both the c_penta_solver1 and c_penta_solver2 interfaces.
    This in turn comes with the added bonus that other interface functions for

    • factorization
    • solving a right-hand side given an already factorized matrix

    can easily be implemented on top and just accept the fresh memoryviews that are returned by the common functions. All that's required for such an interface function is that it calls the common function and casts the returned memoryview to an Array.
    With these changes, the computation can be triggered from Python and Cython after a factorization was obtained.

This is probably a bit too abstract (I barely know how to phrase it), so please let me provide the following
Analogy Example
In basic principle, this is identical to how the banded Cholesky decomposition in SciPy works. For solving a system, there is scipy.linalg.solveh_banded. However, there is also scipy.linalg.cholesky_banded, which performs a banded Cholesky factorization, and scipy.linalg.cho_solve_banded, which takes the banded Cholesky factorization to solve one or more systems.

As of now, this PR basically implements the scipy.linalg.solveh_banded analogy for pentapy (factorize and solve in one go), but on the Cython side I arranged everything so that it's super simple to also add the analogies of scipy.linalg.cholesky_banded (factorize only) and scipy.linalg.cho_solve_banded (solve a system using a given factorization). But this would probably explode the frame of this already big PR 🤯
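For illustration, the SciPy analogy in code (a small SPD tridiagonal system in upper banded storage):

```python
import numpy as np
from scipy.linalg import cho_solve_banded, cholesky_banded, solveh_banded

# symmetric positive definite tridiagonal matrix in upper banded storage:
# row 0 = superdiagonal (first entry unused), row 1 = main diagonal
ab = np.array([[0.0, 1.0, 1.0],
               [4.0, 4.0, 4.0]])
b = np.ones(3)

x_direct = solveh_banded(ab, b)                 # factorize + solve in one go
cb = cholesky_banded(ab)                        # factorize only
x_two_step = cho_solve_banded((cb, False), b)   # solve with given factorization

assert np.allclose(x_direct, x_two_step)
```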

MothNik commented Jun 11, 2024

@MuellerSeb Sorry for all these iterations. It took me some time to get the right strategy for Cython, the GIL, memory allocation, the parallelization, and a testing framework (all tests still pass ✅).
But now I feel that this PR is ready for review and I would highly appreciate your feedback 😄 I really wanted to keep everything flexible for future adjustments. It might be true that pentapy is "only" there for solving pentadiagonal systems, so the number of features will never be as mind-blowing as SciPy's, but playing with the factorizations (like with the determinant) is a common step in mathematical problems and I really wanted to pave the way for that 🛣️

…trix

Reason: it is the main diagonal of the matrix L for A = LU, just like ps is the main diagonal of U for PTRANS-II.
So now, the main diagonals of the non-unit triangular factors are always in the central column, which makes the most sense.
@MothNik MothNik marked this pull request as ready for review June 12, 2024 05:40
- avoided the performance pitfall of `except * nogil` in the Cython implementation of the solvers
- replaced this declaration with `noexcept nogil`
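For context, a minimal sketch of the difference (Cython 3 syntax; the function name is illustrative):

```cython
# With `except * nogil`, Cython re-acquires the GIL after every call to
# check whether an exception was raised, which serializes the parallel loop.
# `noexcept nogil` declares the function as non-raising, so no check (and
# no GIL re-acquisition) is needed.
cdef void c_solve_single_rhs(double* mat_factorized, double* rhs, int n) noexcept nogil:
    ...
```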
@MuellerSeb left a comment

Gave it a first view. It was not too in-depth since, as mentioned, I have no time at this point. I really like your work; it seems to be solid and improves the package.


- fully parallelized the Cython implementation of PTRANS-I and PTRANS-II for single and multiple right-hand sides support that can now be enabled via the new ``workers`` parameter in ``pentapy.solve`` (default: 1)
- fully typed the ``pentapy.tools`` module
- updated the **Cython low level interfaces** to PTRANS-I and PTRANS-II to **only accept C-contiguous arrays** (not backwards compatible)
This is the only reason for the major version jump, right? Do you want an intermediate 1.4 release? If not, I would vote for merging the changes of 1.4 and 2.0 here.

@@ -107,7 +107,8 @@ Have a look at the script: [``examples/03_perform_simple.py``](https://github.co

## Requirements:

- [NumPy >= 1.14.5](https://www.numpy.org)
- [NumPy >= 1.20.0](https://www.numpy.org)
- [psutil >= 5.8.0](https://psutil.readthedocs.io/en/latest/) (for parallelisation)
/en/latest/ needed?

@@ -94,7 +94,8 @@ The performance plot was created with ``perfplot`` (`link <https://github.com/ns
Requirements
============

- `Numpy >= 1.14.5 <http://www.numpy.org>`_
- `Numpy >= 1.20.0 <http://www.numpy.org>`_
- `psutil >= 5.8.0 <https://psutil.readthedocs.io/en/latest/>`_
/en/latest/ needed?

@@ -0,0 +1,2 @@
[pytest]
addopts = --doctest-modules
Can't this file content be part of pyproject.toml?
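It can, via the `[tool.pytest.ini_options]` table:

```toml
[tool.pytest.ini_options]
addopts = "--doctest-modules"
```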

Comment on lines -40 to 42
[project.optional-dependencies]
scipy = ["scipy"]
umfpack = ["scikit-umfpack"]
all = [
"scipy",
"scikit-umfpack",
]
doc = [
"m2r2>=0.2.8",
"scipy>=1.1.0",
"matplotlib>=3",
"perfplot<0.9",
"numpydoc>=1.1",
"sphinx>=7",
"sphinx-gallery>=0.8",
"sphinx-rtd-theme>=2",
]
test = ["pytest-cov>=3"]
check = [
"black>=24,<25",
"isort[colors]",
"pylint",
"cython-lint",
]
[tool.setuptools.dynamic]
dependencies = {file = ["requirements/base.txt"]}
optional-dependencies = {scipy = {file = ["requirements/scipy.txt"]}, umfpack = {file = ["requirements/umfpack.txt"]}, all = {file = ["requirements/all.txt"]}, doc = {file = ["requirements/doc.txt"]}, test = {file = ["requirements/test.txt"]}, check = {file = ["requirements/check.txt"]}}

I have to admit, I am not a fan of these req files. I would like to use my veto here to have the deps in the pyproject.toml file.

Comment on lines +11 to +16
if sys.platform.startswith("win"):
openmp_arg = "/openmp"
else:
openmp_arg = "-fopenmp"

Cython.Compiler.Options.annotate = True
Please use https://github.com/astropy/extension-helpers/ here and only activate it with an env var. OpenMP shouldn't be used by default. See here for example: https://discuss.python.org/t/conflicting-binary-extensions-in-different-packages/25332

See here for reference: https://github.com/GeoStat-Framework/GSTools/blob/v1.6.0/setup.py
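A minimal sketch of the env-var gating (the variable name `PENTAPY_BUILD_PARALLEL` is hypothetical; GSTools gates OpenMP with a similar build-time variable):

```python
import sys


def openmp_compile_args(env):
    """Return OpenMP compiler flags only when explicitly requested via an
    environment variable (hypothetical name: PENTAPY_BUILD_PARALLEL)."""
    if env.get("PENTAPY_BUILD_PARALLEL", "0") != "1":
        return []  # OpenMP stays off by default
    return ["/openmp" if sys.platform.startswith("win") else "-fopenmp"]


print(openmp_compile_args({}))  # []  -- off unless requested
print(openmp_compile_args({"PENTAPY_BUILD_PARALLEL": "1"}))
```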
