
Commit

Change sparse evp test target back, add troubleshooting info about ARPACK convergence errors
kburns committed Jul 28, 2022
1 parent 14de4ed commit 8b8cfdd
Showing 2 changed files with 10 additions and 2 deletions.
2 changes: 1 addition & 1 deletion dedalus/tests/test_evp.py
@@ -66,7 +66,7 @@ def test_waves_1d(x_basis_class, Nx, dtype, sparse):
     solver = solvers.EigenvalueSolver(problem, matrix_coupling=[True])
     Nmodes = 4
     if sparse:
-        solver.solve_sparse(solver.subproblems[0], N=Nmodes, target=1)
+        solver.solve_sparse(solver.subproblems[0], N=Nmodes, target=1.1)
     else:
         solver.solve_dense(solver.subproblems[0])
     i_sort = np.argsort(solver.eigenvalues)
10 changes: 9 additions & 1 deletion docs/pages/troubleshooting.rst
@@ -9,12 +9,20 @@ This error indicates that some degrees of freedom of the solution are unconstrai
 These errors are often due to imposing boundary conditions that are redundant for some set of modes and/or failing to constrain a gauge freedom in the solution.
 See the :doc:`gauge_conditions` and :doc:`tau_method` pages for more information on fixing these issues.
 
+Sparse EVP accuracy or convergence errors
+=========================================
+
+If you have accuracy or convergence errors (ARPACK error 3) with the sparse EVP, the problem may be that you are targeting an exact eigenvalue.
+This can make the shift-invert formulation used within Dedalus exactly singular and return poor results.
+A simple solution is to offset your target eigenvalue slightly away from the exact eigenvalue of the system.
+
 Out of memory errors
 ====================
 
 Spectral simulations with implicit timestepping can require a large amount of memory to store the LHS matrices and their factorizations.
 The best way to minimize the required memory is to minimize the LHS matrix size by using as few variables as possible and to minimize the LHS matrix bandwidth (see the :doc:`performance_tips` page).
-Beyond this, several of the Dedalus configuration options can be changed the minimize the simulation's memory footprint, potentially at the cost of reduced performance (see the :doc:`configuration` page).
+The next step is to try using multistep timesteppers rather than Runge-Kutta timesteppers, since the latter require storing matrix factorizations for each internal stage.
+Beyond this, several of the Dedalus configuration options can be changed to minimize the simulation's memory footprint, potentially at the cost of reduced performance (see the :doc:`configuration` page).
 
 Reducing memory consumption in Dedalus is an ongoing effort.
 Any assistance with memory profiling and contributions reducing the code's memory footprint would be greatly appreciated!
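The targeting issue described in the troubleshooting addition can be reproduced directly with SciPy's ARPACK wrapper, which is what makes the test's change from `target=1` to `target=1.1` necessary. Below is a minimal sketch, illustrative only and not Dedalus code: shift-invert must factorize `A - sigma*I`, which is exactly singular when `sigma` is an eigenvalue, so a slight offset of the target fixes it.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Diagonal test matrix with known eigenvalues 1, 2, ..., 10.
A = sp.diags(np.arange(1, 11, dtype=float)).tocsc()

# Targeting sigma=1.0 exactly: (A - sigma*I) is singular, so the sparse LU
# factorization used by shift-invert fails (or ARPACK fails to converge).
try:
    spla.eigs(A, k=2, sigma=1.0)
except RuntimeError as e:  # ArpackError subclasses RuntimeError
    print("sigma=1.0 failed:", e)

# Offsetting the target slightly avoids the singular shift:
vals, vecs = spla.eigs(A, k=2, sigma=1.1)
print(np.sort(vals.real))  # the two eigenvalues nearest 1.1, i.e. 1 and 2
```

The same reasoning applies to `solver.solve_sparse(..., target=...)` in the test above: the wave problem has an eigenvalue at exactly 1, so targeting 1.1 keeps the shifted matrix well-conditioned.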
