broken kwant builds because of MPI build #21

Closed
basnijholt opened this issue Nov 17, 2017 · 8 comments
Comments

@basnijholt
Contributor

kwant currently has no MUMPS support (even though MUMPS should do nearly ALL the heavy lifting there); I thought I had fixed it in the latest build.

We used to link against mpiseq (MUMPS's bundled sequential MPI stub library); however, that library isn't being built anymore, and linking against the MPI versions results in errors.

Before, we linked against: zmumps, mumps_common, pord, metis, esmumps, scotch, scotcherr, gfortran.

Now I link against: ptesmumps, ptscotch, ptscotcherr, mpi, mpifort, scalapack, openblas, pthread, plus all of the former libraries.
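
For concreteness, here is the same information written out as linker flags. This is a sketch: the -l spellings and the ordering are my reconstruction from the two lists above, not copied from the feedstock.

```sh
# Common MUMPS/ordering libraries, shared by both configurations
LIBS_COMMON="-lzmumps -lmumps_common -lpord -lmetis -lesmumps \
             -lscotch -lscotcherr -lgfortran"

# Old: sequential build; mpiseq is MUMPS's bundled MPI stub library (libseq)
LIBS_SEQ="$LIBS_COMMON -lmpiseq"

# New: MPI build; "all the former libs" plus the pt*/MPI stack,
# which drags in a real MPI implementation at run time
LIBS_MPI="$LIBS_COMMON -lptesmumps -lptscotch -lptscotcherr \
          -lmpi -lmpifort -lscalapack -lopenblas -lpthread"
```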

When linking against the MPI libs, the kwant tests error out with:

Attempting to use an MPI routine before initializing MPICH

(The MPI build of MUMPS makes real MPI calls internally, and nothing in kwant calls MPI_Init first; the mpiseq stubs never required that.)

@jbweston and @michaelwimmer, do you think we should build the sequential version of MUMPS again?

Related: conda-forge/kwant-feedstock#34
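
For context, MUMPS supports exactly this out of the box via its bundled libseq: selecting one of the *.SEQ Make.inc templates builds a stub MPI library (libmpiseq) and compiles MUMPS against it. A sketch of that build, with the template name and make target reconstructed from memory, so treat both as assumptions:

```sh
# Build sequential MUMPS using the bundled MPI stubs (libseq).
# Template/target names are assumptions about the MUMPS version in use.
cp Make.inc/Makefile.inc.generic.SEQ Makefile.inc  # any *.SEQ template selects libseq
make z   # builds libmpiseq first, then the complex-double libzmumps that kwant links
```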

@jbweston
Contributor

I think that this is the most sensible course of action. Debian has several different packages for MUMPS, each with a different build configuration (sequential/parallel, with/without scotch), so this seems to be a common need.

@basnijholt
Contributor Author

basnijholt commented Nov 17, 2017

@pstjohn I see that ipopt is also linked against the sequential version of MUMPS.

@minrk do you think we should revert the MPI version of MUMPS and make it a separate package instead? And are there any libraries where you already use MUMPS with MPI support?

@pstjohn
Contributor

pstjohn commented Nov 17, 2017

Despite maintaining the ipopt feedstock, I don't have a great idea of its particular requirements regarding the MPI build. Is this related to #6?

@minrk
Member

minrk commented Nov 19, 2017

It sounds like splitting serial and parallel mumps into separate packages is the right thing to do, then. I haven't used parallel mumps yet (PETSc is where I want to use it next).

@minrk
Member

minrk commented Jan 17, 2018

I'd like to take another stab at getting parallel mumps. The scotch feedstock is currently building ptscotch in a branch; I could do the same with parallel mumps here. However, looking at Debian, it seems like maybe mumps should canonically be the parallel build and mumps-seq the non-MPI variant. Or should we let history decide who gets 'mumps', and call the new package 'mumps-parallel' or 'mumps-mpi' or something?

@basnijholt
Contributor Author

> should we let history decide who gets 'mumps'

That's probably a good idea, mostly for backward compatibility.

mumps-mpi or mumps-parallel: both sound good to me.

@minrk
Member

minrk commented Jan 29, 2018

mumps-mpi will be created by merging #24
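
Once merged, downstream recipes can pick the variant they need; hypothetically (only the mumps-mpi name is confirmed in this thread, and 'mumps' staying sequential follows the backward-compatibility argument above):

```sh
conda install -c conda-forge mumps      # historical name: the sequential build
conda install -c conda-forge mumps-mpi  # new MPI-enabled variant
```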

@minrk closed this as completed Jul 13, 2018
@minrk
Member

minrk commented Jul 13, 2018

This should be fixed by #24
