Build problem with MPI library #198
Hi Rob, We currently support only GCC or LLVM/Clang, and OpenMPI or MVAPICH2. Intel MPI will probably work as long as it supports MPI3 non-blocking collectives, but we'll need to update our configure script to identify the correct libraries to link against (it doesn't use the mpicc/mpicxx wrappers). I'm not sure about the Intel compiler---we depend on some GCC directives which it may or may not support. We no longer support GASNet. If you could get us an account on a machine with Intel compiler/MPI we could probably get these things debugged.
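A quick way to check that MPI-3 requirement against any given MPI installation (a sketch: compile a trivial MPI_Ibarrier call with whatever wrapper is on your PATH; the file name is arbitrary):

```bash
# Probe for MPI-3 non-blocking collectives by compiling a minimal test.
cat > ibarrier_test.c <<'EOF'
#include <mpi.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Request req;
    MPI_Ibarrier(MPI_COMM_WORLD, &req);  /* MPI-3 non-blocking barrier */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Finalize();
    return 0;
}
EOF
mpicc ibarrier_test.c -o ibarrier_test && echo "MPI-3 non-blocking collectives look available"
```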
Jacob's right, we don't use the Intel compiler, so can't make any promises. CMake ships with some pretty weak scripts to try to find libraries like MPI. It looks like you might have better luck with the HDF FindMPI.cmake (http://www.hdfgroup.org/ftp/HDF5/hdf-java/hdf-java-examples/config/cmake/FindMPI.cmake), which seems to know about Intel MPI. I think to use it, you'd have to download that FindMPI.cmake file, and add the path to it to CMAKE_MODULE_PATH (it might work to put it in an environment variable, or you can pass it to CMake through our configure script: `./configure ... -- -DCMAKE_MODULE_PATH=/path/to/new/findmpi`).
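A minimal sketch of that workflow (the URL is the one linked above; the local directory name is arbitrary, and the `--` passthrough is the mechanism described later in this thread):

```bash
# Fetch the HDF group's FindMPI.cmake into a local module directory.
mkdir -p cmake-modules
wget -O cmake-modules/FindMPI.cmake \
  http://www.hdfgroup.org/ftp/HDF5/hdf-java/hdf-java-examples/config/cmake/FindMPI.cmake

# Everything after "--" is handed straight to the cmake invocation.
./configure -- -DCMAKE_MODULE_PATH=$PWD/cmake-modules
```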
Thanks, Jacob. I don’t mind building with GCC initially, but will want to use Intel compilers in the future. That’s kind of a nice thing to do when you’re an Intel employee ☺. Rob
Thanks, Brandon. I’ll try that HDF solution. Will let you know how I fare. Rob
Great! As I said, we'd be happy to add explicit support for the Intel compiler and MPI; we just need a way to test it.
There are multiple ways to do the testing with Intel tools, Jacob. The easiest would be to obtain a free trial license (https://software.intel.com/en-us/articles/try-buy-tools), valid for 30 days. Next best would be to sit together (virtually) to figure things out. Getting access to a machine we own is also possible, but more work. Let’s try that if the other two options do not work.
No success with downloading HDF's FindMPI.cmake. I put it in $PWD/CMakeFiles and then defined an environment variable: "export CMAKE_MODULE_PATH=$PWD/CMakeFiles"
I also tried with double leading hyphen --DCMAKE_MODULE_PATH and --CMAKE_MODULE_PATH, to no avail.
Sometimes with CMake you also have to make sure you delete the build directory it's operating on before trying changes like this. Sorry I can't be more helpful than that right now. I'll see if I can find some time to test it out myself later.
Thanks, Brandon. Yes, I delete build each time it gets created before I do a new test--most of the time configure chokes before it gets to create build, though :). Please let me know if it would help to debug this issue via desktop sharing. I did find, by looking inside configure, that there are no standard options dealing with CMake paths.
Okay good. Yeah I mean the idea here is that this line (https://github.com/uwsampa/grappa/blob/master/configure#L146) should be passing all the extra args through to the CMake command, so that's why I think they have to go after the `--`.
Right, and that is why I also tried it with the preceding "--", but have not gotten that to work yet.
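For reference, a sketch of the intended placement (the `--gen` and `--mode` values are taken from the configure transcript at the end of this thread):

```bash
# Script-level options go before "--"; raw CMake cache entries go after it.
# A single leading "-D" (not "--D") is what CMake expects.
./configure --gen=Make --mode=Release -- -DCMAKE_MODULE_PATH=/path/to/new/findmpi
```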
Could you take another look at the FindMPI issue I have been experiencing? The only thing I need is a way to influence the priority of the paths CMake uses for its search files. I would really like to get over this hump. You should be able to test this with any CMake find file, doesn’t have to be MPI. I’ll be out of the office next week, but want to hit the ground running when I return. Thanks! Rob
The best thing to do is for us to install the Intel tools here and debug---I expect as soon as we fix the MPI discovery problem we'll run into compiler flag problems. I should have time to get them installed today or Monday, and then Brandon and I can take a look.
Splendid, thanks, Jacob. When you get to the Intel web site, it’ll be easiest to request a trial version of the cluster software suite, which includes the compilers and MPI, and ITAC (Intel Trace Analyzer and Collector). If you need the tools for more than 30 days, let me work on this side to get you a truly free version, not just the temporary version—can’t promise I’ll succeed, though. Rob
We certainly wouldn't turn away a license. If we're going to try to maintain compatibility then we'll need more than 30 days. Surprisingly enough, I'll also be out of commission the next couple weeks. I'll see if I can find time to look into the CMake thing this afternoon, but it's something that's not Grappa-specific at all, so other forums may have the answer.
I've made some progress with this. I installed the eval version of (I think this is the right name) Intel Parallel Studio XE Cluster Edition 2015 Update 1 on our cluster, and managed to get Grappa code to run with Intel MPI, but not with the Intel compiler. More details:

Intel MPI with GCC 4.8: This just worked once I sourced the Intel MPI variable file, even without changing the MPI detection code in the configure script. I verified this by looking at the libraries listed in the `Found MPI_C` and `Found MPI_CXX` lines printed by the configure script. I did have to set a few environment variables to get MPI to work with our cluster's scheduler and network: `source /sampa/share/intel-cluster-studio/impi_latest/bin64/mpivars.sh`, `export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so` (to support Slurm srun), and `export I_MPI_FABRICS='shm:ofa'` (the supported fabric list on our cluster). After this I was able to configure, make and run (with grappa_run) some of the demo programs. I didn't look into performance at this point.

Intel MPI with Intel compiler: I ran into both a compilation error, and once I worked around that, a segfault. The source of the segfault is not yet clear. It's showing up in a place where a segfault shouldn't be possible, but it's in the worker spawn code, and I know our stack switching code has confused other compilers in the past. I'll have to spend some more time digging into this. I got the configure script to pick up the Intel compiler by doing this:
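The command itself did not survive in this thread; judging from the mpigcc/mpigxx invocation Jacob posts further down, it was presumably something along these lines (the `mpiicc`/`mpiicpc` names are Intel MPI's Intel-compiler wrappers, and the paths are placeholders, so treat this as an assumption rather than Jacob's exact command):

```bash
# Assumed shape of the invocation: select icc via the configure script,
# and point CMake's FindMPI at the Intel-compiler MPI wrappers.
./configure --cc=/opt/intel/bin/icc -- \
  -DMPI_C_COMPILER=mpiicc \
  -DMPI_CXX_COMPILER=mpiicpc
```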
That is great progress, Jacob. I’ll try to duplicate your effort on my cluster. Will keep you posted. Rob
Hi Jacob, Unfortunately, what worked for you didn’t for me—I should have known, since I always source my Intel MPI resource files in my login shell. Changing the compilers to gcc/g++ doesn’t affect the search for the MPI library, though I did try without the Intel compilers. Still searching. Rob
Too bad! I'll take a look at the MPI detection code and see if I can offer any hints. If possible, could you paste in your $PATH and $LD_LIBRARY_PATH variables? Maybe we can identify an ordering problem. Happy New Year to you too!
Sure, Jacob: [rfvander@bar1 bin]$ echo $PATH Please promise you won’t work on this tonight ☺. Rob
Don't worry; going home shortly. :-) Those variables look fine. Another thing to try: run the configure script with tracing enabled. There will be a bunch of output from the FindMPI.cmake module, and we may identify something there. Send me a copy of the output over email (just in case anything sensitive ends up in the trace). Here's the command I used to do this: `./configure -- --trace 2>&1 | tee cmake-trace.txt`
Thanks, Jacob. I sent the output via email. Rob
Rob, try configuring Grappa with a command like this (with the paths fixed for your Intel MPI installation, of course): `./configure -- -DMPI_C_COMPILER=/sampa/share/intel-cluster-studio/impi_latest/bin64/mpigcc -DMPI_CXX_COMPILER=/sampa/share/intel-cluster-studio/impi_latest/bin64/mpigxx`
If that doesn't work, according to the FindMPI.cmake source, we should be able to set the include/library paths and flags directly like this: `./configure -- -DMPI_CXX_INCLUDE_PATH=(something) -DMPI_CXX_LINK_FLAGS=(something) -DMPI_CXX_COMPILE_FLAGS=(something) -DMPI_CXX_LIBRARIES=(something)`
Relevant docs from FindMPI.cmake:
```
=== Variables ===
This module will set the following variables per language in your project,
where <lang> is one of C, CXX, or Fortran:
  MPI_<lang>_FOUND           TRUE if FindMPI found MPI flags for <lang>
  MPI_<lang>_COMPILER        MPI Compiler wrapper for <lang>
  MPI_<lang>_COMPILE_FLAGS   Compilation flags for MPI programs
  MPI_<lang>_INCLUDE_PATH    Include path(s) for MPI header
  MPI_<lang>_LINK_FLAGS      Linking flags for MPI programs
  MPI_<lang>_LIBRARIES       All libraries to link MPI programs against
```
and
```
=== Usage ===
To use this module, simply call FindMPI from a CMakeLists.txt file, or
run find_package(MPI), then run CMake. If you are happy with the
auto-detected configuration for your language, then you're done. If not,
you have two options:
  1. Set MPI_<lang>_COMPILER to the MPI wrapper (mpicc, etc.) of your
     choice and reconfigure. FindMPI will attempt to determine all the
     necessary variables using THAT compiler's compile and link flags.
  2. If this fails, or if your MPI implementation does not come with
     a compiler wrapper, then set both MPI_<lang>_LIBRARIES and
     MPI_<lang>_INCLUDE_PATH. You may also set any other variables
     listed above, but these two are required. This will circumvent
     autodetection entirely.
When configuration is successful, MPI_<lang>_COMPILER will be set to
the compiler wrapper for <lang>, if it was found. MPI_<lang>_FOUND and
other variables above will be set if any MPI implementation was found
for <lang>, regardless of whether a compiler was found.
```
For further reference, when I configure using GCC 4.8 and the Intel MPI, the `Found MPI_CXX` line in the configure output lists the libraries the CMake FindMPI module settled on, and the `Found MPI_C` line lists the corresponding C libraries (not sure if these are important for our build). The FindMPI code discovers these by running the MPI compiler wrapper and parsing the compile/link line it reports.
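To see what a wrapper would contribute, you can also ask it directly; Intel MPI's wrappers accept `-show`, which prints the underlying compile/link line without running it (a sketch, assuming the wrappers are on your PATH):

```bash
# Print the full compiler and linker command lines the wrappers would use.
mpigcc -show
mpigxx -show
```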
I should also point out that in my bash environment I have the environment variables CC=gcc and CXX=g++ set. You can also set these with a single option to our configure script, like this: `./configure --cc=/sampa/share/gcc-4.8.2/rtf/bin/gcc -- -DMPI_C_COMPILER=/sampa/share/intel-cluster-studio/impi_latest/bin64/mpigcc -DMPI_CXX_COMPILER=/sampa/share/intel-cluster-studio/impi_latest/bin64/mpigxx`
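Putting the pieces together, a fully spelled-out override might look like the sketch below. Every path is a placeholder for the corresponding location in your Intel MPI installation, and the library names are assumptions based on what Intel MPI typically ships:

```bash
IMPI=/opt/intel/impi/latest   # placeholder install root
./configure -- \
  -DMPI_C_INCLUDE_PATH=$IMPI/include64 \
  -DMPI_C_LIBRARIES=$IMPI/lib64/libmpi.so \
  -DMPI_CXX_INCLUDE_PATH=$IMPI/include64 \
  -DMPI_CXX_LIBRARIES="$IMPI/lib64/libmpicxx.so;$IMPI/lib64/libmpi.so"
```

Note the semicolon-separated CMake list (quoted so the shell does not split it) when more than one library is needed.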
New issue, configure chokes on --cc:
Hi Jacob, Sadly, none of this worked. Configure keeps not wanting to recognize MPI_C(XX)_LIBRARIES, even though I define them (I have already exported CC and CXX, and CMake does find those): `-- Configuring incomplete, errors occurred!`
And this is what happens if I put things on the command line (mpigcc and mpigxx are in my path): `-- Configuring incomplete, errors occurred!`
Hi Jacob, I am now trying to build your implementation of synch_p2p, using the uts example in the grappa repo as a guide. However, uts as described in README-Grappa.md does not build.
Then I tried another example, sort, which doesn’t have a Makefile. Then I looked at isopath, which has a grappa subdirectory with a Makefile. Typing make there produced the following:
```
[rfvander@bar1 grappa]$ make
Makefile:9: //include.mk: No such file or directory
Makefile:41: //system/Makefile: No such file or directory
Makefile:78: warning: overriding recipe for target `run'
Makefile:75: warning: ignoring old recipe for target `run'
make: *** No rule to make target `//system/Makefile'.  Stop.
```
Perhaps it is time for a little primer on how to build a grappa application? Thanks. Rob
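The doubled slashes in `//include.mk` suggest the Makefile prefixes those paths with a make variable that is empty in this environment. A guess at a workaround (the variable name `GRAPPA_HOME` is an assumption; check the first lines of that Makefile for the real name):

```bash
# If the Makefile builds its include paths from an environment variable,
# setting it to the repository root may resolve the //include.mk lookups.
export GRAPPA_HOME=/lustre/home/rfvander/grappa   # assumed variable and path
make
```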
Hello Jacob, While the build problem is now resolved on my research cluster, I am having continued problems with building on my production cluster. It does not have access to the Internet, so I downloaded the third-party packages and built using --no-downloads. I also specify all the compilers in the same way as on my research cluster, but I keep getting error messages. As you can see (I added the definition of environment variables that I set before building), CMake cannot find MPI_CXX or MPI_CXX_LIBRARIES, even though these variables are explicitly defined. Could you give me an idea how to work around this problem? Ultimately, I want to compare timings, and I won’t be able to do that on our research cluster. Thanks. Rob
```
[rfvander@eln4 grappa]$ \rm -rf build/
-- Configuring incomplete, errors occurred!
```
From: Jacob Nelson [mailto:notifications@github.com] As for the subject of this ticket: When we last talked we had two problems: It appears that you've made progress on one or both of these? What happened? I would like to figure out more of what was going wrong with MPI discovery so I can file a bug with the CMake folks. —
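For reference, a sketch of the offline flow Rob describes (only the `--no-downloads` flag is confirmed in this thread; the staging location for the third-party tarballs is an assumption):

```bash
# On a machine with Internet access: stage the third-party tarballs,
# then copy them over to the production cluster (destination assumed).
scp third-party-tarballs/* rfvander@eln4:grappa/third-party/

# On the production cluster: configure without attempting any downloads.
./configure --no-downloads
```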
This issue is still open for me, unfortunately. The only grappa codes I have been able to build are integrated in your package, and as such are not a model for what an application developer would do. Could you send me a simple example: a tar with just an example makefile and a source code? Thanks. Rob
Hi Rob, We're taking a moment to remove some complexity from our build system before updating the docs with details on adding new code. I'll get back to you shortly. |
Great, thanks, Jacob. I hope you’re not getting frustrated with all my questions, and hope that the result of all of this will be that Grappa will be easier to use for everybody. Rob
Not at all! It's immensely helpful. I just hope I can make progress fast enough to keep you interested while not neglecting my other responsibilities. :-) Can we schedule some screen-sharing time to debug the MPI problem? |
Absolutely! I’ll send an invite if you give me an indication of your availability. Thanks, Jacob.
After further debugging, we've determined that this MPI detection error is due to a bug in the Intel mpicc wrapper script---in versions prior to 5.0.2 it doesn't propagate errors from the underlying compiler, which confuses CMake's MPI detection script. I see three ways to solve this now: upgrade to Intel MPI 5.0.2 or later, where the wrapper propagates errors correctly; patch the wrapper script to pass the underlying compiler's exit status through; or point CMake at the GCC-based wrappers: `CC=gcc CXX=g++ ; ./configure -- -DMPI_C_COMPILER=mpigcc -DMPI_CXX_COMPILER=mpigxx`
This works for me when I hack my mpicc script, and works for one of the users in the CMake bug report, but it could behave differently on your system if something else is also going on. Note that gcc/g++ here must be at least version 4.7.2.
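For the curious, a sketch of what "hacking" the wrapper amounts to, assuming the pre-5.0.2 script simply discards the underlying compiler's exit status:

```bash
#!/bin/sh
# Minimal illustration of the required behavior: a compiler wrapper must
# exit with the underlying compiler's status, or callers such as CMake's
# try-compile step see every invocation as a success.
gcc "$@"
exit $?
```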
Thanks, Jacob. I could confirm that the proper error propagation does work for MPI version 5.0.2, and not for the version I was using earlier. The difference is in the mpigcc scripts, not mpicc. So I am pointing to the newer MPI now. I’d like to note, though, that we ultimately want to link with the Intel compilers, not GNU. Rob
Great! When you're able to build and run a binary we can close the ticket. As for using the Intel compiler, I'll track that in #205.
Sadly, while configure now breezed through, the build failed. Here is the end of the build output. Rob
```
common.copy /panfs/panfs3/users3/rfvander/grappa/build/Make+Release/third-party/lib/libboost_prg_exec_monitor.a
```
Would you verify that your GCC version is >= 4.7.2 with `gcc --version`?
It isn’t, just checked, so I’ll move to 4.9, which is available in the corner of my system. Rob
Sigh. This is what happens when I upgrade to gcc v4.9 and use the latest MPI compiler:
```
[rfvander@eln4 grappa]$ ./configure --no-downloads
Run Build Command:"/usr/bin/gmake" "cmTryCompileExec2839858443/fast"
/opt/crtdc/cmake/3.0.2/bin/cmake -E cmake_progress_report
/opt/crtdc/gcc/gcc-4.9.2/libexec/gcc/x86_64-unknown-linux-gnu/4.9.2/cc1:
gmake: *** [cmTryCompileExec2839858443/fast] Error 2
CMake will not be able to correctly generate this project.
-- Configuring incomplete, errors occurred!
```
Would you verify that you can build a simple plain C program with GCC 4.9, and an MPI program with mpigcc and GCC 4.9? The library it's complaining about is part of GCC, so if mpigcc can't find GCC's library include paths we would expect this sort of error.
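Spelled out, the sanity checks being requested look something like this (file names are placeholders):

```bash
gcc --version    # confirm >= 4.7.2 (here it should report 4.9.x)

# Plain C with GCC.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
EOF
gcc hello.c -o hello && ./hello

# MPI with the Intel MPI GCC wrapper.
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d\n", rank);
    MPI_Finalize();
    return 0;
}
EOF
mpigcc hello_mpi.c -o hello_mpi
```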
Oh, and it looks like this git clone doesn't have the SHMMAX fix---you should do a pull to get the latest bits. |
Right, that’s the problem. I’ve poked around, but nothing compiles with this version of gcc on our system. I’m asking the admins to install a new version, or patch up the one we have. Rob [rfvander@eln4 Transpose]$ more test.c
Will do, thanks. Rob
OK, Jacob, progress on my production cluster. It turns out that not all compiler dependencies were set. I won’t bore you with the details, but suffice it to say that after correcting that, and after pulling the new bits, I could configure and make grappa (of course, I also needed to do the no-downloads hack). But I could not build hello_world. Error log attached. `[rfvander@eln4 Make+Release]$ make demo-hello_world 2> error.log`
Great! Simon and I just saw this error in a VM he was building. It went away when we blew away the build directory and rebuilt (with no -j flag specified). I'm trying to reproduce it again. Would you try running like this and send me the output? `make demo-hello_world VERBOSE=1` And if convenient would you tar up your build directory and send it to me too? I'll point you at an FTP server you can drop it on.
Um, OK, you may be excited, but I just want to build the darn thing ☺. Anyway, I’ll do the whole thing again and stuff everything you need at that FTP site. And after that I’ll do the non-parallel make so I can finally build and run some Grappa code! I’m at home with slow internet now, so won’t be moving the gobs of data to your FTP site until tomorrow at work. Rob
Hey, I'm just excited that I could reproduce this one. You can skip sending the build directory since I've been able to repeat it now. |
OK, Jacob, back at my desk now. So I should not do anything at this point? Rob
Yes, hold off for a bit---the serial build doesn't avoid the problem. I'll let you know when my debugging yields something. |
OK, I’ll immediately start doing nothing, then.
What did you do differently that repeated it?
(replying belatedly to Simon: I built Boost from scratch rather than pointing at an already-installed version. It turns out our build system made different assumptions in that case which are no longer true.) |
Okay, Rob, I just merged some more build system changes into master. If you pull and rebuild you should not get the error you saw. I'll have a makefile solution shortly as well. |
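Concretely, the rebuild cycle being suggested (directory and target names are taken from earlier messages in this thread; deleting the build directory first is the precaution Brandon mentioned above):

```bash
git pull                      # pick up the merged build-system changes
rm -rf build                  # start from a clean tree
./configure --no-downloads
cd build/Make+Release
make demo-hello_world
```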
Terrific, Jacob. I really appreciate your efforts and patience. I’ll let you know how I fare. Rob
The sweet smell of success! Have a good weekend, Jacob. Rob `[rfvander@eln4 Make+Release]$ make demo-hello_world`
I downloaded Grappa and am now trying to build it, but the instructions are a bit sparse. If I define the symbols CC and CXX to resolve to the Intel compilers icc and icpc, respectively, I get the error message below. Obviously, my installed MPI cannot be found. I tried to fix that by setting “export MPICC=mpiicc”, but that did not work, nor did “export MPI_C=mpiicc”. There is no reference to MPI in “configure” or in “FindPackageHandleStandardArgs.cmake”. Do you have any suggestions? By the way, I also have GASNet installed, so if that is the better communication layer, I'll use that--if I can get some instructions how to do that. Thanks.
Rob
```
[rfvander@bar1 grappa]$ export CC=icc
[rfvander@bar1 grappa]$ export CXX=icpc
[rfvander@bar1 grappa]$ ./configure --gen=Make --mode=Release
cmake /lustre/home/rfvander/grappa -G"Unix Makefiles" -DSHMMAX=33554432 -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc -DBASE_C_COMPILER=icc -DBASE_CXX_COMPILER=icpc -DCMAKE_BUILD_TYPE=RelWithDebInfo
-- The C compiler identification is Intel 15.0.0.20140723
-- The CXX compiler identification is Intel 15.0.0.20140723
-- Check for working C compiler: /opt/intel/tools/composer_xe_2015.0.090/bin/intel64/icc
-- Check for working C compiler: /opt/intel/tools/composer_xe_2015.0.090/bin/intel64/icc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /opt/intel/tools/composer_xe_2015.0.090/bin/intel64/icpc
-- Check for working CXX compiler: /opt/intel/tools/composer_xe_2015.0.090/bin/intel64/icpc -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Boost found: 1.53.0 -- /usr
CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
  Could NOT find MPI_C (missing: MPI_C_LIBRARIES)
Call Stack (most recent call first):
  /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE)
  /usr/share/cmake/Modules/FindMPI.cmake:587 (find_package_handle_standard_args)
  CMakeLists.txt:205 (find_package)
-- Configuring incomplete, errors occurred!
See also "/lustre/home/rfvander/grappa/build/Make+Release/CMakeFiles/CMakeOutput.log".
```