OpenMpp HPC Test Lab

amc1999 edited this page May 12, 2021 · 2 revisions

Obsolete

The HPC cluster Test Lab has not been available since October 2018. The instructions below are outdated, but may still be useful as an example of a development test lab on Linux.

Where are OpenM++ and the HPC cluster Test Lab

  • Download: binary files
  • Source code: openM++ git
  • Documentation: this wiki
  • HPC cluster (test lab): ssh -p 4022 USER@omm.some-where.com

OpenM++ HPC cluster test lab consists of:

  • master node and 2 quad-core computational nodes.
  • all nodes run 64-bit CentOS 7 and Open MPI.
  • computational node names: om1.some-where.com, om2.some-where.com
  • shared directory for executables: /mirror
  • special user to run tests on the cluster: mpi
  • script to run on the cluster: /mirror/omrun
  • cluster hosts description file: /mirror/mpihosts
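
The contents of /mirror/mpihosts are not shown on this page; a typical Open MPI hosts file for the layout above might look like the following sketch (host names are from the list above; the slot counts are an assumption):

```
# hypothetical /mirror/mpihosts -- Open MPI hosts file
# each line: hostname slots=<max processes to launch on that host>
omm.some-where.com slots=1
om1.some-where.com slots=4
om2.some-where.com slots=4
```

A file in this format can be passed to mpirun with the -hostfile option as an alternative to listing hosts explicitly with -H.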

Please read Quick Start for OpenM++ Developers first. Additional information can be found in Linux section of Setup Development Environment.

Login to OpenM++ HPC cluster

To log in to the OpenM++ test lab cluster:

ssh -p 4022 USER@omm.some-where.com
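
If you log in often, an OpenSSH client configuration entry can shorten the command; a minimal sketch, assuming OpenSSH and the host alias omm (the alias name is arbitrary):

```
# hypothetical ~/.ssh/config entry for the test lab master node
Host omm
    HostName omm.some-where.com
    Port 4022
    User USER
```

With this entry in place, the login command becomes simply: ssh omm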

If you are on Windows and using PuTTY, enter the following settings:

server name: omm.some-where.com
port: 4022
Window -> Translation -> Remote Character Set: UTF-8

Check your Environment

To verify that Open MPI is working:

 module load mpi/openmpi-x86_64
 mpirun -H omm,om1,om2 uname -n

Expected output:

 omm.some-where.com
 om1.some-where.com
 om2.some-where.com

To verify the C++ and Open MPI development environment, compile an MPI "Hello, World" program:

#include <iostream>
#include <mpi.h>

using namespace std;

int main(int argc, char ** argv)
{
    int mpiCommSize;    // total number of MPI processes
    int mpiRank;        // rank of this process
    int procNameLen;
    char procName[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);

    MPI_Comm_size(MPI_COMM_WORLD, &mpiCommSize);
    MPI_Comm_rank(MPI_COMM_WORLD, &mpiRank);
    MPI_Get_processor_name(procName, &procNameLen);

    cout << "Process: " << mpiRank << " of " << mpiCommSize << " name: " << procName << endl;

    MPI_Finalize();
    return 0;
}
Save the program as mhw.cpp, then compile and run it (mpiCC is the Open MPI C++ compiler wrapper; on some installations it is named mpic++):

 mpiCC -o /mirror/mhw mhw.cpp
 cd /mirror
 mpirun -H omm,om1,om2 mhw

Setup Your Environment

It is convenient to customize .bashrc to set up your environment:

# .bashrc
#
# ....something already here....
#

# enable MPI
#
source /usr/share/Modules/init/bash
module load mpi/openmpi-x86_64

Tip: If you want a full Linux GUI on the master node, the freeNX client can be a good choice, and Eclipse or NetBeans are excellent IDEs for C++ development.

Build and Run OpenM++

Check out and compile OpenM++:

git clone https://github.com/openmpp/main.git master
cd master/openm/
make OM_MSG_USE=MPI
cd ../models/
make OM_MSG_USE=MPI all publish run

Copy build results to /mirror shared directory:

cp bin/* /mirror

Run the models on the cluster with a different number of subsamples:

 cd /mirror
 mpirun -H omm,om1,om2 -n 4 modelOne -General.Subsamples 4

You will be prompted for the mpi user password; the expected output is similar to:

2013-10-24 12:38:41.0360 Model: modelOne
2013-10-24 12:38:41.0359 Model: modelOne
2013-10-24 12:38:41.0360 Model: modelOne
2013-10-24 12:38:41.0363 Model: modelOne
2013-10-24 12:38:42.0518 Subsample 1
2013-10-24 12:38:42.0518 Subsample 2
2013-10-24 12:38:42.0520 Subsample 3
2013-10-24 12:38:43.0035 Subsample 0
2013-10-24 12:38:43.0062 Reading Parameters
2013-10-24 12:38:43.0062 Reading Parameters
2013-10-24 12:38:43.0062 Reading Parameters
2013-10-24 12:38:43.0063 Reading Parameters
2013-10-24 12:38:43.0066 Running Simulation
2013-10-24 12:38:43.0066 Writing Output Tables
2013-10-24 12:38:43.0066 Running Simulation
2013-10-24 12:38:43.0066 Writing Output Tables
2013-10-24 12:38:43.0066 Running Simulation
2013-10-24 12:38:43.0066 Writing Output Tables
2013-10-24 12:38:43.0066 Running Simulation
2013-10-24 12:38:43.0066 Writing Output Tables
2013-10-24 12:38:44.0198 Done.
2013-10-24 12:38:44.0198 Done.
2013-10-24 12:38:44.0198 Done.
2013-10-24 12:38:44.0200 Done.
