Deterministic Annealing Multidimensional Scaling (DA-MDS) is a high-performance implementation of the WDA-SMACOF algorithm. It has been used in projects such as:
- Million Sequence Clustering at http://salsahpc.indiana.edu/millionseq/
- The Fungi Phylogenetic Project at http://salsafungiphy.blogspot.com/
- Operating System
- DA-MDS is extensively tested and known to work on the following systems:
- Red Hat Enterprise Linux Server release 6.7 (Santiago)
- Red Hat Enterprise Linux Server release 5.10 (Tikanga)
- Ubuntu 12.04.3 LTS
- Ubuntu 12.10
- This may work on Windows systems depending on the ability to set up OpenMPI properly; however, this has not been tested, and we recommend choosing a Linux-based operating system instead.
- Java
- Download Oracle JDK 8 from http://www.oracle.com/technetwork/java/javase/downloads/index.html
- Extract the archive to a folder.
- Set the following environment variables:
```
export JAVA_HOME PATH
```
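The export above assumes both variables were assigned first; a minimal sketch, where the JDK path is a hypothetical example to adjust for your system:

```shell
# Hypothetical install location; point this to where you extracted the JDK
JAVA_HOME=/opt/jdk1.8.0_131
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
```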
- Apache Maven
- Download the latest Maven release from http://maven.apache.org/download.cgi
- Extract it to some folder and set the following environment variables:
```
export MVN_HOME PATH
```
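As with the JDK, the variables need to be assigned before being exported; a sketch with a hypothetical Maven location:

```shell
# Hypothetical install location; point this to where you extracted Maven
MVN_HOME=/opt/apache-maven-3.3.9
PATH=$MVN_HOME/bin:$PATH
export MVN_HOME PATH
```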
- OpenMPI
We recommend using OpenMPI 1.10.1, although DA-MDS works with the previous 1.8 versions as well. Note: if using a version other than 1.10.1, please remember to update the corresponding Maven dependency accordingly.
Download OpenMPI 1.10.1 from http://www.open-mpi.org/software/ompi/v1.10/downloads/openmpi-1.10.1.tar.gz
Extract the archive to a folder. Also, create a directory named `build` in some location; we will use this to install OpenMPI.
Set the following environment variables
```
export BUILD OMPI_1101 PATH LD_LIBRARY_PATH
```
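Here, `OMPI_1101` should point to the extracted OpenMPI source and `BUILD` to the install directory created above. A sketch with hypothetical paths:

```shell
# Hypothetical locations; adjust to your extracted source and build directories
OMPI_1101=/opt/openmpi-1.10.1
BUILD=/opt/build/openmpi-1.10.1
PATH=$BUILD/bin:$PATH
LD_LIBRARY_PATH=$BUILD/lib:$LD_LIBRARY_PATH
export BUILD OMPI_1101 PATH LD_LIBRARY_PATH
```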
- The instructions to build OpenMPI depend on the platform. Therefore, we highly recommend looking into the `$OMPI_1101/INSTALL` file. Platform-specific build files are available in the distribution as well.
- In general, please specify `--enable-mpi-java` as an argument to the `configure` script. If Infiniband is available (highly recommended), specify `--with-verbs=<path-to-verbs-installation>`. Usually, the path to the verbs installation is `/usr`. In summary, the following commands will build OpenMPI for a Linux system.
```
./configure --prefix=$BUILD --enable-mpi-java
make
make install
```
- If everything goes well, `mpirun --version` will show `mpirun (Open MPI) 1.10.1`. Execute the following command to install `$OMPI_1101/ompi/mpi/java/java/mpi.jar` as a Maven artifact.
```
mvn install:install-file -DcreateChecksum=true -Dpackaging=jar -Dfile=$OMPI_1101/ompi/mpi/java/java/mpi.jar -DgroupId=ompi -DartifactId=ompijavabinding -Dversion=1.10.1
```
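After installing the artifact, a project can reference it with a dependency entry like the following (the coordinates are taken from the install command above):

```xml
<dependency>
  <groupId>ompi</groupId>
  <artifactId>ompijavabinding</artifactId>
  <version>1.10.1</version>
</dependency>
```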
- A few examples are available in `$OMPI_1101/examples`. Please use `mpijavac` with other parameters similar to the `javac` command to compile OpenMPI Java programs. Once compiled, the `mpirun [options] java -cp <classpath> class-name arguments` command, with proper values set as arguments, will run the MPI Java program.
- Check that all prerequisites are satisfied before building DA-MDS.
- Clone this git repository from `firstname.lastname@example.org:DSC-SPIDAL/damds.git`. Let's call this directory `damdshome`.
- Once the above two steps are completed, building DA-MDS requires only one command, `mvn install`, issued within `damdshome`.
Note: if you have not built the https://github.com/DSC-SPIDAL/common library locally, please follow the instructions below to build this project with Maven. This is needed because of an SSL certificate issue with a dependency's Maven repository. Execute the following command from the root directory of the repo:
```
keytool -import -file ./resources/ricecert/cs.rice.edu.cer -keystore /tmp/riceKeyStore
```
You can change the name of the key store and the path to it if you prefer. This command will first ask for a password; provide any password of your choosing with at least 6 characters. It will then show the following prompt:

```
Trust this certificate? [no]:
```

Type "y" and press enter. Now the certificate has been properly installed. Next, use the following command to compile the code:
```
mvn -Djavax.net.ssl.trustStore=/tmp/riceKeyStore clean install
```
The following shell script may be used with necessary modifications to run the program.
```
# Java classpath. This should include paths to damds dependent jar files and the damds-1.0.jar
# The dependent jar files may be obtained by running the mvn dependency:build-classpath command within damdshome
# Obtain working directory
# Character x as a variable
# A text file listing available nodes
# Number of nodes
# Number of cores per node
# Options for Java runtime
# Number of threads to use within one damds process
# Number of processes per node
# Total parallelism expressed as a pattern TxPxN,
# where T is the number of threads per process, P is processes per node, and N is total nodes
# Number of computing units assigned per process, assuming $cpn is divisible by $ppn
# Directory to memory map. Ideally, set this to where tmpfs is in Linux

echo "Running $pat on `date`" >> status.txt

# Invoke MPI to run damds
mpirun --report-bindings --mca btl ^tcp --hostfile $hostfile --map-by ppr:$ppn:node:PE=$bw --rank-by core -np $(($nodes*$ppn)) java $jopts -cp $cp edu.indiana.soic.spidal.damds.Program -c config$pat.properties -n $nodes -t $tpn -mmaps $mmaps -mmapdir $mmapdir | tee $pat/mds-out.txt

echo "Finished $pat on `date`" >> status.txt
```
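The derived variables in the script above can be computed from the basic ones; a sketch with hypothetical cluster values:

```shell
# Hypothetical cluster settings; adjust to your environment
nodes=4        # total nodes
cpn=24         # cores per node
ppn=8          # processes per node
tpn=3          # threads per process

# Computing units (cores) bound to each process; assumes cpn is divisible by ppn
bw=$(( cpn / ppn ))

# Total parallelism expressed as the TxPxN pattern
pat="${tpn}x${ppn}x${nodes}"

echo "$bw $pat"
```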
The arguments listed in the `mpirun` command fall into three categories.
- OpenMPI Runtime Parameters
  - `--report-bindings` requests the OpenMPI runtime to output how processes are mapped to processing elements (cores) in the allocated nodes.
  - `--mca btl ^tcp` instructs OpenMPI to disable TCP, which is useful when running on Infiniband.
  - `--hostfile` indicates the file listing available nodes. Each node has to be in a separate line.
  - `--map-by ppr:$ppn:node:PE=$bw` controls process mapping and binding. This is a topic in its own right, but the specific values in this example request processes to be mapped by node while binding each to `$bw` processing elements. A good set of slides on this topic is available at http://www.slideshare.net/jsquyres/open-mpi-explorations-in-process-affinity-eurompi13-presentation
  - `-np $(($nodes*$ppn))` determines the total number of processes to run; in this case, it is equal to the number of nodes multiplied by the number of processes per node.
- Java Runtime Parameters
  - `$jopts` in this case lists the initial and maximum heap sizes for a JVM instance.
  - `-cp` indicates the paths to find required classes, where each entry is separated by a colon (`:`) on Linux.
- Program (damds) Parameters
  - `-c` points to the configuration file. This is a Java properties file listing values for each parameter that damds requires. Details on these parameters follow in a later section.
  - `-n` indicates the total number of nodes.
  - `-t` denotes the number of threads to use within one instance of damds.
  - `-mmaps` is the number of memory maps to use. Set this to 1 for the best performance.
  - `-mmapdir` points to the directory where memory maps should be created. Ideally, it should point to a `tmpfs` directory in Linux.
The following list summarizes the parameters used in damds.

- Path of the pairwise distance file.
- Path of the pairwise weight matrix file or the simple linear weights text file.
- Path of the points' labels file.
- Path of the initial points file.
- Path of the output points file.
- Path of the output timing information file.
- Path of the output summary file.
- Total number of data points.
- Target dimension of the output points.
- Raise distances to the power of DistanceTransform.
- The minimum temperature factor.
- The maximum number of stress loops to run.
- The maximum number of conjugate gradient loops.
- The flag to determine if Sammon distances should be used.
- The block size to use in block matrix multiplication.
- The flag to indicate the endianness of the binary distance file.
- The flag to indicate if memory-mapped files should be used to load data.
- The path of the jar file containing additional distance transformations.
- The path of the jar file containing additional weight transformations.
- The number of repetitions (see below).
- The maximum number of temperature loops (see below).
- The flag to indicate if weights are read from a simple linear file.
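The configuration is a standard Java properties file of key=value lines. As a minimal, hypothetical illustration using only the parameter names mentioned in this document (DistanceTransform, Repetitions, MaxTempLoops); consult a sample config shipped with damds for the full set of keys:

```properties
# Raise distances to the power of DistanceTransform
DistanceTransform=1.0
# Tile the input matrix to simulate a larger data set
Repetitions=1
# 0 runs to full convergence; a positive value caps the temperature loops
MaxTempLoops=0
```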
Repetitions is a quick way to test large data sizes using smaller original
distance and weight files. For example, with an NxN data set and a
repetition value of 2, DA-MDS will do a 2Nx2N run. It does so by tiling the NxN matrix 4 times
(2 horizontally and 2 vertically).
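The tiling can be illustrated with a tiny sketch: a hypothetical 2x2 matrix, repeated twice in each direction, yields a 4x4 matrix made of 4 identical tiles.

```shell
# Hypothetical 2x2 "distance matrix", one row per line
row1="1 2"
row2="3 4"

# Repetitions = 2: tile 2 horizontally and 2 vertically -> 4x4
for r in "$row1" "$row2" "$row1" "$row2"; do
  echo "$r $r"
done
```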
MaxTempLoops allows testing the program for performance without running to
full convergence, by allowing it to run only the specified number of temperature loops.
Setting this to 0 will disable the limit and do the full run.
We would like to express our sincere gratitude to Prof. Vivek Sarkar and his team at Rice University for giving us access to, and continuous support for, the HJ library. We are also thankful to the FutureSystems project and its support team for their support with HPC systems. Also, we thank Intel for their support of the Juliet cluster system that we used to test DA-MDS. Last but not least, the OpenMPI community deserves equal recognition for their valuable support.

We would also like to thank the following companies for providing us open-source licenses for their profiler software.

- ej-technologies, the creator of JProfiler (http://www.ej-technologies.com/products/jprofiler/overview.html)