This repository is dedicated to the development of an automated script that launches the starvz analysis of trace files using a ready-to-use Docker image.
You can obtain this image with the following command:
sudo docker pull hcpsilva/starvz
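If you want to confirm the image landed locally, something like the following should do it:
sudo docker images hcpsilva/starvz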
Remember that Docker does have all sorts of issues involving privilege escalation and such, beware!
If you haven't downloaded the image by the time you run the script, the script itself will pull the image.
Alternatively, you can either:
- build my image locally; check Building the image for details
- pass another image as a parameter to the script; check Usage for details
- pass an image tarball as your parameter; check Unprivileged mode for details
And you can obtain this repository and its scripts for running data analysis by doing:
git clone https://github.com/hcpsilva/starvz-docker.git
You can build my image using either Docker or Charliecloud.
- With Docker all you have to do is the following:
cd starvz-docker/
sudo docker build --squash . -t hcpsilva/starvz
You can drop the --squash option, as it is an experimental feature; I included it because it significantly reduces the final image size.
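If your Docker daemon doesn't have experimental features enabled, building without it is simply:
sudo docker build . -t hcpsilva/starvz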
- With Charliecloud you do the following (where $target_dir is the directory ch-grow will use as its image storage):
cd starvz-docker/
ch-grow -s $target_dir -t hcpsilva/starvz .
This script supports the use of unprivileged containers through Charliecloud, an HPC-oriented alternative to Docker. Charliecloud itself just runs container images in an unprivileged manner, avoiding Docker's privilege-escalation issues and thus making it viable in scenarios such as HPC clusters. While running in this mode you won't be asked to run as super-user.
Here I’m hoping to give you a mini-introduction on how to use this method with my script.
To enable unprivileged user namespaces, which let user programs run containers in a safer way, I had to run the following command:
sudo sysctl kernel/unprivileged_userns_clone=1
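This setting doesn't persist across reboots on its own. One way to make it permanent, assuming your distribution reads drop-in files from /etc/sysctl.d/ (the file name below is just an example), is something like:
# persist the user namespace setting (example file name)
echo 'kernel.unprivileged_userns_clone=1' | sudo tee /etc/sysctl.d/90-userns.conf
# reload all sysctl configuration
sudo sysctl --system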
To install it you have three options:
- Install from source
git clone --recursive https://github.com/hpc/charliecloud.git
cd charliecloud/
make
sudo make install PREFIX=/usr/local
- Install from your package manager
The following have the package available: Debian, Gentoo, openSUSE, RPM-based distributions, and Nix.
- Install with spack
spack is also a piece of software intended to be used in HPC environments. It's a package manager that doesn't need root permissions to install packages and features loads of software commonly used in scientific computing. I encourage you to read about it at https://spack.io/.
Given that you have it in your PATH, just use the following command:
spack install charliecloud
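Whichever route you pick, you can check that the Charliecloud binaries ended up in your PATH with, for example:
ch-run --version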
After installing Charliecloud, you can convert your already existing Docker images to tarballs with the following commands:
image_name=hcpsilva/starvz
target_directory=.
ch-builder2tar $image_name $target_directory
After that you can distribute the flattened image to your target nodes or elsewhere.
My script has an option to pass an arbitrary image tarball instead of the default, so all you have to do is pull my image and flatten it.
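For example, assuming the flattening step left a tarball called starvz.tar.gz in the current directory (the actual file name depends on your Charliecloud version) and that your traces live under a made-up /home/you/traces, the call would look like:
./starvz --tarball=./starvz.tar.gz /home/you/traces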
This repository consists of two main things:
- A Dockerfile containing a StarPU data-analysis-ready environment;
- An executable-like script that runs the starvz analysis in a given folder.
To run any of those please follow the instructions given in Installation.
Having said that, you can run some data analysis by using the script called starvz, which is in the root of this repository.
./starvz [options] <full_path_to_the_traces>
By running this command you'll launch the first phase of starvz's analysis.
The possible options are as follows:
USAGE:
  COMMAND:
    ./starvz [OPTIONS] <DIRECTORY>

  WHERE <DIRECTORY> is the *full* path to the directory containing the fxt traces

  WHERE [OPTIONS] can be any of the following, in no particular order:
    -h | --help
        shows this message and exits
    -i | --image=image_name:tag
        use a remote image instead of the default hcpsilva/starvz:latest
    -t | --tarball=path/to/the/image.tar.gz
        use an image tarball instead of the default hcpsilva/starvz:latest
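As an example, using a hypothetical remote image some_user/some_image:tag and a made-up trace directory:
./starvz --image=some_user/some_image:tag /home/you/experiments/fxt_traces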
You can contact me at:
hcpsilva@inf.ufrgs.br