Commit
Add files via upload
mtrahan41 committed Feb 1, 2019
1 parent cb4c0ea commit 2c92882
Showing 4 changed files with 100 additions and 50 deletions.
27 changes: 15 additions & 12 deletions docs/running-jobs/batch-jobs.md
## Batch Jobs and Job Scripting

Batch jobs are by far the most common type of job on Summit. Batch jobs are resource provisions that run applications on nodes away from the user and do not require supervision or interaction. Batch jobs are commonly used for applications that run for long periods of time or require little to no user input.

Batch jobs are created from a job script, which provides the resource requirements and commands for the job.

Submitting a job script can be done with the `sbatch` command:

```bash
sbatch <your-job-script-name>
```

Because job scripts specify the desired resources for your job, you won't need to specify any resources on the command line. You can, however, overwrite or add any job parameter by providing the specific resource as a flag within the `sbatch` command:

```bash
sbatch --partition=sgpu <your-job-script>
Normally job scripts are divided into 3 primary parts: directives, loading software, and user scripting.

#### 1. Directives

A directive is a comment at the top of a job script that gives the shell information about the script.

The first directive, the shebang directive, is always on the first line of any script. It indicates which shell will run the commands in your job. Most users employ bash as their shell, so we will specify bash by typing:

```bash
#!/bin/bash
```

The next directives that must be included with your job script are *sbatch* directives. These directives specify resource requirements to Slurm for a batch job. They must come after the shebang directive and before any commands are issued in the job script. Each directive contains a flag that requests a resource the job needs to complete execution. An sbatch directive is written as follows:

```bash
#SBATCH --<resource>=<amount>
For example, if you wanted to request 2 nodes with an sbatch directive, you would write:
#SBATCH --nodes=2
```
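Multiple sbatch directives are simply stacked at the top of the script, one per line. A sketch of a full directive block, assuming illustrative values for the task count, time limit, job name, and output file:

```bash
#!/bin/bash
# Illustrative resource requests; adjust the values for your own job
#SBATCH --nodes=2
#SBATCH --ntasks=4
#SBATCH --time=00:10:00
#SBATCH --job-name=example-job
#SBATCH --output=example-job.%j.out
```

Slurm reads these comments when the script is submitted; the shell ignores them when the script body runs.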

A list of some useful sbatch directives [can be found here.]() A full list of commands [can be found in Slurm's documentation for sbatch.]()

#### 2. Software

Because jobs run on a different node than the one you submit from, any shared software that is needed must be loaded via the job script. Software can be loaded in a job script just like it would be on the command line. First we will purge all software that may be left behind from a previous job by running the command:

```bash
module purge
```

After this you can load whatever software you need by running the following command:

```bash
module load <software>
```

More information about [software modules can be found here.]()

#### 3. User Scripting

The last part of a job script is the actual user scripting that executes when the job is submitted. This part of the job script includes all user commands that are needed to set up and execute the desired task. Any Linux command can be utilized in this step. Scripting can range from highly complex loops iterating over thousands of files to a simple call to an executable. Below is a simple example of some user scripting:

```bash
echo "== This is the scripting step! =="
```

Job script to run a 5 minute long, 1 node, 1 core C++ job:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:05:00
#SBATCH --output=cpp-job.%j.out

module purge
module load gcc

./example_cpp.exe
```
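The user-scripting portion can be as simple as a single executable call, or it can loop over many input files. A minimal sketch of such a loop, assuming hypothetical directory and file names:

```bash
#!/bin/bash
# Create a scratch directory with a few hypothetical input files
mkdir -p scratch
for i in 1 2 3; do
  echo "data $i" > "scratch/input_$i.txt"
done

# Loop over every input file and report its line count
for f in scratch/input_*.txt; do
  echo "Processing $f: $(wc -l < "$f") line(s)"
done
```

In a real job script, the body of the loop would call your application on each file instead of `wc`.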
Job script to run a 7 minute long, 1 node, 4 core C++ OpenMP job:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:07:00
#SBATCH --output=omp-cpp-job.%j.out

module purge
module load gcc

export OMP_NUM_THREADS=4

./example_omp.exe
```
The MPI example's job script launches its executable across 24 processes with `mpirun`:

```bash
mpirun -np 24 ./example_mpi.exe
```

### Job Flags

The `sbatch` command supports many optional flags. To review all the options, please visit the Slurm [sbatch page](http://slurm.schedmd.com/sbatch.html). Below are a few flags you may want to consider when submitting your job via `sbatch`.

| Type | Description | Flag |
| :--------------------- | :-------------------------------------------------- | :------------------------- |
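Several of these flags can be combined in a single submission. For instance, a sketch in which the script name and flag values are illustrative:

```bash
# Request 2 nodes for 1 hour and name the output file; adjust values to suit
sbatch --nodes=2 --time=01:00:00 --output=job.%j.out <your-job-script>
```

Flags given on the command line override the matching directives inside the script.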
26 changes: 13 additions & 13 deletions docs/running-jobs/interactive-jobs-new.md
## Interactive jobs

Interactive jobs allow a user to interact with applications in real time within an HPC environment. With interactive jobs, users request time and resources to work on a compute node directly. Users can then run graphical user interface (GUI) applications, execute scripts, or run other commands directly on a compute node. Common reasons for running interactive jobs include debugging, designing workflows, or preference in using the GUI interface of an application.

### General Interactive Jobs

<iframe width="560" height="315" src="https://www.youtube.com/embed/s53sjDubBpo" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>

---

To run an interactive job on Research Computing resources, request an interactive session with the `sinteractive` command. The `sinteractive` command creates a job with parameters provided through flags run with the command. After moving through the Slurm queue, the interactive job places the user on the command line of a compute node to interactively use their resource allotment.

Any resource that could be specified in a job script or with `sbatch` can also be used with `sinteractive`. The primary flags we recommend users specify are the `qos` flag and the `time` flag, which set the quality of service and the amount of time for your job, respectively. The `sinteractive` command is run as follows:

```bash
sinteractive --qos=interactive --time=00:10:00
```

This will submit an interactive job to the Slurm queue that starts a terminal session running on one core of one node with the interactive quality of service for ten minutes. Once the session has started, you can run any application or script you need from the command line. For example, if you type `python` you will open an interactive Python shell on a compute node (rather than on the login nodes, where this is forbidden).

### Interactive GUI Applications

<iframe width="560" height="315" src="https://www.youtube.com/embed/DFnHsMxPC5w" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>

---

To run an interactive GUI application on Summit, we must install an X windows server application and enable X11 forwarding on our personal computer.

#### Windows setup

On Windows we must first install an X windows server application to allow Summit to forward the GUI information to your local system. For Windows, we will use an application called Xming to accomplish this. [Download Xming here](https://sourceforge.net/projects/xming/).

Next we must enable X11 forwarding in the PuTTY application. [Download and install PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) if you have not done so already.
In the X11 menu, check the "Enable X11 Forwarding" checkbox and type "localhost:0" in the "X display location" field.

#### macOS setup

Using macOS, we will also need to install an X windows server application to allow Summit to forward GUI information to your local system. For Mac, we will use an application called XQuartz to accomplish this. [Download and install XQuartz here](https://www.xquartz.org/).

Opening the application will bring up a terminal window. In this window, ssh to login.rc.colorado.edu as you normally would, except include the `-X` flag:

```bash
ssh -X your_rc-username@login.rc.colorado.edu
```

#### Running GUI Applications

Once you have logged into Summit with X11 forwarding enabled, you will be able to initialize a GUI application by starting an interactive job and running your selected application. The X windows server application installed on your local system will display the windows generated on the cluster.

If you plan on submitting your interactive job from a compile node, you must also enable X11 forwarding when you ssh into scompile:

```bash
ssh -X scompile
```

From here you will be able to submit your interactive job as normal, and X11 forwarding will carry through to the job.


