
Hyak: Node Types on Hyak


There are four main types of Hyak nodes: Login, Build, Interactive, and Execute.

Each has a different function and different levels of connectivity.

We're guaranteed one node allocation at any given time; some node types count towards this allocation and some do not.

Login Node- The first node you encounter upon logging in. Used for file transfers and file manipulation (see the example after the list below).

  • Shell prompt looks like [UNetID@mox2 ~]$
  • This node has internet connectivity.
  • No required command to enter this node type.
  • Not for program compiling or other time- or compute-intensive tasks.
  • Does not count towards node allocation limit.
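
A hedged example of the kind of file transfer the login node is intended for, run from your own computer (the file names are hypothetical, and mox.hyak.uw.edu is assumed here as the Mox login host):

  # Copy a local file up to your Hyak home directory
  scp results.csv UNetID@mox.hyak.uw.edu:~/
  # Pull a file back down from Hyak to the current local directory
  scp UNetID@mox.hyak.uw.edu:~/results.csv .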

Build Node- For downloading and compiling software from external sources.

  • Shell prompt looks like [UNetID@nXXXX ~]$
  • This node has internet connectivity.
  • Command to enter a build node from the Login node prompt: srun -p build --time=h:mm:ss --pty /bin/bash (see the example after this list).
  • Counts towards node allocation limit.
  • Not for compute-intensive tasks.
  • There are a finite number of build nodes, so even if you have node allocation available, there may be a wait.
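
As an illustration, a one-hour build session could be requested like this (the time value is only an example):

  # Request a build node for one hour and drop into an interactive shell on it
  srun -p build --time=1:00:00 --pty /bin/bash
  # The build node has internet access, so downloads work here, e.g.:
  # wget https://example.com/some-tool.tar.gz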

Interactive Node- For testing, short-run/low-power tasks, and experimentation.

  • Shell prompt looks like [UNetID@nXXXX ~]$
  • This node does not have internet connectivity.
  • Command to enter an interactive node from the Login node prompt: srun -p srlab -A srlab --time=h:mm:ss --pty /bin/bash (see the example after this list).
  • Counts towards node allocation limit.
  • Not for compute- or time-intensive tasks; file size/number limits apply.
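
A concrete (illustrative) version of the command above, requesting a 30-minute interactive session on the srlab partition:

  # Request a 30-minute interactive session (time value is only an example)
  srun -p srlab -A srlab --time=0:30:00 --pty /bin/bash
  # The prompt changes from [UNetID@mox2 ~]$ to [UNetID@nXXXX ~]$ once the node is allocated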

Execute Node- For execution of large tasks. The "heavy lifting" node.

  • No shell prompt; accessed only by ssh-ing into the node after a job has been submitted with sbatch. The node number is obtained by running squeue -p srlab from a login node.
  • This node does not have internet connectivity.
  • Allocated via sbatch -p srlab -A srlab myscript.slurm (see the example workflow after this list).
  • Counts towards node allocation limit.
  • Unlike other node types, you don't "interact" with the execute node.
  • Creates slurm-<job#>.out files in the working directory specified in the SLURM execution script. These contain all standard output from the program and can be monitored via cat or tail from a login node.
  • top and other task-manager functions can only be accessed after ssh-ing into the node.
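
Putting the pieces above together, a typical workflow might look like the sketch below. The script name, job name, program, and paths are hypothetical, and the #SBATCH options shown are a minimal example rather than a complete srlab template:

  #!/bin/bash
  ## myscript.slurm -- minimal example batch script (all values illustrative)
  #SBATCH --job-name=myjob
  #SBATCH -p srlab
  #SBATCH -A srlab
  #SBATCH --nodes=1
  #SBATCH --time=2:00:00
  ## Directory the job runs in; slurm-<job#>.out is written here
  #SBATCH --workdir=/gscratch/srlab/myanalysis

  myprogram --input data.fastq

Submitting and monitoring the job from a login node:

  # Submit the script to the srlab partition
  sbatch -p srlab -A srlab myscript.slurm
  # Find which node (nXXXX) the job is running on
  squeue -p srlab
  # Watch the program's standard output as it accumulates
  tail -f /gscratch/srlab/myanalysis/slurm-<job#>.out
  # Optionally ssh to that node to run top and check resource use
  ssh nXXXX
  top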