🖥️ SLURM Guide

A simple guide to using SLURM (Simple Linux Utility for Resource Management) on KFUPM clusters.


Current Cluster Setup

📊 Partition Details

| Partition | Purpose       | Time Limit | Nodes      | GPUs             |
|-----------|---------------|------------|------------|------------------|
| Normal*   | Large models  | 24 hours   | server02   | 6x A100          |
| RTX3090   | GPU computing | 24 hours   | jrcai01-02 | 2x (2x RTX 3090) |
| LoginNode | Access only   | -          | jrcai23    | Login access     |

\* Default partition
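
To see these partitions and their live state from the command line, you can query SLURM directly (the exact output format depends on the cluster configuration):

```bash
# List all partitions with their time limits and node states
sinfo

# Show only the RTX3090 partition
sinfo --partition=RTX3090
```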

👥 Group Management

  • Advisor Groups: Each advisor has a group containing their students
  • Shared Storage: Each group is hard-limited to 1 TB of shared disk space
  • Job Limits:
    • Normal Partition (default): 1 job per group at a time
    • RTX3090 Partition: Groups can run 1 additional GPU job by explicitly requesting this partition (see the sketch below)
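
A minimal sketch of submitting that additional RTX3090 job; the script name and GPU count are illustrative assumptions, not cluster policy:

```bash
# Request the non-default RTX3090 partition and one GPU (illustrative values)
sbatch --partition=RTX3090 --gres=gpu:1 my_job.sh
```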

📖 Documentation

Learn how to connect to the SLURM cluster using:

  • SSH Terminal
  • Visual Studio Code
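
For reference, a typical first connection over SSH looks like this; the username and hostname are placeholders, so use the login node details from your registration email:

```bash
# Connect to the cluster login node (placeholder credentials)
ssh your_username@login-node-address
```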

Complete guide covering:

  • Monitoring commands
  • Job submission
  • Data transfer
  • Account management
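
As a quick reference for these topics, here are a few of the standard commands involved; all are stock SLURM or Unix tools, shown with illustrative flags and placeholder paths:

```bash
# Monitoring: list your own queued (PD) and running (R) jobs
squeue -u $USER

# Monitoring: show accounting history for your recent jobs
sacct -u $USER

# Data transfer: copy a local folder to the cluster (placeholder host/path)
rsync -avz ./data/ your_username@login-node-address:/path/to/project/
```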

📋 Basic SLURM Workflow

```mermaid
graph TD
    A[Write Job Script] --> B[Submit with sbatch]
    B --> C[Job Queued - Status: PD]
    C --> D{Resources Available?}
    D -->|No| C
    D -->|Yes| E[Job Running - Status: R]
    E --> F[Job Complete]
    F --> G[Check Results]
```
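
Putting the workflow together, here is a minimal end-to-end session; the script contents, resource requests, and the `python train.py` workload are illustrative assumptions:

```bash
# 1. Write a job script (illustrative resources and workload)
cat > my_job.sh << 'EOF'
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --time=01:00:00
#SBATCH --gres=gpu:1
#SBATCH --output=%x_%j.out

python train.py   # replace with your actual workload
EOF

# 2. Submit the script; sbatch prints the assigned job ID
sbatch my_job.sh

# 3. Watch the job move from PD (pending) to R (running)
squeue -u $USER

# 4. After completion, inspect the output file (%x = job name, %j = job ID)
cat example_*.out
```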

Getting Help

Contact Information:

  • System Administrator: Contact the JRCAI support team
  • Technical Issues: mohammed.sinan@kfupm.edu.sa
  • Account Problems: Submit a ticket through the appropriate support channels

Last Updated: 16/9/2025
By: Mohammed AlSinan (mohammed.sinan@kfupm.edu.sa)

Login Node: (check your email/registration details)
