Slurm Workload Manager (Fimm)


==Overview==

Slurm is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
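These three functions map directly onto the commands described below. As a minimal illustration (the node count and time limit are arbitrary), an interactive allocation can be requested, used for a job step, and released:

salloc -N 1 --time=00:10:00   # request 1 node for 10 minutes
srun hostname                 # run a job step on the allocated node
exit                          # release the allocation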

==Commands==

sinfo - reports the state of partitions and nodes managed by Slurm.

squeue - reports the state of jobs or job steps.

scontrol show partition - displays the configuration of the available partitions.

sbatch - submits a job script for later execution.

scancel - cancels a pending or running job or job step.

srun - submits a job for execution or initiates job steps in real time.

Man pages exist for all Slurm daemons, commands, and API functions. The command option --help also provides a brief summary of options. Note that the command options are all case sensitive.
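As a quick illustration of these commands (the job ID 12345 is hypothetical):

sinfo                      # list partitions and node states
squeue -u $USER            # list your own jobs
scontrol show partition    # show partition configuration in detail
scancel 12345              # cancel job 12345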

==batch script==

#!/bin/bash
# CPU accounting is not enforced currently.
#SBATCH -A <account>
# request 2 tasks in total, both placed on one node
#SBATCH -n 2
#SBATCH --ntasks-per-node=2
# walltime limit (hh:mm:ss)
#SBATCH --time=00:30:00
srun ./my_program
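A sketch of submitting and monitoring this script, assuming it has been saved as job.sh:

sbatch job.sh
squeue -u $USER

By default, sbatch writes the job's standard output and error to a file named slurm-<jobid>.out in the directory the job was submitted from.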

==MPI program==

#!/bin/bash
# CPU accounting is not enforced currently.
#SBATCH -A <account>
# request 2 nodes
#SBATCH -N 2
# use --exclusive to get whole nodes exclusively for this job
#SBATCH --exclusive
#SBATCH --time=01:00:00
# allocate 2 CPUs per task
#SBATCH -c 2
# launch 10 MPI tasks (10 x 2 = 20 CPUs) across the 2 nodes
srun -n 10 ./mpi_program
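The same resource geometry can also be requested without a batch script by passing the options directly to srun (a sketch, using the same placeholder account):

srun -A <account> -N 2 -n 10 -c 2 --exclusive --time=01:00:00 ./mpi_program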