Transitioning from PBS to Slurm

The original Biowulf cluster ran the PBS batch system. The batch system on Biowulf2 is Slurm. This page contains information to help users make the transition from PBS to Slurm.

In general, a PBS batch script is a bash or csh script that will also work under Slurm. Slurm will attempt to convert PBS directives appropriately, so in many cases you may not need to change your existing PBS batch scripts at all. This works well for scripts with simple PBS directives, e.g. #PBS -m be. Note, however, that PBS environment variables (e.g. cd $PBS_O_WORKDIR) will not be converted by Slurm. For anything more complicated, you should rewrite your batch scripts in Slurm syntax. Batch scripts for parallel jobs in particular should be rewritten for Slurm.
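For example, in a script like the following (a minimal sketch; myprog is a hypothetical application), the #PBS directive would be translated automatically, but the $PBS_O_WORKDIR reference would not:

#!/bin/bash
#PBS -m be                # simple directive; Slurm translates this to --mail-type
cd $PBS_O_WORKDIR         # NOT translated; change by hand to: cd $SLURM_SUBMIT_DIR
./myprog                  # hypothetical application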

For users migrating from Biowulf1 to Biowulf2

There are a few important differences between Biowulf1 and Biowulf2.

| PBS command (Biowulf1) | Allocation | Slurm command (Biowulf2) | Allocation |
|---|---|---|---|
| qsub -l nodes=1 jobscript | 1 node, with min. 2 CPUs and 1 GB memory. Exclusive. | sbatch jobscript | 2 CPUs, 4 GB memory on a shared node. |
| qsub -l nodes=1:g8 | 1 node, 8 GB memory. Exclusive. | sbatch --mem=8g | 2 CPUs, 8 GB memory on a shared node. |
| qsub -l nodes=1,mem=200g,ncpus=4 | 4 CPUs and 200 GB memory on a shared node. | sbatch --mem=200g --cpus-per-task=4 | 4 CPUs, 200 GB memory on a shared node. |
| qsub -l nodes=1:c16:g24 | 1 node, 16 CPUs, 24 GB memory. Exclusive. | sbatch --mem=24g --cpus-per-task=16 | 16 CPUs, 24 GB memory on a shared node. |
| qsub -I -l nodes=1 | Interactive job, 1 node with min. 2 CPUs. Exclusive. | sinteractive --mem=Mg --cpus-per-task=C | Interactive job with C CPUs and M GB of memory on a shared node. |
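For example, to reproduce the third row of the table on Biowulf2, or to request an interactive session with 4 CPUs and 8 GB of memory (the values here are placeholders; substitute your own for M and C):

[biowulf2 ~]$ sbatch --mem=200g --cpus-per-task=4 jobscript
[biowulf2 ~]$ sinteractive --mem=8g --cpus-per-task=4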

Equivalent commands in PBS and Slurm

| Purpose | PBS | Slurm |
|---|---|---|
| Submit a job | qsub jobscript | sbatch jobscript |
| Delete a job | qdel job_id | scancel job_id |
| Delete all jobs belonging to a user | qdel `qselect -u user` | scancel -u user |
| Job status | qstat -u user | squeue -u user (or use the Biowulf2 alias 'sjobs') |
| Show all jobs | qstat -a | squeue |
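A typical submit/check/cancel cycle with these commands might look like this (job_id stands for the numeric ID that sbatch reports at submission):

[biowulf2 ~]$ sbatch jobscript
[biowulf2 ~]$ squeue -u user      # or use the alias 'sjobs'
[biowulf2 ~]$ scancel job_id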

Environment variables

| Purpose | PBS | Slurm |
|---|---|---|
| Job ID | $PBS_JOBID | $SLURM_JOBID |
| Submit directory | $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR |
| Allocated node list | $PBS_NODEFILE | $SLURM_JOB_NODELIST |
| Job array index | $PBS_ARRAY_INDEX | $SLURM_ARRAY_TASK_ID |
| Number of cores/processes | - | $SLURM_CPUS_PER_TASK, $SLURM_NTASKS |
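As a sketch of how these variables are typically used in a batch script (myprog and its -t flag are hypothetical placeholders):

#!/bin/bash
#SBATCH --cpus-per-task=4

cd $SLURM_SUBMIT_DIR                  # Slurm equivalent of cd $PBS_O_WORKDIR
echo "Running job $SLURM_JOBID on $SLURM_JOB_NODELIST"
myprog -t $SLURM_CPUS_PER_TASK        # pass the allocated CPU count to the program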

Job specifications

| Purpose | PBS | Slurm |
|---|---|---|
| Set a wallclock limit | qsub -l nodes=1,walltime=HH:MM:SS | sbatch -t [min] OR -t [days-hh:mm:ss] |
| Standard output file | qsub -o filename / #PBS -o filename | sbatch -o filename / #SBATCH --output filename / #SBATCH -o filename |
| Standard error file | qsub -e filename / #PBS -e filename | sbatch -e filename / #SBATCH --error filename / #SBATCH -e filename |
| Combine stdout/stderr | qsub -j oe / #PBS -j oe | This is the default in Slurm. |
| Location of out/err files | qsub -k oe / #PBS -k oe | Not needed. By default, Slurm writes stdout/stderr files to the directory from which the job is submitted. |
| Export environment to allocated node | qsub -V | sbatch --export=all (default) |
| Export a single variable | qsub -v np=12 | sbatch --export=np=12 |
| Email notifications | qsub -m be / #PBS -m be | sbatch --mail-type=BEGIN\|END\|FAIL\|ALL / #SBATCH --mail-type=ALL |
| Job name | qsub -N jobname -l nodes=1 jobscript / #PBS -N JobName | sbatch --job-name=name jobscript / #SBATCH --job-name=JobName |
| Job restart | qsub -r [y\|n] | sbatch --requeue OR --no-requeue |
| Working directory | - | sbatch --workdir=[dirname] |
| Memory requirement | qsub -l nodes=1:g8 / qsub -l nodes=1,mem=256gb | sbatch --mem=8g / sbatch --mem=256g |
| Job dependency | qsub -W depend=afterany:jobid | sbatch --depend=afterany:jobid |
| Job blocking | qsub -W block=true | no equivalent |
| Job arrays | qsub -J 1-100 jobscript | sbatch --array=1-100 jobscript |
| Licenses | qsub -l nodes=1,matlab=1 | sbatch --licenses=matlab |
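Putting several of these options together, the header of a Slurm batch script might look like the following (a minimal sketch; the job name, limits, filenames, and myprog are placeholders):

#!/bin/bash
#SBATCH --job-name=MyJob
#SBATCH --time=04:00:00
#SBATCH --mem=8g
#SBATCH --cpus-per-task=4
#SBATCH --output=myjob.out
#SBATCH --error=myjob.err
#SBATCH --mail-type=END

cd $SLURM_SUBMIT_DIR
myprog                                # hypothetical application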

Converting a PBS batch script to a Slurm batch script

Defaults:

Slurm will, by default, attempt to understand all PBS options in the batch script. For example, a batch script containing

#PBS -N JobName
will be internally translated by Slurm into
#SBATCH --job-name=JobName
and the job will show up in the squeue output with the job name 'JobName'.

Thus, most of your old Biowulf batch scripts should work in Slurm without problems. For new batch scripts, we recommend that you start using the Slurm options.

Ignore PBS directives:

If you do not want the PBS directives in your batch script to be internally translated by Slurm, use the --ignore-pbs option to Slurm. For example, submitting with:

[biowulf ~]$ sbatch --ignore-pbs  jobscript
will cause Slurm to ignore all #PBS directives in the batch script.
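This is useful, for example, when a script carries both old #PBS directives and new #SBATCH directives; submitted with --ignore-pbs, only the #SBATCH lines take effect (a minimal sketch; myprog is a hypothetical application):

#!/bin/bash
#PBS -N OldName                 # ignored when submitted with --ignore-pbs
#SBATCH --job-name=NewName      # processed by Slurm as usual

myprog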

pbs2slurm:

A script called pbs2slurm can be used to convert your existing PBS batch scripts to Slurm scripts.

Sample session.

[biowulf2 ~]$ pbs2slurm < run1.pbs > run1.slurm

run1.pbs
#!/bin/csh -v
#PBS -N germline
#PBS -m be
#PBS -k oe

cd $PBS_O_WORKDIR
germline -bits 50 -min_m 1 -err_hom 2  <<EOF
1
CEU.22.map
CEU.22.ped
generated
EOF

run1.slurm
#!/bin/csh -v
#SBATCH --job-name="germline"
#SBATCH --mail-type=BEGIN,END


cd $SLURM_SUBMIT_DIR
germline -bits 50 -min_m 1 -err_hom 2 <<EOF
1
CEU.22.map
CEU.22.ped
generated
EOF

Note that the directive #PBS -k oe is not translated. This directive is unnecessary in Slurm, so there is no equivalent. Slurm defaults to writing a single stderr/stdout file to the directory from which the job was submitted. (This Slurm behaviour can be changed with the #SBATCH -o filename and #SBATCH -e filename flags).
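For example, to write stdout and stderr to separate per-job files, you can use the %j pattern, which sbatch expands to the job ID:

#SBATCH --output=myjob_%j.out
#SBATCH --error=myjob_%j.err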

The webpage on the pbs2slurm tool has more details and many examples.