The original Biowulf cluster ran the PBS batch system. The batch system on Biowulf2 is Slurm. This page contains information to help users make the transition from PBS to Slurm.
In general, a PBS batch script is a bash or csh script that will work in Slurm, since Slurm will attempt to convert PBS directives appropriately. In many cases you may not need to change your existing PBS batch scripts at all; this works well for scripts with simple PBS directives, e.g. #PBS -m be. Note, however, that PBS environment variables (e.g. cd $PBS_O_WORKDIR) will not be converted by Slurm. For anything more complicated, you should rewrite your batch scripts in Slurm syntax; batch scripts for parallel jobs in particular should be rewritten for Slurm.
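As an illustration, here is a minimal sketch (the program being run and its input/output files are hypothetical): Slurm will translate the #PBS -m be directive automatically, but the cd $PBS_O_WORKDIR line must be changed by hand.

  #!/bin/bash
  # Old PBS-style script: Slurm translates the #PBS directive,
  # but $PBS_O_WORKDIR is not set under Slurm.
  #PBS -m be
  cd $PBS_O_WORKDIR
  ./myprog > myprog.out

  #!/bin/bash
  # The same script rewritten in Slurm syntax.
  #SBATCH --mail-type=BEGIN,END
  cd $SLURM_SUBMIT_DIR
  ./myprog > myprog.out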
There are a few important differences between Biowulf1 and Biowulf2.
sbatch --time=16:00:00 jobscript
will set a walltime of 16 hrs. Type batchlim to see the current walltime limits on partitions, or see the System Status page.
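The walltime can also be requested with a directive inside the batch script itself; a minimal sketch (the program being run is hypothetical):

  #!/bin/bash
  #SBATCH --time=16:00:00    # 16 hr walltime, equivalent to sbatch --time=16:00:00
  ./myprog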
Command (PBS, Biowulf1) | Allocation | Command (Slurm, Biowulf2) | Allocation
qsub -l nodes=1 jobscript | 1 node, with min. 2 CPUs and 1 GB memory. Exclusive. | sbatch jobscript | 2 CPUs, 4 GB memory on a shared node.
qsub -l nodes=1:g8 | 1 node, 8 GB memory. Exclusive. | sbatch --mem=8g | 2 CPUs, 8 GB memory on a shared node.
qsub -l nodes=1,mem=200g,ncpus=4 | 4 CPUs and 200 GB memory on a shared node. | sbatch --mem=200g --cpus-per-task=4 | 4 CPUs, 200 GB memory on a shared node.
qsub -l nodes=1:c16:g24 | 1 node, 16 CPUs, 24 GB memory. Exclusive. | sbatch --mem=24g --cpus-per-task=16 | 16 CPUs, 24 GB memory on a shared node.
qsub -I -l nodes=1 | Interactive job, 1 node with min. 2 CPUs. Exclusive. | sinteractive --mem=Mg --cpus-per-task=C | Interactive job with C CPUs and M GB of memory on a shared node.
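For example, the last two rows of the table above correspond to commands like the following (the job script name is hypothetical, and concrete values are substituted for M and C):

  # Batch job: 16 CPUs and 24 GB of memory on a shared node
  [biowulf ~]$ sbatch --cpus-per-task=16 --mem=24g jobscript

  # Interactive session: 4 CPUs and 8 GB of memory on a shared node
  [biowulf ~]$ sinteractive --cpus-per-task=4 --mem=8g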
Purpose | PBS | Slurm
Submit a job | qsub jobscript | sbatch jobscript
Delete a job | qdel job_id | scancel job_id
Delete all jobs belonging to a user | qdel `qselect -u user` | scancel -u user
Job status | qstat -u user | squeue -u user, or use the Biowulf2 alias 'sjobs'
Show all jobs | qstat -a | squeue

Environment variables:
Job ID | $PBS_JOBID | $SLURM_JOBID
Submit directory | $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR
Allocated node list | $PBS_NODEFILE | $SLURM_JOB_NODELIST
Job array index | $PBS_ARRAY_INDEX | $SLURM_ARRAY_TASK_ID
Number of cores/processes | - | $SLURM_CPUS_PER_TASK, $SLURM_NTASKS

Job specifications:
Set a wallclock limit | qsub -l nodes=1,walltime=HH:MM:SS | sbatch -t [min] or sbatch -t [days-hh:mm:ss]
Standard output file | qsub -o filename or #PBS -o filename | sbatch -o filename or #SBATCH --output filename (#SBATCH -o filename)
Standard error file | qsub -e filename or #PBS -e filename | sbatch -e filename or #SBATCH --error filename (#SBATCH -e filename)
Combine stdout/stderr | qsub -j oe or #PBS -j oe | This is the default.
Location of out/err files | qsub -k oe or #PBS -k oe | Not needed. By default, Slurm writes stdout/stderr files to the directory from which the job is submitted.
Export environment to allocated node | qsub -V | sbatch --export=all (default)
Export a single variable | qsub -v np=12 | sbatch --export=np
Email notifications | qsub -m be or #PBS -m be | sbatch --mail-type=BEGIN/END/FAIL/ALL or #SBATCH --mail-type=ALL
Job name | qsub -N jobname -l nodes=1 jobscript or #PBS -N JobName | sbatch --job-name=name jobscript or #SBATCH --job-name=JobName
Job restart | qsub -r [y|n] | sbatch --requeue or sbatch --no-requeue
Working directory | - | sbatch --workdir=[dirname]
Memory requirement | qsub -l nodes=1:g8 or qsub -l nodes=1,mem=256gb | sbatch --mem=8g or sbatch --mem=256g
Job dependency | qsub -W depend=afterany:jobid | sbatch --depend=afterany:jobid
Job blocking | qsub -W block=true | no equivalent
Job arrays | qsub -J 1-100 jobscript | sbatch --array=1-100 jobscript
Licenses | qsub -l nodes=1,matlab=1 | sbatch --licenses=matlab
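Putting several of the directive forms above together, a new-style batch script might look like the following minimal sketch (the job name, file names, resource amounts, and program being run are all hypothetical):

  #!/bin/bash
  #SBATCH --job-name=MyJob          # was: #PBS -N MyJob
  #SBATCH --output=MyJob.out        # was: #PBS -o MyJob.out
  #SBATCH --error=MyJob.err         # was: #PBS -e MyJob.err
  #SBATCH --mail-type=BEGIN,END     # was: #PBS -m be
  #SBATCH --time=04:00:00           # 4 hr wallclock limit
  #SBATCH --cpus-per-task=4         # 4 CPUs
  #SBATCH --mem=8g                  # 8 GB memory

  cd $SLURM_SUBMIT_DIR              # was: cd $PBS_O_WORKDIR
  ./myprog > myprog.out

As with the simple cases above, such a script is submitted with 'sbatch jobscript'.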
Defaults:
Slurm will, by default, attempt to understand all PBS options in the batch script. For example, a batch script containing
#PBS -N JobName
will be internally translated by Slurm into
#SBATCH --job-name=JobName
and the job will show up in the squeue output with the job name 'JobName'.
Thus, most of your old Biowulf batch scripts should work in Slurm without problems. For new batch scripts, we recommend that you start using the SLURM options.
Ignore PBS directives:
If you do not want the PBS directives in your batch script to be internally translated by Slurm, use the --ignore-pbs option to Slurm. For example, submitting with:
[biowulf ~]$ sbatch --ignore-pbs jobscript
will cause Slurm to ignore all #PBS directives in the batch script.