NIH HPC News & Announcements
Changes coming to Biowulf Batch System
Date: 18 July 2016 09:07:51
From: Steven Fellini
Biowulf Users,
In August, as part of the next expansion of the NIH Biowulf Cluster,
we will make a number of changes to the SLURM batch scheduler (we will
announce the exact date for the changes at a later time). These
changes will allow the expanded system to schedule jobs more efficiently;
they are based on our experience with SLURM to date as well as on users'
input.
The changes include one new partition, the elimination of three others, and
revised default and maximum time limits (DefWalltime, MaxWalltime):
norm partition (default).
-------------------------
DefWalltime reduced from 4 hours to 2 hours; MaxWalltime remains 10 days.
New: only jobs running on a single node can be submitted to the norm
partition.
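For example, a single-node job that needs more than the new 2-hour default
could be submitted along these lines (the script name and resource values
below are placeholders, not recommendations):

    sbatch --partition=norm --cpus-per-task=8 --mem=16g --time=1-00:00:00 myjob.sh
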
multinode partition. New.
-------------------------
This partition is intended for parallel jobs that require 2 or more nodes;
single-node jobs will not be allowed to run in this partition. DefWalltime
is 8 hours; MaxWalltime is 10 days. All nodes in the multinode partition
will be connected to an FDR InfiniBand network.
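As a sketch, a parallel job spanning several nodes might be submitted like
this (the script name, node/task counts, and walltime are illustrative only):

    sbatch --partition=multinode --nodes=4 --ntasks-per-node=16 --time=2-00:00:00 mpi_job.sh
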
Users with short (< 8 hours walltime) multinode jobs can also take advantage
of the new 'turbo' QoS (Quality of Service). This QoS will have a
substantially increased MaxCPUsPerUser and Priority. Add '--qos=turbo' to
your sbatch command to use this QoS.
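For instance, a short two-node job could request the new QoS along these
lines (script name and resource values are again illustrative):

    sbatch --partition=multinode --nodes=2 --ntasks-per-node=16 --time=6:00:00 --qos=turbo mpi_job.sh
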
b1 partition. Eliminated.
-------------------------
The b1 nodes will be merged into the quick partition.
ibfdr, ibqdr partitions. Eliminated.
------------------------------------
ibfdr nodes will be merged into the multinode partition. ibqdr nodes will be
merged into the quick partition (without IB connectivity).
quick partition.
----------------
DefWalltime reduced from 2 hours to 1 hour; MaxWalltime remains 2 hours.
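Jobs needing more than the new 1-hour default should request their walltime
explicitly, up to the 2-hour maximum; for example (script name is a
placeholder):

    sbatch --partition=quick --time=2:00:00 myjob.sh
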
interactive, largemem, unlimited, gpu, ccr, ccrclin, niddk, nimh partitions.
----------------------------------------------------------------------------
Unchanged.
You can use the 'batchlim' command to determine the current maximum number
of CPUs per user for each partition.
For a summary of these changes, please visit
hpc.nih.gov/docs/aug2016-changes.html