NIH HPC News & Announcements
Infiniband-connected nodes on Biowulf2
Date: 10 September 2015 08:09:41
From: Steven Fellini
The Biowulf1 "ib" nodes have been transitioned to Biowulf2,
so now there are now two types of Infiniband-connected nodes
available for general use on Biowulf2. These nodes should be
allocated _only_ for programs which have compiled to run on
Infiniband networks.
To allocate these nodes, specify the appropriate partition switch
to sbatch:
FDR (56 Gb/s), 32 cpus per node, 64 GB memory: --partition=ibfdr
DDR (16 Gb/s), 8 cpus per node, 8 GB memory: --partition=ibddr
(Note: the DDR nodes do _not_ have hyperthreading turned on;
however, you can continue to use --ntasks-per-core=1 and Slurm
will assign 8 compute threads per node.)
You can use the 'freen' command to check availability of these
node types.
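For instance, assuming the partition names appear in freen's listing,
you could filter its output to show just the Infiniband partitions:

    freen | grep -E 'ibfdr|ibddr'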