NIH HPC News & Announcements
Additional GPUs now available on NIH Biowulf Cluster
Date: 20 February 2018 13:02:50
From: Steven Fellini
Biowulf Users,
The NIH HPC staff is pleased to announce the addition of 48 new compute nodes, each of which is configured with 4 NVIDIA P100 GPUs. For details on the P100 see images.nvidia.com/content/tesla/pdf/nvidia-tesla-p100-PCIe-datasheet.pdf; for details on the Biowulf P100 node configuration see hpc.nih.gov/systems/.
To allocate P100 nodes, use the --gres option to sbatch or swarm with a resource type of "p100" and a resource count of 1 through 4. For example:
sbatch --partition=gpu --gres=gpu:p100:N yourbatchscript.sh
where N = 1, 2, 3, or 4.
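As an illustration, the same request can be embedded in the batch script itself via #SBATCH directives. This is a minimal sketch, not an official template; the CPU and memory values below are hypothetical and should be sized to your job:

```shell
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:p100:2      # request 2 of the node's 4 P100 GPUs
#SBATCH --cpus-per-task=8      # hypothetical CPU count -- adjust for your job
#SBATCH --mem=16g              # hypothetical memory request -- adjust for your job

# Slurm restricts the job to its allocated GPUs;
# nvidia-smi here would list only the GPUs assigned to this job.
nvidia-smi
```

With the directives in the script, the job can be submitted with a plain "sbatch yourbatchscript.sh" and no additional command-line options.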
Additionally, the per-user GPU limit has been raised from 16 to 48. As always, you can check current batch system limits with the 'batchlim' command.
########################################################################
Please contact staff@hpc.nih.gov with any questions about the NIH HPC Systems