About the NIH HPC Core Facility
The NIH HPC group plans, manages, and supports
high-performance computing systems specifically for the
intramural NIH community. These systems include Biowulf,
a 60,000+ processor
Linux cluster; Helix, an interactive system for
short jobs; Sciware, a set of applications
for desktops; and
Helixweb, which provides a number of web-based
scientific tools. We provide access to a wide range of computational
applications for genomics, molecular and structural biology, mathematical and
graphical analysis, image analysis, and other scientific fields.
NIH HPC Facility Staff
Steven Fellini, Ph.D.
Susan Chacko, Ph.D.
Afif Elghraoui, M.S.
David Godlove, Ph.D.
David Hoover, Ph.D.
Jean Mao, Ph.D.
Sandy Orlow, Ph.D.
Wolfgang Resch, Ph.D.
Biowulf Cluster Timeline
NIH Biowulf Cluster started with 40 "boxes on shelves".
CHARMm and BLAST running on cluster.
14 active users
1st scientific paper citing Biowulf
First batch of 16 Myrinet nodes added to cluster.
Swarm developed in-house to submit large numbers of independent jobs to the cluster.
BLAT, GAMESS, Gauss, and Amber also running on cluster.
PBS Pro batch system.
New login node running Red Hat 7.3
80 nodes added to cluster.
New Biowulf website
Added 198 'p2800' nodes, including 24 nodes with 4 GB of memory.
Added 132 dual-processor 2.8 GHz Intel Xeon nodes with 2-4 GB memory, plus 32 AMD Opterons with Myrinet and 42 more AMD Opterons.
Cluster reaches 1000+ nodes.
Added 64 dual-processor 2.2 GHz AMD Opterons with 4 GB memory.
Nodes upgraded to RHEL 3.1 (Linux 2.6 kernel)
New login node, dual-processor 3.2 GHz Xeon, 4 GB memory
100th scientific paper published citing Biowulf
Added 324 dual-processor, 2.8 GHz AMD Opterons with 4 GB memory.
Added 40 dual-core, 2.6 GHz AMD Opterons with 8 GB memory.
Added 64 Infiniband-connected nodes.
Added 48 Infiniband-connected nodes.
Per-user limit raised from 16 to 24 IB nodes.
Helix transitions from SGI Origin 2400 to a Sun Opteron running RHEL 5.
"The NIH Biowulf Cluster: 10 years of Scientific Supercomputing" symposium held at NIH, Bethesda.
Added 224 Infiniband-connected 8-processor nodes, 2.8 GHz Intel EM64T.
Added "very-large memory" nodes, 72 GB memory each, plus one 512 GB memory node.
Storage system added, increasing capacity by 450 TB.
Myrinet nodes decommissioned
All Biowulf nodes now 64-bit
Added 336 Intel quad-core Nehalem nodes, 2.67 GHz, 24 GB memory.
16 pilot GPU nodes added to cluster.
Helix transitioned to 64-processor, 1 TB memory hardware.
Added 328 compute nodes, each with two 2.8 GHz Intel Xeon X5660 processors and 24 GB memory.
500th scientific paper published citing Biowulf
Annual Biowulf account renewals implemented
Environment modules implemented for scientific applications
NCI and NIMH fund nodes on the cluster
Major network upgrades increasing bandwidth between HPC systems and NIH core
24 NIH ICs, 250 PIs, 620 users
Storage reaches 3 PB
NCI and NIDDK fund additional nodes
1000th scientific paper published citing Biowulf
Phase 1 HPC upgrade installed
Transition to Slurm batch system
Biowulf2 goes into production in July
Webservers migrated to new hardware
Dedicated data-transfer nodes installed
400+ scientific applications installed and updated
New 'HPC account' process
30,000 cores added
3 PB disk storage added
All new nodes on FDR Infiniband
1500th paper published based on Biowulf usage