High-Performance Computing at the NIH
About the NIH HPC Core Facility

The NIH HPC group plans, manages, and supports high-performance computing systems specifically for the intramural NIH community. These systems include Biowulf, a 60,000+ processor Linux cluster; Helix, an interactive system for short jobs; Sciware, a set of applications for desktops; and Helixweb, which provides a number of web-based scientific tools. We provide access to a wide range of computational applications for genomics, molecular and structural biology, mathematical and graphical analysis, image analysis, and other scientific fields.

NIH HPC Facility Staff


Steve Bailey

Steven Fellini, Ph.D.

Susan Chacko, Ph.D.

Afif Elghraoui, M.S.

Ainsley Gibson

David Godlove, Ph.D.

David Hoover, Ph.D.

Patsy Jones

Charles Lehr

Jean Mao, Ph.D.

Tim Miller

Sandy Orlow, Ph.D.

Charlene Osborn

Mark Patkus

Dan Reisman

Wolfgang Resch, Ph.D.

Rick Troxel

Sylvia Wilkerson

Former Staff


Jane Small

Peter FitzGerald

Ernie Jordan

Justin Nemmers

Justin Bentley

Jason Russler

Dolores Albano

Rick Horner

Ellen Gilliam

Michelle Johnson

Giovanni Torres

History

1999 NIH Biowulf Cluster started with 40 "boxes on shelves".

CHARMM and BLAST running on cluster.

14 active users

2000 1st scientific paper citing Biowulf

First batch of 16 Myrinet nodes added to cluster.

Swarm developed in-house to submit large numbers of independent jobs to the cluster.

2001 Blat, GAMESS, Gaussian, and Amber also running on cluster.

PBS Pro batch system.

2002 New login node running Red Hat Linux 7.3

80 nodes added to cluster.

2003 New Biowulf website

Added 198 'p2800' nodes, including 24 nodes with 4 GB of memory.

Myrinet upgrade

2004 Added 132 dual-processor 2.8 GHz Intel Xeon nodes with 2-4 GB memory, plus 32 AMD Opterons with Myrinet and 42 additional AMD Opterons.

Cluster reaches 1000+ nodes.

Adios Telnet!

2005 Added 64 dual-processor AMD Opterons, 2.2 GHz, with 4 GB memory.

Nodes upgraded to RHEL 3.1 (Linux 2.6 kernel)

New Login node, dual-processor 3.2 GHz Xeon, 4 GB memory

100th scientific paper published citing Biowulf

2006 Added 324 dual-processor, 2.8 GHz AMD Opterons with 4 GB memory.

Added 40 dual-core, 2.6 GHz AMD Opterons with 8 GB memory.

Added 64 Infiniband-connected nodes.

2007 Added 48 Infiniband-connected nodes.

Per-user limit raised from 16 to 24 IB nodes.

2008 Helix transitions from an SGI Origin 2400 to a Sun Opteron system running RHEL 5.

2009 "The NIH Biowulf Cluster: 10 years of Scientific Supercomputing" symposium held at NIH, Bethesda..

Added 224 Infiniband-connected nodes, 8 processors, Intel EM64T 2.8 GHz.

Added "Very-large memory nodes", 72 GB memory each + one 512 GB memory node

Storage system added, increasing capacity by 450 TB.

2010 Myrinet nodes decommissioned

All Biowulf nodes now 64-bit

Added 336 Intel quad-core Nehalem nodes, 2.67 GHz, 24 GB memory.

16 Pilot GPU nodes added to cluster.

2011 Helix transitioned to 64-processor, 1 TB memory hardware.

Added 328 compute nodes, 2 x Intel 2.8 GHz Xeon X5660 processors, 24 GB memory.

500th scientific paper published citing Biowulf

2013 Annual Biowulf account renewals implemented

Environment modules implemented for scientific applications

2014 NCI and NIMH fund nodes on the cluster

Major network upgrades increasing bandwidth between HPC systems and NIH core

24 NIH ICs, 250 PIs, 620 users

2015 Storage reaches 3 PB

NCI and NIDDK fund additional nodes

1000th scientific paper published citing Biowulf

Phase 1 HPC upgrade installed

Transition to Slurm batch system

Biowulf2 goes production in July

Webservers migrated to new hardware

Dedicated data-transfer nodes installed

400+ scientific applications installed and updated

2016 New 'HPC account' process

Walk-in Consults

30,000 cores added

3 PB disk storage added

All new nodes on FDR Infiniband

1500th paper published based on Biowulf usage