High-Performance Computing at the NIH
NIH HPC Systems Policies

Who May Use the NIH HPC Systems

Accounts on the NIH HPC systems are for the use of researchers in the NIH intramural research programs.

NIH Volunteers can maintain Helix/Biowulf accounts for the duration of their NIH status, but will have access to fewer HPC system resources than NIH employees and contractors.

NIH HPC users must be listed in the NIH Enterprise Directory (NED). When a user is removed from the NED, the associated Helix and Biowulf accounts become inactive. If the user remains out of the NED for more than 14 days, the accounts are deleted. Any data associated with those accounts will be deleted six months after that unless the user or PI arranges to transfer the data to another account or move it off the system.

User Responsibilities

The NIH HPC Systems are for appropriate government use only

System resources are for the work-related use of authorized users only.

Account Sharing

Account sharing among multiple users is strictly prohibited. By NIH Access Control policy, a separate Helix account must be set up for each user.

E-Mail and Internet Use

Under its Appropriate Use of E-Mail and Internet Services policy, the NIH Office of Information Resources Management considers chain letters, joke messages, and advertisements to be inappropriate activities subject to disciplinary action.

Auto-Forward E-Mail Only to NIH Addresses

Per HHS Policy (HHS Usage of Unauthorized External Information Systems to Conduct Department Business Memorandum, Jan 8, 2014), forwarding of NIH mail to external addresses is no longer permitted.

Access to data and applications is restricted

Do not access files or directories belonging to another user without their explicit permission, even if that user has inadvertently left them accessible.

Data Recovery

User data directories and shared data directories are NOT backed up to tape (with the exception of directories that are part of a storage buy-in agreement). If you accidentally delete files, you can often recover them from the daily or weekly snapshots maintained on the system. HOWEVER, any data that you consider irreplaceable should also be saved to your local disk storage in case of a catastrophic event on a Biowulf file system. More information is available on our Backups/snapshots web page.
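
If you need to restore a deleted file yourself, the sketch below shows one way to do it in Python. It assumes snapshots are exposed as a read-only ".snapshot" directory whose entry names sort chronologically; the actual paths and layout on our file systems are described on the Backups/snapshots page, so treat every path here as a placeholder.

    import shutil
    from pathlib import Path

    data_dir = Path("/home/username")   # hypothetical: replace with your own directory
    lost_file = "project/results.txt"   # hypothetical: path of the deleted file

    # Search snapshots newest-first (assuming snapshot names sort
    # chronologically) and restore the first copy found.
    for snapshot in sorted((data_dir / ".snapshot").iterdir(), reverse=True):
        candidate = snapshot / lost_file
        if candidate.exists():
            shutil.copy2(candidate, data_dir / lost_file)
            print(f"Restored {lost_file} from snapshot {snapshot.name}")
            break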

PII & PHI data

NIH HPC users are forbidden from transmitting or storing any Personally Identifiable Information (PII, e.g. patient data containing names or social security numbers) or Protected Health Information (PHI) anywhere on the NIH HPC systems, including their /home, /data, and any group (shared) /data directories.

Controlled-access data such as dbGaP data can be stored on the systems, but it is the user's responsibility to fulfill all requirements of the agreement with the data provider. (See here and here for the dbGaP requirements, for example.)

Read the announcements!

Users are responsible for reading system messages and announcements. These appear as messages during login and are also sent to all Helix users by e-mail. [Archive of NIH HPC messages]

Monthly Reboots

To improve system security and availability, we have instituted a monthly maintenance cycle. It generally involves a reboot of both Helix and the Biowulf login node (not the entire cluster). Reboots are scheduled for 7 am on the first Monday of every month, or the following Tuesday if that Monday is a holiday. Downtime during a reboot is typically 10-15 minutes.
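
For planning purposes, the regular reboot date can be computed directly. The sketch below is an illustration only, not an official tool, and it ignores the holiday exception:

    import datetime

    def first_monday(year: int, month: int) -> datetime.date:
        """First Monday of the given month (Monday is weekday 0)."""
        first = datetime.date(year, month, 1)
        return first + datetime.timedelta(days=(0 - first.weekday()) % 7)

    print(first_monday(2024, 3))  # 2024-03-04, a reboot Monday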

Scheduled maintenance that requires a longer downtime and emergency maintenance will be announced separately. Every effort will be made to minimize disruptions.

See the System Status Calendar for the reboot and downtime schedule.

Helix Usage

Helix is a single shared system with 64 hyperthreaded cores (128 CPUs) and 1 TB of memory. It is intended for interactive use and for relatively short jobs. All compute-intensive jobs should be performed on the Biowulf cluster, which is intended for large-scale computing.

Length of job: Since Helix is rebooted once a month, a process can run for a maximum of 30 days.

Number of cores: Each user should use a maximum of 8 simultaneous cores (16 CPUs). If a user is running on more than 8 cores and this is impacting the system, we will ask them to reduce the number of processes; if necessary, we may terminate the processes ourselves.
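
One simple way to stay within this limit from a Python program is to cap the size of a worker pool. The sketch below is an illustration, not an official tool; it assumes each worker keeps roughly one core busy, and the workload itself is hypothetical:

    from multiprocessing import Pool

    HELIX_CORE_LIMIT = 8  # per-user core limit from the policy above

    def analyze(x):
        return x * x  # placeholder for real per-item work

    if __name__ == "__main__":
        # Cap the worker pool at 8 processes so the parallel job stays
        # within the per-user core limit on Helix.
        with Pool(processes=HELIX_CORE_LIMIT) as pool:
            results = pool.map(analyze, range(100))
        print(sum(results))

For multithreaded libraries, the analogous control is usually a thread-count setting such as the OMP_NUM_THREADS environment variable.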

Memory: Each user should use a maximum of 100 GB. If you expect to need more than this, please contact the Helix staff. We will allow processes to continue as much as possible, but if a large-memory job is impacting the system, we may contact you and terminate the job.
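
If you would rather have your own process fail fast than drift past this limit, you can set a self-imposed address-space cap with the Python standard library. In this sketch the 100 GB figure is simply the policy number above, not a system-enforced quota; note also that RLIMIT_AS counts virtual address space, which can exceed physical memory use:

    import resource

    GB = 1024 ** 3
    _, hard = resource.getrlimit(resource.RLIMIT_AS)
    # Cap this process's address space at the 100 GB policy figure; Python
    # allocations beyond the cap raise MemoryError instead of stressing Helix.
    resource.setrlimit(resource.RLIMIT_AS, (100 * GB, hard))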

I/O intensive jobs: The I/O load of a program can be difficult to determine in advance, and users may not realize that their program generates a massive I/O load. We monitor the load on Helix continuously and may need to contact the user or kill a process to keep the system stable.

Please contact the Helix staff (staff@hpc.nih.gov or 301-496-4825) if you have questions about the appropriateness of your job for a particular platform, or need more information about how to run your job.