Biowulf Cluster: Transition to Rocky8/RHEL8

The Biowulf cluster compute nodes, the Biowulf login node and Helix will transition from RHEL7/CentOS7 to the RHEL8/Rocky8 operating system in June 2023.

As of 8 May 2023, application installs on the existing RHEL/CentOS 7 system are frozen. New versions of applications will only be installed on the RHEL/Rocky 8 cluster.

Getting on to a Rocky8/RHEL8 system

Please log in to the RHEL8 login node by connecting to biowulf8.nih.gov. (Note: when the entire cluster switches to RHEL8, the login node will be accessed as biowulf.nih.gov.)
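
For example, connecting with ssh from your local machine looks like this (replace 'user' with your own NIH username):

ssh user@biowulf8.nih.gov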

Note that only a subset of node types is available on the RHEL8/Rocky8 system. These can be seen with 'freen'. If you wish to test your application on a node type that is not listed, let us know.

Interactive job

You can use ssh, NX or X11 to connect to biowulf8. As on Biowulf, to submit an interactive job, use 'sinteractive'.

[user@biowulf8 ~]$ sinteractive
salloc: Pending job allocation 61681262
salloc: job 61681262 queued and waiting for resources
salloc: job 61681262 has been allocated resources
salloc: Granted job allocation 61681262
salloc: Waiting for resource configuration
salloc: Nodes cn4338 are ready for job
[user@cn4338 ~]$

To allocate an interactive session on a GPU node:

[user@biowulf8 ~]$ sinteractive --gres=gpu:v100:1
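
The GPU request can be combined with other resource options in the usual way; the CPU and memory values here are only illustrative:

[user@biowulf8 ~]$ sinteractive --gres=gpu:v100:1 --cpus-per-task=8 --mem=16g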

Batch job

As on Biowulf, use 'sbatch' to submit batch jobs. Use 'freen' to see the nodes available on the RHEL8 cluster.

# simple batch job with default parameters
sbatch jobscript

# job requesting 8 cpus and 10 GB memory
sbatch --cpus-per-task=8 --mem=10g jobscript

# job requesting 4 MPI ntasks
sbatch --ntasks=4 --ntasks-per-core=1  jobscript

# job requesting 1 k80 GPU and 4 MPI tasks
sbatch --gres=gpu:k80:1 --ntasks=4 --ntasks-per-core=1  jobscript
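
The jobscript in these examples is an ordinary shell script containing the commands to run. A minimal sketch, using 'appname' as a placeholder for a real module and program, might look like:

#!/bin/bash
# example jobscript: 'appname' is a placeholder, not a real application
module load appname
appname --input mydata.txt --output results.txt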

Applications

The HPC staff has rebuilt and transitioned the majority of scientific applications onto the RHEL/Rocky 8 system, prioritizing applications by usage.

Only the most recent versions of each application have been migrated to RHEL8. You can see what versions are available by typing, as usual:

module avail

module avail appname

If your scripts load a version that is older than those available, you may need to update them. If you absolutely require an older version of an application, let us know.
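
For example, a script that pins a version which was not migrated can be checked and updated along these lines ('appname' and the version number are placeholders):

# see which versions of appname were migrated to RHEL8
module avail appname

# old script line, pinned to a version that no longer exists:
#   module load appname/1.2
# updated line, loading the current default version:
module load appname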

If you have compiled your own versions of an application that link against older libraries, you may need to recompile them on RHEL/Rocky 8. Performance-sensitive applications (e.g. parallel molecular simulations) should be recompiled.
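
Recompiling is best done in an interactive session on an RHEL8 compute node, for example along the following lines (the compiler module name and build commands are assumptions; adjust them to your own build system):

[user@biowulf8 ~]$ sinteractive --cpus-per-task=8
[user@cn4338 ~]$ module load gcc
[user@cn4338 ~]$ cd /path/to/your/source
[user@cn4338 ~]$ make clean && make -j8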

Utilities, Partitions and Limits

The Biowulf utilities are available on the RHEL8 system. The 'freen' command will show only the nodes in the RHEL/Rocky 8 cluster.

biowulf8% freen
                                   .......Per-Node Resources......
Part.  FreeNds  FreeCPUs FreeGPUs Cores CPUs GPUs  Mem   Disk Features
-------------------------------------------------------------------------------------------
rhel8   0 / 1    56 / 64   0 / 4   32   64    4  247g  3200g cpu64,core32,g256,ssd3200,e7543p,ibhdr200,gpua100
rhel8   0 / 1    98 / 128          64  128       499g  3200g cpu128,core64,g512,ssd3200,e7543,ibhdr200,rhel8
rhel8   5 / 6    80 / 336          28   56       247g   400g cpu56,core28,g256,ssd400,x2695,ibfdr,rhel8
rhel8   1 / 1    72 / 72   4 / 4   36   72    4  373g  1600g cpu72,core36,g384,ssd1600,x6140,ibhdr,gpuv100x,rhel8
rhel8   1 / 1    72 / 72           36   72       373g  3200g cpu72,core36,g384,ssd3200,x6140,ibhdr100,rhel8
rhel8   1 / 1    56 / 56   4 / 4   28   56    4  121g   650g cpu56,core28,g128,ssd650,x2680,ibfdr,gpup100,rhel8
rhel8   1 / 1    56 / 56   4 / 4   28   56    4  247g   400g cpu56,core28,g256,ssd400,x2695,ibfdr,gpuk80,rhel8
rhel8   1 / 1    56 / 56   4 / 4   28   56    4  247g   800g cpu56,core28,g256,ssd800,x2680,ibfdr,gpuk80,rhel8
rhel8   1 / 1    56 / 56   4 / 4   28   56    4  121g   800g cpu56,core28,g128,ssd800,x2680,ibfdr,gpuv100,rhel8

Nodes may be added to the RHEL/Rocky 8 cluster as more beta users get on the system.

The usual Biowulf partitions (norm, multinode, gpu, etc.) do not exist on the test RHEL/Rocky 8 cluster. After the transition of the entire cluster, the usual partitions will all be present. During this test phase, if you want to submit to a particular type of node, use --constraint. For example, if you normally submit to the multinode partition, you can use:

sbatch --constraint=x2695 --ntasks=56 --ntasks-per-core=1 --nodes=2  jobscript

To submit to a GPU node:

sbatch --gres=gpu:v100x:1 --cpus-per-task=8 jobscript
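
For multinode submissions such as the first example above, the jobscript itself typically loads an MPI module and starts the program with a parallel launcher. The sketch below uses placeholder module and program names; the exact launcher and options (srun vs. mpirun) depend on how your application was built:

#!/bin/bash
# example MPI jobscript: 'openmpi' and 'my_mpi_app' are placeholders
module load openmpi
srun ./my_mpi_app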

Batch system limits may be different (and changing during the beta phase!) on the RHEL/Rocky 8 cluster.
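
To check the limits currently in effect, you can query Slurm directly; the 'batchlim' utility should also work if it is among the Biowulf utilities carried over (an assumption during the beta phase):

# query Slurm for the limits on the rhel8 partition
scontrol show partition rhel8

# Biowulf utility that summarizes batch system limits (assumed to be available)
batchlim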

Python on Rocky8

There is no python/2.7 environment on RHEL8.
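
To see which python modules were migrated, and to pick up a Python 3 environment instead:

# list the python modules available on the RHEL8 system
module avail python

# load the default python 3 module (the exact versions offered will depend
# on what 'module avail python' reports)
module load python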

User Cron jobs on Biowulf and Helix

Some users have cron jobs running on the Biowulf login node and on Helix. All cron jobs will be copied to the RHEL8 systems during the migration, but they will all be initially commented out. Each user will need to test their cron job on the RHEL8 system before enabling it.
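
Once you have verified that a job runs correctly on RHEL8, you can re-enable it with the standard crontab commands:

# list the migrated (commented-out) entries on the RHEL8 host
crontab -l

# edit the crontab and uncomment the entry after testing it
crontab -e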

Changes from RHEL7