Biowulf High Performance Computing at the NIH

MPI

Parallel applications on Biowulf typically use MPI as the means of inter-process communication across our various network interconnects. MPI is an application programming interface specification that has been published in several major versions (MPI-1, MPI-2, and MPI-3), and these APIs are implemented by a number of vendors and projects.
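By way of illustration, the following is a minimal MPI program in C. The file name hello.c is an arbitrary choice used in the examples below; each process reports its rank and the total number of processes in the job.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut the runtime down cleanly */
    return 0;
}

Because MPI is a standardized interface, this same source file can be compiled unchanged against any of the implementations described below.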

The Biowulf staff maintains some popular MPI implementations for the convenience of our users. OpenMPI covers all of the high-performance networks available on Biowulf (Infiniband, Infinipath and Gigabit Ethernet); MPICH is a very popular and mature implementation for message passing over Ethernet networks; and MVAPICH is MPICH with an additional Infiniband network target.

OpenMPI

OpenMPI is a full-featured MPI implementation with plenty of options and capabilities, while being generally quite easy to use. A binary built with OpenMPI can run on any of Biowulf's high-performance networks regardless of which network was targeted at build time, because the target network is chosen at run time. For the same reason, it is not possible to build static MPI binaries using the OpenMPI compiler wrappers. The best source of documentation on OpenMPI is the project website.

To list all available compiler and OpenMPI combinations, run:

module avail openmpi
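A typical build-and-run cycle looks like the following sketch. The module name here is hypothetical; substitute one reported by the module avail command above.

module load openmpi/1.10/gcc     # hypothetical module name; pick one from the list
mpicc -o hello hello.c           # OpenMPI's C compiler wrapper
mpirun -np 4 ./hello             # launch 4 MPI processes

Because the network is chosen at run time, it can also be pinned explicitly with MCA parameters on the mpirun command line. The transport names below (tcp, openib) vary with the OpenMPI version, so consult the output of ompi_info for what your loaded module actually provides.

mpirun --mca btl self,tcp -np 4 ./hello       # force message passing over Ethernet (TCP)
mpirun --mca btl self,openib -np 4 ./hello    # request the Infiniband transport, where available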

MVAPICH2

MVAPICH2 is an implementation of the MPI-3 specification. The best source of documentation on MVAPICH2 is the project website.

To list all available compiler and MVAPICH2 combinations, run:

module avail mvapich2
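As a sketch of what the MPI-3 specification adds over MPI-2, the fragment below uses MPI_Iallreduce, one of the non-blocking collectives introduced in MPI-3. It can be built with the mpicc wrapper from any of the mvapich2 modules listed above.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, local, sum;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    local = rank + 1;

    /* Non-blocking collective (new in MPI-3): the global sum proceeds in
       the background, so independent computation can overlap with it. */
    MPI_Iallreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);

    /* ... unrelated work could be done here ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);       /* block until the sum is ready */

    if (rank == 0)
        printf("Sum across %d processes: %d\n", size, sum);

    MPI_Finalize();
    return 0;
}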

MPICH2

MPICH is a high-performance implementation of the MPI specification. The best source of documentation on MPICH is the project website.

To list all available compiler and MPICH combinations, run:

module avail mpich2
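The build-and-run cycle mirrors the OpenMPI example above; the sketch below uses mpiexec, MPICH's standard process launcher. The module name is again hypothetical, so substitute one reported by module avail mpich2.

module load mpich2/1.5/gcc       # hypothetical module name; pick one from the list
mpicc -o hello hello.c           # compiler wrapper supplied by the module
mpiexec -n 4 ./hello             # launch 4 MPI processes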