Biowulf High Performance Computing at the NIH

SPRING (Single Particle Reconstruction from Images of kNown Geometry) is a single-particle based helical reconstruction package for electron cryo-micrographs and has been used to determine 3D structures of a variety of highly ordered and less ordered specimens.


To use, type

module load emspring/[ver]

where [ver] is the version of choice.

Environment variables set:

NOTE: The Slurm environment variable SLURM_JOB_NODELIST must be unset prior to running with OpenMPI. This prevents OpenMPI from doing things not expected by the emspring executables. This is done when the emspring module is loaded.
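If the variable needs to be cleared by hand (for example, in a shell where the job environment was inherited before the module was loaded), the equivalent of what the module does is a plain unset. A minimal standalone illustration (the node-list value below is made up, not a real allocation):

```shell
# Simulate a Slurm-exported variable, then clear it as the emspring module does
SLURM_JOB_NODELIST="cn[1234-1235]"
unset SLURM_JOB_NODELIST

# After unset, the variable no longer exists in the environment
echo "${SLURM_JOB_NODELIST:-unset}"    # prints "unset"
```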

Interactive job on Biowulf

Once an interactive session has been started, load the module and type 'spring':

[node]$ module load emspring
[node]$ spring

This application requires an X-Windows connection when run in interactive mode.
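A typical end-to-end interactive session might look like the following. The sinteractive options shown (CPUs, memory, lscratch) are illustrative only, and X11 forwarding must be enabled on the initial ssh connection:

```
[user@mypc]$ ssh -Y biowulf.nih.gov      # -Y enables trusted X11 forwarding
[biowulf]$ sinteractive --cpus-per-task=8 --mem=8g --gres=lscratch:20
[node]$ module load emspring
[node]$ spring
```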

Batch job on Biowulf

Create a batch input file. All emspring executables can read an input parameter file (here named 'parameters.par'). For various reasons, it is often best to generate the parameter file on the fly. For example, here is a batch script for a non-MPI command that runs on a single node:


#!/bin/bash

# Create a comma-delimited list of the mrc files within the directory
mrcdir="/fdb/app_testdata/cryoEM/EMPIAR-10081/micrographs/"  # This directory is where the mrc images are
array=($(ls $mrcdir/*.mrc))
list=$(printf ",%s" "${array[@]}")
list=${list:1}   # strip the leading comma

# Calculate the total number of CPUs allocated
totalcpu=$(( ${SLURM_CPUS_PER_TASK:-1} * ${SLURM_NTASKS:-1} ))

# Write the parameter file
echo "
Micrographs                 = ${list}
Diagnostic plot pattern     = micexam_diag.pdf
Pixel size in Angstrom      = 1.062
Binning option              = True
Binning factor              = 3
MPI option                  = False
Number of CPUs              = ${totalcpu}
Temporary directory         = /lscratch/${SLURM_JOB_ID}
" > parameters.par

# Load the emspring module
module load emspring

# Run the command
micexam --f parameters.par
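The comma-delimited list construction used in the script can be exercised outside the cluster. This standalone sketch, with made-up filenames standing in for the mrc images, shows how printf joins the array and why the leading comma should be stripped:

```shell
# Hypothetical filenames standing in for the mrc images
array=(mic001.mrc mic002.mrc mic003.mrc)

# printf repeats the format once per array element, yielding ",a,b,c"
list=$(printf ",%s" "${array[@]}")

# Strip the leading comma to get "a,b,c"
list=${list:1}
echo "$list"    # prints mic001.mrc,mic002.mrc,mic003.mrc
```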

Submit this job using the Slurm sbatch command, allocating enough memory, CPUs, local scratch space, and time:

sbatch --cpus-per-task=16 --mem=16g --gres=lscratch:100 --time=4:00:00
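After submission, the job can be monitored with the standard Slurm tools. In this hypothetical session, 'micexam.sh' is a placeholder name for the batch script above:

```
[biowulf]$ sbatch --cpus-per-task=16 --mem=16g --gres=lscratch:100 --time=4:00:00 micexam.sh
[biowulf]$ squeue -u $USER
```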

Some SPRING commands can use MPI to distribute the work among multiple nodes. For a multinode job, set the MPI option to True:

MPI option                  = True

For multinode jobs, specify --ntasks, use --mem-per-cpu instead of --mem, and select the multinode partition. In addition, setting --ntasks-per-core=1 has been found to eliminate the chance of MPI ranks failing.

sbatch --ntasks=128 --ntasks-per-core=1 --cpus-per-task=1 --mem-per-cpu=4g --gres=lscratch:100 --time=4:00:00 --partition=multinode
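For the MPI case, the "Number of CPUs" entry in the parameter file should match the total number of tasks. The arithmetic the batch script uses can be checked standalone by setting the Slurm variables by hand to mimic this sbatch line (the values below are set manually for illustration; on the cluster Slurm exports them):

```shell
# Mimic the environment Slurm would export for
# --ntasks=128 --cpus-per-task=1 (values set by hand here)
SLURM_NTASKS=128
SLURM_CPUS_PER_TASK=1

# Equivalent arithmetic to the batch script: cpus-per-task x tasks
totalcpu=$(( ${SLURM_CPUS_PER_TASK:-1} * ${SLURM_NTASKS:-1} ))
echo "$totalcpu"    # prints 128
```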