High-Performance Computing at the NIH
ANTs on Biowulf & Helix

Advanced Normalization Tools (ANTs) extracts information from complex datasets that include imaging. Paired with ANTsR, ANTs is useful for managing, interpreting, and visualizing multidimensional data. ANTs depends on the Insight ToolKit (ITK), a widely used medical image processing library to which ANTs developers contribute.

ANTs development is led by Brian Avants and supported by other researchers and developers at PICSL and other institutions. [ANTs website]

Running ANTs on Helix

The following example uses the 'large deformation' example files from http://stnava.github.io/C/. The tar file is available on Biowulf; it is unpacked into the user's data area, and then the sample script is run. (User input in bold.)

Sample session:

helix$ cd /data/$USER/ants

helix$ tar xvf /usr/local/apps/ANTs/examples/stnava-C-eeb4926.tar

helix$ cd stnava-C-eeb4926

helix$ module load ANTs

helix$ ./c_example.sh
All_Command_lines_OK
Using double precision for computations.
  number of levels = 5
  fixed image: data/chalf.nii.gz
  moving image: data/c.nii.gz
  fixed image: data/chalf.nii.gz
  moving image: data/c.nii.gz
  [...etc...]
  1DIAGNOSTIC,    20, 1.045807126788e-05, 1.465870276407e-03, 1.7212e+02, 4.1302e-01,
  Elapsed time (stage 0): 2.1771e+02


Total elapsed time: 2.1771e+02
 Updated reader
 Dire in 1 0
0 -1

 Dire out 1 0
0 -1

helix$
After this run, the 'output' directory will contain the following files:
ex_0InverseWarp.nii.gz
ex_0Warp.nii.gz
ex_diff_inv.nii.gz
ex_diff.nii.gz
grid.nii.gz
jac_inv.nii.gz
jac.nii.gz

Running a single batch job on Biowulf

Set up a batch script along the following lines. This script copies one of the example datasets to your own area, then runs the ANTs asymmetry script on the sample data.

#!/bin/bash
#
# this file is called myjob.bat
#
module load ANTs
cd /data/$USER/mydir
tar -xvf /usr/local/apps/ANTs/examples/stnava-asymmetry-f8ecc74.tar
cd stnava-asymmetry-f8ecc74
./asymmetry.sh -d 2 -a 0 -f data/symm_t.nii.gz -m data/asymm_s.nii.gz -o XXXX

Submit to the batch system with:

$ sbatch myjob.bat

The command above will allocate 2 CPUs and 4 GB of memory to the job, which is sufficient for this test.

Some of the ANTs scripts and executables are multi-threaded. Several of them use the environment variable ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS to set the number of threads. Thus, you can set

export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=$SLURM_CPUS_PER_TASK
in your batch script to ensure that the number of threads used by these programs matches the number of allocated CPUs.
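
For example, a complete batch script along these lines (a minimal sketch reusing the asymmetry example above; whether a given script is multi-threaded depends on the ANTs programs it calls) matches the thread count to the allocation:

#!/bin/bash
#
# this file is called mythreadedjob.bat
#
module load ANTs
# ANTs programs that honor this variable will use all allocated CPUs
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=$SLURM_CPUS_PER_TASK
cd /data/$USER/mydir/stnava-asymmetry-f8ecc74
./asymmetry.sh -d 2 -a 0 -f data/symm_t.nii.gz -m data/asymm_s.nii.gz -o XXXX

Submit with, e.g., 'sbatch --cpus-per-task=8 mythreadedjob.bat', so that $SLURM_CPUS_PER_TASK is set to 8.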

Other ANTs programs allow for explicit setting of the number of threads. For example, the following ANTs scripts have a '-n' option to set the number of threads:

antsRegistrationSpaceTime.sh
antsRegistrationSyNQuick.sh
antsRegistrationSyN.sh
For these scripts, you can simply add -n $SLURM_CPUS_PER_TASK to set the number of threads equal to the number of allocated CPUs.
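
For example, a batch script along these lines (a minimal sketch; fixed.nii.gz and moving.nii.gz are placeholders for your own images, and syn_ is an arbitrary output prefix):

#!/bin/bash
#
# this file is called synjob.bat
#
module load ANTs
cd /data/$USER/mydir
# -n matches the registration thread count to the allocated CPUs
antsRegistrationSyNQuick.sh -d 3 -f fixed.nii.gz -m moving.nii.gz \
    -o syn_ -n $SLURM_CPUS_PER_TASK

Submit with, e.g., 'sbatch --cpus-per-task=8 synjob.bat'.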

Your job may require more than the default 4 GB of memory, in which case you should specify the memory when submitting, e.g.

$ sbatch --mem=5g myjob.bat
would allocate 5 GB of memory for your job.


Running a swarm of batch jobs on Biowulf

Set up a swarm command file (eg /data/$USER/cmdfile). Here is a sample file:

cd /path/to/mydir; ./asymmetry.sh -d 2 -a 0 -f data/sym.nii.1.gz -m data/asym.nii.1.gz -o out1
cd /path/to/mydir; ./asymmetry.sh -d 2 -a 0 -f data/sym.nii.2.gz -m data/asym.nii.2.gz -o out2
cd /path/to/mydir; ./asymmetry.sh -d 2 -a 0 -f data/sym.nii.3.gz -m data/asym.nii.3.gz -o out3
[...]   
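
With many numbered input pairs, a short shell loop can generate this command file; a minimal sketch, assuming inputs named as in the sample lines above:

#!/bin/bash
# write one asymmetry.sh command per numbered input pair into the swarm file
for i in $(seq 1 100); do
    echo "cd /path/to/mydir; ./asymmetry.sh -d 2 -a 0 -f data/sym.nii.$i.gz -m data/asym.nii.$i.gz -o out$i"
done > /data/$USER/cmdfile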

Submit this job with

$ swarm -f cmdfile --module ANTs

If you have the environment variable ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS set, some of the ANTs executables will use it to run in multi-threaded mode. In that case, use the '-t #' flag to swarm to allocate the same number of CPUs, e.g.

export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=12
swarm -f  ANTSswarmfile -t 12 -g 40 --module ANTs

If each command requires more than 4 GB of memory, you must tell swarm the amount required using the '-g #' flag. For example, if each command (a single line in the file above) requires 8 GB of memory, you would submit the swarm with:

$ swarm -g 8 -f cmdfile --module ANTs

Running an interactive job on Biowulf

Users may occasionally need to run jobs interactively. Such jobs should not be run on the Biowulf login node; instead, allocate an interactive node as described below and run the interactive job there.

[user@biowulf]$ sinteractive -M 2 
      salloc.exe: Granted job allocation 1528
slurm stepprolog here!

[user@pXXXX]$ cd /data/$USER/myruns

[user@pXXXX]$ module load ANTs

[user@pXXXX]$ ./asymmetry.sh -d 2 -a 0 -f data/symm_t.nii.gz -m data/asymm_s.nii.gz -o XXXX
inputs: data/symm_t.nii.gz data/asymm_s.nii.gz XXXX 2
 CenterOfMass [133.032, 163.98]
Using double precision for computations.
Input scalar image: data/asymm_s.nii.gz
Reference image: data/asymm_s.nii.gz
[...]
 1DIAGNOSTIC,    30, 2.079582101904e-04, -4.725553011784e-04, 2.0881e+01, 5.1356e-01,
  Elapsed time (stage 0): 2.0949e+01


Total elapsed time: 2.0950e+01

[user@pXXXX]$ exit
slurm stepepilog here!
salloc.exe: Relinquishing job allocation 1528
salloc.exe: Job allocation 1528 has been revoked.

[user@biowulf]$ 

The command 'sinteractive' has several options:

$ sinteractive -h
Usage: sinteractive [-J job_name] [-p partition] [-c cpus] [-M mem | -m mem_per_cpu] [-x]

Optional arguments:
-J job name (default: my_interactive_job)
-p partition to run job in (default: interactive)
-c number of CPU cores required (default: 1)
-M memory (GB) required (default: 1)
-m memory (GB) per core required
-x enable X11 forwarding (default: disabled)


Documentation

ANTs website