FreeSurfer on Biowulf & Helix

FreeSurfer is a set of automated tools for reconstructing the brain's cortical surface from structural MRI data and overlaying functional MRI data onto the reconstructed surface. It was developed at the Athinoula A. Martinos Center for Biomedical Imaging at Harvard. See the FreeSurfer website for more information.

FreeSurfer itself is not a parallel program. The advantage of running on Biowulf is that you can run many FreeSurfer jobs simultaneously.

The GUI-based FreeSurfer programs should be run on an interactive node (allocated as described below) or on Helix.

FreeSurfer environment
The easiest way to see what versions are available is by using the module commands as in the example below.

biowulf% module avail freesurfer

------------------ /usr/local/lmod/modulefiles ----------------
freesurfer/5.3.0

biowulf% module load freesurfer/5.3.0

biowulf% module list
Currently Loaded Modulefiles:
  1) freesurfer/5.3.0

The FreeSurfer initialization script needs to be sourced in addition to loading the module. If you expect to be running FreeSurfer frequently, it's probably best to add these lines to your ~/.bashrc. In the following example, the user has chosen to use FreeSurfer 5.3.0.

#.bashrc

# User specific aliases and functions

[...]
module load freesurfer/5.3.0 > /dev/null 2>&1 ; source $FREESURFER_HOME/SetUpFreeSurfer.sh

The redirection to /dev/null will prevent the informational messages from being printed out on your terminal or output file.
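After opening a new shell, you can check that the environment was set up as expected. The $FREESURFER_HOME path below matches the 5.3.0 install shown in the sessions on this page, and recon-all lives in its bin directory:

biowulf% echo $FREESURFER_HOME
/usr/local/apps/freesurfer/5.3
biowulf% which recon-all
/usr/local/apps/freesurfer/5.3/bin/recon-all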

On Helix

In this sample session, the SUBJECTS_DIR environment variable refers to a directory in /usr/local/apps. You should set this variable to refer to a directory in /home/$USER or /data/$USER instead.
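For example (bash; the path is illustrative, so substitute your own directory):

[susanc@helix ~]$ export SUBJECTS_DIR=/data/$USER/subjects

Csh/tcsh users would instead type: setenv SUBJECTS_DIR /data/$USER/subjects

Sample session: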

[susanc@helix ~]$ module load freesurfer
Bash users should now type:
source $FREESURFER_HOME/SetUpFreeSurfer.sh
Csh/tcsh users should now type:
source $FREESURFER_HOME/SetUpFreeSurfer.csh
[susanc@helix ~]$ source $FREESURFER_HOME/SetUpFreeSurfer.sh
-------- freesurfer-Linux-centos4_x86_64-stable-pub-v5.3.0 --------
Setting up environment for FreeSurfer/FS-FAST (and FSL)
FREESURFER_HOME   /usr/local/apps/freesurfer/5.3
FSFAST_HOME       /usr/local/apps/freesurfer/5.3/fsfast
FSF_OUTPUT_FORMAT nii.gz
SUBJECTS_DIR      /usr/local/apps/freesurfer/5.3/subjects
MNI_DIR           /usr/local/apps/freesurfer/5.3/mni
FSL_DIR           /usr/local/apps/fsl/5.0/fsl

[susanc@helix ~]$ tkmedit bert orig.mgz
Setting subject to bert
Reading 0 control points...
Reading 0 control points...
Reading /usr/local/freesurfer/lib/tcl/tkm_common.tcl
Reading /usr/local/freesurfer/lib/tcl/tkm_wrappers.tcl
Reading /usr/local/freesurfer/lib/tcl/fsgdfPlot.tcl
Reading /usr/local/freesurfer/lib/tcl/tkUtils.tcl

[susanc@helix ~]$

Batch job on Biowulf

Create a batch script along the following lines:

#!/bin/bash
#  this script is fs.bat

# load the FreeSurfer module and source its setup script
module load freesurfer/5.3.0 > /dev/null 2>&1
source $FREESURFER_HOME/SetUpFreeSurfer.sh

# recon-all reads and writes subject data under $SUBJECTS_DIR
export SUBJECTS_DIR=/data/user/mydir

cd /data/user/mydir
recon-all -subject hv1 -all

Submit this job to the batch system with

biowulf% sbatch fs.bat
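A full recon-all run can take many hours. If a job needs more than the default memory or walltime, you can request them with standard sbatch options; the values below are examples only:

biowulf% sbatch --mem=8g --time=48:00:00 fs.bat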

Swarm of independent jobs on Biowulf

The following example demonstrates a recon-all job run via the swarm command on Biowulf. (Thanks to Nikhil Sharma at NINDS for this example.)

For swarm jobs, the FreeSurfer setup needs to be in your ~/.bashrc file, as in the example at the top of this page. In the working directory for this project (e.g. /data/user/freesurfer), create a directory called subjects, and create a subdirectory for each subject. See the FreeSurfer documentation for a detailed explanation of the directories and files required and created during the recon-all job.

In the working directory, create a swarm command file, e.g. freesurfer.swarm, containing the commands you wish to run on each subject. Each command sets SUBJECTS_DIR to the subjects directory created above so that recon-all reads and writes there.

cd /data/user/freesurfer; export SUBJECTS_DIR=$PWD/subjects; recon-all -subject hv1 -all
cd /data/user/freesurfer; export SUBJECTS_DIR=$PWD/subjects; recon-all -subject hv2 -all
cd /data/user/freesurfer; export SUBJECTS_DIR=$PWD/subjects; recon-all -subject hv3 -all
[...]
with one line for each subject (hv1, hv2, hv3 etc.).
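Rather than writing these lines by hand, you can generate the swarm file with a small shell loop. This is a sketch, assuming the subject directories are named hv* under /data/user/freesurfer/subjects:

cd /data/user/freesurfer
for s in subjects/hv*; do
    # each echoed line becomes one independent swarm command;
    # \$PWD is escaped so it expands when the job runs, not now
    echo "cd /data/user/freesurfer; export SUBJECTS_DIR=\$PWD/subjects; recon-all -subject $(basename $s) -all"
done > freesurfer.swarm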

Submit this swarm set with the command:

swarm -f freesurfer.swarm

If each FreeSurfer job requires more than 4 GB of memory, use

swarm -g # -f freesurfer.swarm

where '#' is the number of gigabytes of memory required by a single FreeSurfer process. See the swarm documentation for details.
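For example, if each recon-all run needs up to 8 GB of memory:

swarm -g 8 -f freesurfer.swarm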

Running an interactive job on Biowulf

Allocate an interactive node with sinteractive --x11. Once you are logged in to the node, run FreeSurfer exactly as in the Helix example above.
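For example (a sketch; the compute node name cn1234 is illustrative):

biowulf% sinteractive --x11
cn1234% module load freesurfer/5.3.0
cn1234% source $FREESURFER_HOME/SetUpFreeSurfer.sh
cn1234% tkmedit bert orig.mgz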

Documentation

FreeSurfer Wiki