MRIQC on Biowulf

MRIQC (MRI quality control) is an application that automatically extracts image quality metrics from MRI scans and generates individual reports. MRIQC can fit a classifier to categorize datasets into "accept" or "exclude" categories.

IMPORTANT (October 2021): The memory and CPU limit flags in mriqc do not work as intended.
If an mriqc job exceeds its memory allocation, it will hang and leave D-state (uninterruptible) processes on the compute node where it runs. We recommend carefully profiling memory consumption for a given type of dataset by running small-scale jobs, then gradually increasing the number of jobs once specific memory requirements have been established. Because the CPU limit flag is also unreliable, use the --exclusive flag if using sbatch or -t auto if using swarm. Please get in touch with HPC staff if you need help profiling memory requirements or allocating CPUs.
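For example, one way to check the peak memory use of a completed pilot job is with Slurm's accounting tools (sacct is standard Slurm; jobhist is a Biowulf utility; replace <jobid> with the job ID returned by sbatch or swarm):

sacct -j <jobid> --format=JobID,JobName,MaxRSS,Elapsed,State
jobhist <jobid>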

Documentation
MRIQC documentation: https://mriqc.readthedocs.io/

Important Notes
Module Name: mriqc
Example data for testing is available under /usr/local/apps/mriqc/TEST_DATA (used in the batch example below).

Interactive job
Interactive jobs should be used for debugging, graphics, or applications that cannot be run as batch jobs.

Allocate an interactive session and run the program.
Sample session:

[user@biowulf]$ sinteractive
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load mriqc

[user@cn3144 ~]$ mriqc -h

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
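
To process the example dataset interactively rather than just print the help text, allocate local scratch space and memory with the session. A minimal sketch, reusing the test data and options from the batch example below:

[user@biowulf]$ sinteractive --mem=16g --gres=lscratch:20
[user@cn3144 ~]$ module load mriqc
[user@cn3144 ~]$ tar -C /lscratch/${SLURM_JOB_ID} -xf /usr/local/apps/mriqc/TEST_DATA/ds001.tar.gz
[user@cn3144 ~]$ mriqc /lscratch/${SLURM_JOB_ID}/ds001 /lscratch/${SLURM_JOB_ID}/mriqc.out.ds001 \
      participant --participant_label sub-01 -w /lscratch/${SLURM_JOB_ID} --no-sub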

Batch job
Most jobs should be run as batch jobs.

Create a batch input file (e.g. mriqc.sh). For example, using the test data installed with the module:

#!/bin/bash
#SBATCH --job-name=mriqc
#SBATCH --gres=lscratch:20
#SBATCH --exclusive
#SBATCH --mem=16g
#SBATCH --time=72:00:00

module load mriqc/0.16.1

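# Unpack the example BIDS dataset into node-local scratch space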
tar -C /lscratch/${SLURM_JOB_ID} -xf /usr/local/apps/mriqc/TEST_DATA/ds001.tar.gz

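# Run MRIQC at the participant level for sub-01; -w puts the working directory
# in lscratch and --no-sub disables submission of metrics to the MRIQC web API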
mriqc /lscratch/${SLURM_JOB_ID}/ds001 /lscratch/${SLURM_JOB_ID}/mriqc.out.ds001 \
      participant --participant_label sub-01 -w /lscratch/${SLURM_JOB_ID} \
      --no-sub 
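
# Note: /lscratch/${SLURM_JOB_ID} is deleted when the job ends; for real
# datasets, copy the results back to shared storage first, e.g. (assuming a
# /data/${USER} destination):
cp -r /lscratch/${SLURM_JOB_ID}/mriqc.out.ds001 /data/${USER}/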

Submit this job using the Slurm sbatch command.

sbatch mriqc.sh
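
Once queued, the job can be checked with standard Slurm tools, e.g.:

squeue -u $USER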

Swarm of Jobs
A swarm of jobs is an easy way to submit a set of independent commands requiring identical resources.

Create a swarmfile (e.g. mriqc.swarm). For example:

mriqc /data/${USER}/BIDS-dataset/ds001/ /data/${USER}/BIDS-dataset/mriqc.out.ds001 \
      participant --participant_label sub-01 -w /lscratch/${SLURM_JOB_ID} --no-sub
mriqc /data/${USER}/BIDS-dataset/ds002/ /data/${USER}/BIDS-dataset/mriqc.out.ds002 \
      participant --participant_label sub-02 -w /lscratch/${SLURM_JOB_ID} --no-sub
mriqc /data/${USER}/BIDS-dataset/ds003/ /data/${USER}/BIDS-dataset/mriqc.out.ds003 \
      participant --participant_label sub-03 -w /lscratch/${SLURM_JOB_ID} --no-sub

Submit this job using the swarm command.

swarm -f mriqc.swarm [--gres=lscratch:#] [-g #] -t auto --module mriqc
where
--gres=lscratch:#  Number of gigabytes of local disk space allocated per process (1 line in the swarm command file)
-g #               Number of gigabytes of memory required for each process (1 line in the swarm command file)
-t auto            Number of threads/CPUs for each process; auto allocates all CPUs in each node (see the CPU-flag note above)
--module mriqc     Loads the mriqc module for each subjob in the swarm
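
For larger studies, the swarmfile can be generated with a short shell loop rather than written by hand. A sketch assuming the /data/${USER}/BIDS-dataset/ds0NN layout and sub-NN labels used above:

#!/bin/bash
# Sketch: write one mriqc command per subject to mriqc.swarm.
# ${SLURM_JOB_ID} is escaped so it expands at job runtime, not here.
rm -f mriqc.swarm
for n in 01 02 03; do
    echo "mriqc /data/${USER}/BIDS-dataset/ds0${n}/ /data/${USER}/BIDS-dataset/mriqc.out.ds0${n} participant --participant_label sub-${n} -w /lscratch/\${SLURM_JOB_ID} --no-sub" >> mriqc.swarm
done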