cpac: Configurable Pipeline for the Analysis of Connectomes
A configurable, open-source, Nipype-based, automated processing pipeline for resting-state fMRI data. Only the command-line version of C-PAC is installed, packaged as a Singularity container.
Documentation
- C-PAC documentation: https://fcp-indi.github.io/docs
Important Notes
- Module Name: cpac (see the modules page for more information)
- Please run "cpac" instead of "cpac run": the module installs a wrapper script that launches the Singularity container, so the "run" subcommand is not needed (see the example below).
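For example, where the upstream C-PAC documentation shows a command of the form

cpac run /pathto/bids_dir /pathto/outputs participant

the equivalent invocation on this system (paths here are placeholders) is

cpac /pathto/bids_dir /pathto/outputs participant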
Interactive job
Interactive jobs should be used for debugging, graphics, or applications that cannot be run as batch jobs.
Allocate an interactive session and run the program. Sample session:
[user@biowulf]$ sinteractive --mem=4g --gres=lscratch:10
[user@cn3144 ~]$ module load cpac
[user@cn3144 ~]$ cd /data/$USER
[user@cn3144 ~]$ cpac --help
usage: run.py [-h] [--pipeline-file PIPELINE_FILE] [--group-file GROUP_FILE]
              [--data-config-file DATA_CONFIG_FILE] [--preconfig PRECONFIG]
              [--aws-input-creds AWS_INPUT_CREDS]
              [--aws-output-creds AWS_OUTPUT_CREDS] [--n-cpus N_CPUS]
              [--mem-mb MEM_MB] [--mem-gb MEM_GB]
              [--runtime-usage RUNTIME_USAGE]
              [--runtime-buffer RUNTIME_BUFFER]
              [--num-ants-threads NUM_ANTS_THREADS]
              [--random-seed RANDOM_SEED]
              [--save-working-dir [SAVE_WORKING_DIR]] [--fail-fast FAIL_FAST]
              [--participant-label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
              [--participant-ndx PARTICIPANT_NDX] [--T1w-label T1W_LABEL]
              [--bold-label BOLD_LABEL [BOLD_LABEL ...]] [-v]
              [--bids-validator-config BIDS_VALIDATOR_CONFIG]
              [--skip-bids-validator] [--anat-only]
              [--user_defined USER_DEFINED] [--tracking-opt-out]
              [--monitoring]
              bids_dir output_dir {participant,group,test_config,cli}

C-PAC Pipeline Runner. Copyright (C) 2022 C-PAC Developers. This program
comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome
to redistribute it under certain conditions. For details, see
https://fcp-indi.github.io/docs/nightly/license or the COPYING and
COPYING.LESSER files included in the source code.

positional arguments:
  bids_dir              The directory with the input dataset formatted
                        according to the BIDS standard. Use the format
                        s3://bucket/path/to/bidsdir to read data directly
                        from an S3 bucket. This may require AWS S3
                        credentials specified via the --aws_input_creds
                        option.
  output_dir            The directory where the output files should be
                        stored. If you are running group level analysis this
                        folder should be prepopulated with the results of the
                        participant level analysis. Use the format
                        s3://bucket/path/to/bidsdir to write data directly to
                        an S3 bucket. This may require AWS S3 credentials
                        specified via the --aws_output_creds option.
  {participant,group,test_config,cli}
                        Level of the analysis that will be performed.
                        Multiple participant level analyses can be run
                        independently (in parallel) using the same
                        output_dir. test_config will run through the entire
                        configuration process but will not execute the
                        pipeline.

options:
  -h, --help            show this help message and exit
  --pipeline-file PIPELINE_FILE, --pipeline_file PIPELINE_FILE
                        Path for the pipeline configuration file to use. Use
                        the format s3://bucket/path/to/pipeline_file to read
                        data directly from an S3 bucket. This may require AWS
                        S3 credentials specified via the --aws_input_creds
                        option.
  --group-file GROUP_FILE, --group_file GROUP_FILE
                        Path for the group analysis configuration file to
                        use. Use the format s3://bucket/path/to/pipeline_file
                        to read data directly from an S3 bucket. This may
                        require AWS S3 credentials specified via the
                        --aws_input_creds option. The output directory needs
                        to refer to the output of a preprocessing individual
                        pipeline.
  ...
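Before launching a full run, you can validate your setup with the test_config analysis level, which (as the help text above notes) runs through the entire configuration process without executing the pipeline. A minimal sketch, where the dataset and output paths are placeholders:

[user@cn3144 ~]$ cpac --participant-label sub-01 /data/$USER/bids_dataset /data/$USER/cpac_out test_config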
Batch job
Most jobs should be run as batch jobs.
Create a batch input file (e.g. cpac.sh) similar to the following example:
#!/bin/bash
module load cpac || exit 1
cpac --n_cpus 20 --mem_gb 48 --num-ants-threads 10 \
     /pathto/local_bids_data /pathto/some_folder_for_outputs \
     participant --participant_label sub-02 sub-03
Submit this job using the Slurm sbatch command.
sbatch --cpus-per-task=22 --mem=48g --gres=lscratch:10 cpac.sh
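Note that the Slurm allocation should cover what C-PAC is told to use: --cpus-per-task=22 leaves a small buffer above the 20 CPUs requested with --n_cpus, and --mem=48g matches --mem_gb 48. To run a custom pipeline configuration rather than the default, the --pipeline-file option shown in the help text above can be added to the same script. A minimal sketch, where /data/$USER/my_pipeline.yml is a hypothetical configuration file:

#!/bin/bash
module load cpac || exit 1
# --pipeline-file points to a user-supplied C-PAC pipeline configuration (placeholder path)
cpac --pipeline-file /data/$USER/my_pipeline.yml \
     --n_cpus 20 --mem_gb 48 --num-ants-threads 10 \
     /pathto/local_bids_data /pathto/some_folder_for_outputs \
     participant --participant_label sub-02 sub-03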