STAR-SEQR is a pipeline, built on the STAR genome aligner, that detects gene fusions. The number of threads can be set with the -t flag on the command line, followed by the number of threads desired, e.g. -t 8. The developers recommend using between 4 and 12 threads.
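When running under Slurm, a common pattern (not specific to STAR-SEQR) is to derive the -t value from the job's allocation so the two always match. A minimal sketch, assuming the job was submitted with --cpus-per-task:

```shell
#!/bin/bash
# Sketch: match STAR-SEQR's -t flag to the Slurm allocation.
# SLURM_CPUS_PER_TASK is set by Slurm when --cpus-per-task is given;
# fall back to 4 threads (the low end of the recommended range) otherwise.
THREADS=${SLURM_CPUS_PER_TASK:-4}
echo "threads: $THREADS"
# e.g. starseqr.py ... -t $THREADS
```

This way the thread count never exceeds the CPUs actually granted to the job.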
Allocate an interactive session and run the program.
Sample session (user input in bold):
[user@biowulf]$ sinteractive
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load starseqr

[user@cn3144 ~]$ starseqr.py
usage: starseqr.py [-h] -1 FASTQ1 -2 FASTQ2 [-i STAR_INDEX] [-m {0,1}]
                   [-sj STAR_JXNS] [-ss STAR_SAM] [-sb STAR_BAM] -p PREFIX
                   -r FASTA -g GTF [-l LIBRARY] [-t THREADS] [-b BED_FILE]
                   [--subset_type {either,both}] [-a {velvet}] [--keep_dups]
                   [--keep_gene_dups] [--keep_mito] [-v]

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226

[user@biowulf ~]$
Create a batch input file (e.g. StarSeqr.sh). For example:
#!/bin/bash
set -e
module load starseqr
# -p, -r, and -g are required in addition to the FASTQ inputs (see the
# usage above); the reference paths here are placeholders for your own files.
starseqr.py -1 seq1.fastq.gz -2 seq2.fastq.gz \
            -p StarSeqr -r genome.fa -g genes.gtf > StarSeqr.out
Submit this job using the Slurm sbatch command.
sbatch [--cpus-per-task=#] [--mem=#] StarSeqr.sh
Create a swarmfile (e.g. starseqr.swarm). For example:
starseqr.py -1 seq1.fastq.gz -2 seq2.fastq.gz > StarSeqr_1.out
starseqr.py -1 seq3.fastq.gz -2 seq4.fastq.gz > StarSeqr_2.out
starseqr.py -1 seq5.fastq.gz -2 seq6.fastq.gz > StarSeqr_3.out
starseqr.py -1 seq7.fastq.gz -2 seq8.fastq.gz > StarSeqr_4.out
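For a large number of samples, the swarm file can be generated with a short shell loop rather than written by hand. The <sample>_R1/_R2 naming scheme below is an assumption; adjust the glob pattern to your files (only the FASTQ arguments are shown, as in the example above):

```shell
#!/bin/bash
# Sketch: build starseqr.swarm from paired FASTQ files named
# <sample>_R1.fastq.gz / <sample>_R2.fastq.gz (hypothetical naming scheme).
for r1 in *_R1.fastq.gz; do
    r2=${r1/_R1/_R2}              # matching mate file
    sample=${r1%_R1.fastq.gz}     # sample name for the output file
    echo "starseqr.py -1 $r1 -2 $r2 > StarSeqr_${sample}.out"
done > starseqr.swarm
```

Each line of the resulting file becomes one subjob when submitted with swarm.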
Submit this job using the swarm command.
swarm -f starseqr.swarm [-g #] [-t #] --module starseqr
where
-g #                Number of Gigabytes of memory required for each process (1 line in the swarm command file)
-t #                Number of threads/CPUs required for each process (1 line in the swarm command file)
--module starseqr   Loads the starseqr module for each subjob in the swarm