SQANTI3 on Biowulf

Quality control of long-read transcriptomes.

SQANTI3 is the newest version of the SQANTI tool; it merges features from SQANTI and SQANTI2 and adds new functionality. SQANTI3 is the first module of the Functional IsoTranscriptomics (FIT) framework, which also includes IsoAnnot and tappAS.

Documentation
SQANTI3 on GitHub: https://github.com/ConesaLab/SQANTI3

Important Notes
Module Name: sqanti3 (see the modules page for more information)
Example files are available under $SQANTI3_HOME/example

Interactive job
Interactive jobs should be used for debugging, graphics, or applications that cannot be run as batch jobs.

Allocate an interactive session and run the program.
Sample session following the tutorial (user input in bold):

[user@biowulf]$ sinteractive --cpus-per-task 8
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load sqanti3

[user@cn3144 ~]$ ln -s $SQANTI3_HOME/example . # the program expects to find the example input files at this relative path

[user@cn3144 ~]$ sqanti3_qc.py \
    example/UHR_chr22.gtf \
    example/gencode.v38.basic_chr22.gtf \
    example/GRCh38.p13_chr22.fasta \
    --CAGE_peak $SQANTI3_HOME/data/ref_TSS_annotation/human.refTSS_v3.1.hg38.bed \
    --polyA_motif_list $SQANTI3_HOME/data/polyA_motifs/mouse_and_human.polyA_motif.txt \
    -o UHR_chr22 \
    -fl example/UHR_abundance.tsv \
    --short_reads example/UHR_chr22_short_reads.fofn \
    --cpus $SLURM_CPUS_PER_TASK \
    --report both

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
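
The --short_reads argument takes a file-of-file-names (fofn). A minimal sketch, assuming paired-end FASTQ data with one sample per line and mate files separated by a space (the paths below are hypothetical placeholders):

/data/$USER/reads/sampleA_R1.fastq.gz /data/$USER/reads/sampleA_R2.fastq.gz
/data/$USER/reads/sampleB_R1.fastq.gz /data/$USER/reads/sampleB_R2.fastq.gz

When the run completes, the main outputs typically carry the -o prefix, e.g. UHR_chr22_classification.txt, UHR_chr22_junctions.txt, UHR_chr22_corrected.fasta/.gtf, and the HTML/PDF report requested with --report both.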

Batch job
Most jobs should be run as batch jobs.

Create a batch input file (e.g. sqanti3.sh). For example:

#!/bin/bash
set -e
module load sqanti3
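
# Note: as in the interactive session above, the example inputs must be
# reachable from the job's working directory; if needed, create the same
# symlink first (uncomment the line below):
# ln -s $SQANTI3_HOME/example .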

 sqanti3_qc.py \
   example/UHR_chr22.gtf \
   example/gencode.v38.basic_chr22.gtf \
   example/GRCh38.p13_chr22.fasta \
   --CAGE_peak $SQANTI3_HOME/data/ref_TSS_annotation/human.refTSS_v3.1.hg38.bed \
   --polyA_motif_list $SQANTI3_HOME/data/polyA_motifs/mouse_and_human.polyA_motif.txt \
   -o UHR_chr22 \
   -fl example/UHR_abundance.tsv \
   --short_reads example/UHR_chr22_short_reads.fofn \
   --cpus $SLURM_CPUS_PER_TASK \
   --report both

Submit this job using the Slurm sbatch command.

sbatch [--cpus-per-task=#] [--mem=#] sqanti3.sh
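
For example, to run the tutorial data set with 8 CPUs and 32 GB of memory (illustrative values; size them to your own input):

sbatch --cpus-per-task=8 --mem=32g sqanti3.sh
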
Swarm of jobs
A swarm of jobs is an easy way to submit a set of independent commands requiring identical resources.

Create a swarmfile (e.g. sqanti3.swarm). For example:

sqanti3_qc.py isoforms1.gtf ref_annotation.gtf ref_genome.fa -o sample1 -d out1 -t $SLURM_CPUS_PER_TASK
sqanti3_qc.py isoforms2.gtf ref_annotation.gtf ref_genome.fa -o sample2 -d out2 -t $SLURM_CPUS_PER_TASK
sqanti3_qc.py isoforms3.gtf ref_annotation.gtf ref_genome.fa -o sample3 -d out3 -t $SLURM_CPUS_PER_TASK
sqanti3_qc.py isoforms4.gtf ref_annotation.gtf ref_genome.fa -o sample4 -d out4 -t $SLURM_CPUS_PER_TASK

Submit this job using the swarm command.

swarm -f sqanti3.swarm [-g #] [-t #] --module sqanti3
where
-g # Number of Gigabytes of memory required for each process (1 line in the swarm command file)
-t # Number of threads/CPUs required for each process (1 line in the swarm command file)
--module sqanti3 Loads the SQANTI3 module for each subjob in the swarm
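
For example, allocating 8 CPUs and 16 GB of memory per subjob (illustrative values; adjust to your data):

swarm -f sqanti3.swarm -g 16 -t 8 --module sqanti3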