Cannoli provides Big Data Genomics ADAM Pipe API wrappers for bioinformatics tools.
Allocate an interactive session and run the program. Sample session:
[user@biowulf]$ sinteractive --mem 20g -c 8
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load cannoli

[user@cn3144 ~]$ cannoli-submit \
    -- \
    bwa \
    sample.unaligned.fragments.adam \
    sample.bwa.hg38.alignments.adam \
    sample \
    -index hg38.fa \
    -sequence_dictionary hg38.dict \
    -fragments \
    -add_indices

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
Create a batch input file (e.g. cannoli.sh). For example:
#!/bin/bash
module load cannoli
cannoli-submit -- bwa sample.unaligned.fragments.adam sample.bwa.hg38.alignments.adam sample \
    -index hg38.fa -sequence_dictionary hg38.dict -fragments -add_indices
Submit this job using the Slurm sbatch command.
sbatch --cpus-per-task=8 --mem=16g cannoli.sh
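When aligning several samples with separate batch jobs, the script above can be generated per sample before submission. A minimal sketch, assuming the per-sample ADAM input files follow the naming pattern shown earlier (the sample name here is hypothetical):

```shell
# Write a per-sample cannoli batch script (sample name and file paths are illustrative)
sample=sampleA
cat > cannoli_${sample}.sh <<EOF
#!/bin/bash
module load cannoli
cannoli-submit -- bwa ${sample}.unaligned.fragments.adam ${sample}.bwa.hg38.alignments.adam ${sample} \\
    -index hg38.fa -sequence_dictionary hg38.dict -fragments -add_indices
EOF
```

The generated script can then be submitted with the same sbatch command, e.g. `sbatch --cpus-per-task=8 --mem=16g cannoli_sampleA.sh`.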
Create a swarmfile (e.g. cannoli.swarm). For example:
cannoli-submit -- bwa sample1.adam [...]
cannoli-submit -- bwa sample2.adam [...]
cannoli-submit -- bwa sample3.adam [...]
Submit this job using the swarm command.
swarm -f cannoli.swarm -g 20 -t 8 --module cannoli

where:

-g # | Number of Gigabytes of memory required for each process (1 line in the swarm command file)
-t # | Number of threads/CPUs required for each process (1 line in the swarm command file)
--module cannoli | Loads the cannoli module for each subjob in the swarm
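For a large number of samples, the swarmfile itself can be generated with a shell loop rather than written by hand. A minimal sketch, assuming the per-sample file naming convention from the earlier examples (the sample names are illustrative):

```shell
# Emit one cannoli-submit line per sample into the swarmfile
# (sample names, file paths, and reference files are illustrative)
for sample in sample1 sample2 sample3; do
    echo "cannoli-submit -- bwa ${sample}.unaligned.fragments.adam ${sample}.bwa.hg38.alignments.adam ${sample} -index hg38.fa -sequence_dictionary hg38.dict -fragments -add_indices"
done > cannoli.swarm
```

The resulting cannoli.swarm file is then submitted with the swarm command shown above.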