Pindel can detect breakpoints of large deletions, medium-sized insertions, inversions, tandem duplications, and other structural variants at single-base resolution from next-generation sequencing data. It uses a pattern-growth approach to identify the breakpoints of these variants from paired-end short reads.
Allocate an interactive session and run the program. Sample session:
[user@biowulf]$ sinteractive
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load pindel
[user@cn3144 ~]$ cd /data/$USER/; cp ${PINDEL_TEST_DATA:-none}/* .
[user@cn3144 ~]$ pindel -i simulated_config.txt -f simulated_reference.fa -o outfile -c ALL

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
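The file passed with -i is Pindel's sample configuration file: each line lists the path to a coordinate-sorted, indexed BAM file, the expected insert size of that library, and a sample label, separated by whitespace. A minimal sketch of building such a file for your own data is shown below; the BAM paths, the insert size of 300, and the sample labels are placeholders, not part of the test data.

# Sketch: write a Pindel sample configuration file for two hypothetical BAMs.
# Columns: BAM path, expected insert size, sample label (whitespace-separated).
cat > my_config.txt <<EOF
/data/$USER/sample1.bam 300 sample1
/data/$USER/sample2.bam 300 sample2
EOF

The file would then be passed to Pindel with -i my_config.txt in place of the test configuration above.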
Create a batch input file (e.g. batch.sh). For example:
#!/bin/bash
set -e
module load pindel
pindel -i simulated_config.txt -f simulated_reference.fa -o outfile -c ALL
Submit this job using the Slurm sbatch command.
sbatch [--mem=#] batch.sh
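Recent Pindel versions accept a -T/--number_of_threads option; if yours does, the batch script can be extended to use the CPUs allocated by Slurm. A sketch, assuming the same test data as above:

#!/bin/bash
set -e
module load pindel
# Match Pindel's thread count to the Slurm allocation (falls back to 1 CPU)
pindel -T ${SLURM_CPUS_PER_TASK:-1} \
       -i simulated_config.txt -f simulated_reference.fa -o outfile -c ALL

Such a script would be submitted with, for example, sbatch --cpus-per-task=4 --mem=8g batch.sh.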
Create a swarmfile (e.g. job.swarm). For example:
cd dir1; pindel -i simulated_config.txt -f simulated_reference.fa -o outfile -c ALL
cd dir2; pindel -i simulated_config.txt -f simulated_reference.fa -o outfile -c ALL
cd dir3; pindel -i simulated_config.txt -f simulated_reference.fa -o outfile -c ALL
Submit this job using the swarm command.
swarm -f job.swarm [-g #] --module pindel
where
  -g #       Number of Gigabytes of memory required for each process (1 line in the swarm command file)
  --module   Loads the module for each subjob in the swarm
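When the per-directory commands differ only in the directory name, the swarmfile can be generated with a short loop instead of being written by hand. A minimal sketch, assuming the hypothetical directories dir1 through dir3 from the example above:

# Write one Pindel command per data directory into the swarmfile
for d in dir1 dir2 dir3; do
    echo "cd $d; pindel -i simulated_config.txt -f simulated_reference.fa -o outfile -c ALL"
done > job.swarm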