SHAPEIT on HPC
SHAPEIT is a fast and accurate method for estimating haplotypes (also known as phasing) from genotype or sequencing data.
SHAPEIT has several notable features:
- Linear complexity in the number of SNPs and conditioning haplotypes.
- Whole-chromosome, GWAS-scale datasets can be phased in a single run.
- Phases individuals with any level of relatedness.
- Multi-threaded phasing to tailor computational times to your resources.
- Handles X chromosome phasing.
- Supports phasing with a reference panel (e.g. 1000 Genomes).
- Ideal for pre-phasing imputation together with IMPUTE2.
References:
- SHAPEIT has primarily been developed by Dr Olivier Delaneau through a collaborative project between the research groups of Prof Jean-Francois Zagury at CNAM and Prof Jonathan Marchini at Oxford. Funding for this project has been received from several sources: CNAM, Peptinov, MRC, Leverhulme, The Wellcome Trust.
Documentation
- Module Name: shapeit (see the modules page for more information)
- Multithreaded
- Example files in /usr/local/apps/shapeit/test
Interactive job
Interactive jobs should be used for debugging, graphics, or applications that cannot be run as batch jobs.
Allocate an interactive session and run the program. Sample session:
[user@biowulf]$ sinteractive --cpus-per-task=4
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load shapeit
[user@cn3144 ~]$ phase_common --input wgs/target.unrelated.bcf \
                     --filter-maf 0.001 \
                     --region 1 \
                     --map info/chr1.gmap.gz \
                     --output tmp/target.scaffold.bcf \
                     --thread $SLURM_CPUS_PER_TASK

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
Batch job
Most jobs should be run as batch jobs.
Create a batch input file (e.g. batch.sh). For example:
#!/bin/bash
set -e
module load shapeit
phase_common --input wgs/target.unrelated.bcf \
    --filter-maf 0.001 \
    --region 1 \
    --map info/chr1.gmap.gz \
    --output tmp/target.scaffold.bcf \
    --thread $SLURM_CPUS_PER_TASK
Submit this job using the Slurm sbatch command.
sbatch --cpus-per-task=4 [--mem=#] batch.sh
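Alternatively, the resource requests can be embedded in the script itself with #SBATCH directives, so the job can be submitted as plain `sbatch batch.sh`. A sketch; the memory value is a placeholder to size for your own dataset:

```
#!/bin/bash
#SBATCH --cpus-per-task=4
#SBATCH --mem=8g        # placeholder; adjust to your dataset
set -e
module load shapeit
phase_common --input wgs/target.unrelated.bcf \
    --filter-maf 0.001 \
    --region 1 \
    --map info/chr1.gmap.gz \
    --output tmp/target.scaffold.bcf \
    --thread $SLURM_CPUS_PER_TASK
```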
Swarm of Jobs
A swarm of jobs is an easy way to submit a set of independent commands requiring identical resources.
Create a swarmfile (e.g. job.swarm). For example:
cd dir1; phase_common ... --thread $SLURM_CPUS_PER_TASK
cd dir2; phase_common ... --thread $SLURM_CPUS_PER_TASK
cd dir3; phase_common ... --thread $SLURM_CPUS_PER_TASK
Submit this job using the swarm command.
swarm -f job.swarm [-g #] -t 4 --module shapeit
where
-g #       Number of gigabytes of memory required for each process (1 line in the swarm command file)
-t #       Number of threads/CPUs required for each process (1 line in the swarm command file)
--module   Loads the module for each subjob in the swarm
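For many similar tasks, the swarmfile can be generated with a short loop rather than written by hand. A minimal sketch, assuming one phase_common run per chromosome; the input, map, and output names are hypothetical placeholders modeled on the interactive example above:

```shell
# Generate job.swarm with one phase_common command per chromosome.
# File paths below are assumed placeholders, not real files.
for chr in 1 2 3; do
    echo "phase_common --input wgs/target.unrelated.bcf --region ${chr} --map info/chr${chr}.gmap.gz --output tmp/chr${chr}.scaffold.bcf --thread \$SLURM_CPUS_PER_TASK"
done > job.swarm
```

The resulting file is then submitted as before with `swarm -f job.swarm -t 4 --module shapeit`.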