HLA-LA on Biowulf

HLA*LA carries out HLA typing based on a population reference graph and employs a new linear projection method to align reads to the graph. Previously called HLA*PRG:LA, the application was developed by Alexander Dilthey at NHGRI.


Documentation
HLA-LA on GitHub: https://github.com/DiltheyLab/HLA-LA
Important Notes
Module Name: HLA-LA (the older name HLA-PRG-LA can also be used; see the module load output below)
Example input file available in $HLA_LA_TESTDATA
Allocate local scratch space and run from /lscratch/$SLURM_JOBID, as in the examples below

Interactive job
Interactive jobs should be used for debugging, graphics, or applications that cannot be run as batch jobs.

Allocate an interactive session and run the program. Sample session:

[user@biowulf]$ sinteractive --gres=lscratch:50 --cpus-per-task=8 --mem=60g 
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load HLA-LA
[+] HLA-PRG-LA is called HLA-LA as of Jan 2019. Either name can be used when loading the module
[+] Loading HLA-LA  1.0.1 on cn3144
[..]

[user@cn3144 ~]$ cd /lscratch/$SLURM_JOBID

[user@cn3144 ~]$ cp $HLA_LA_TESTDATA/NA12878.mini.cram  .

[user@cn3144 ~]$ samtools index NA12878.mini.cram

[user@cn3144 ~]$ HLA-LA.pl --BAM NA12878.mini.cram \
      --graph PRG_MHC_GRCh38_withIMGT --sampleID NA12878 \
      --maxThreads 7 --workingDir .
Identified paths:
    samtools_bin: /usr/local/apps/samtools/1.9/bin/samtools
    bwa_bin: /usr/local/apps/bwa/0.7.17/bwa
    java_bin: /usr/bin/java
    picard_sam2fastq_bin: /usr/local/apps/picard/2.9.2/build/libs/picard.jar
    General working directory: /lscratch/43090316
    Sample-specific working directory: /lscratch/43090316/NA12878

Extract reads from 534 regions...
Extract unmapped reads...
Merging...
Indexing...
Extract FASTQ...
        /usr/bin/java -Xmx10g -XX:-UseGCOverheadLimit -jar ...
Now executing:
../bin/HLA-LA --action HLA --maxThreads 4 --sampleID NA12878 ...
Set maxThreads to 4

[...]

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
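
HLA typing results are written below the working directory in a per-sample subdirectory (./NA12878/hla in the session above). Note that /lscratch is deleted when the allocation ends, so copy the results back to /data while still on the compute node. A minimal sketch, assuming the default output layout and that R1_bestguess_G.txt (the usual HLA*LA best-guess report) is produced by your version:

cd /lscratch/$SLURM_JOBID
less NA12878/hla/R1_bestguess_G.txt        # inspect the best-guess HLA calls
cp -r NA12878/hla /data/$USER/NA12878_hla  # save results before exiting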

Batch job
Most jobs should be run as batch jobs.

Create a batch input file (e.g. HLA.sh). For example:

#!/bin/bash
set -e
cd /lscratch/$SLURM_JOBID
module load HLA-LA

cp /data/$USER/myfile.cram .
samtools index myfile.cram

# pass one thread fewer than the allocated CPUs to HLA-LA.pl (cf. --maxThreads 7 with 8 CPUs in the interactive example)
cpus=$(( SLURM_CPUS_PER_TASK - 1 ))
echo "Running on $cpus CPUs"
HLA-LA.pl --BAM myfile.cram --graph PRG_MHC_GRCh38_withIMGT \
    --sampleID myfile --maxThreads $cpus --workingDir .

# copy output from /lscratch back to /data area
cp -r myfile/hla  /data/$USER/

Submit this job using the Slurm sbatch command.

sbatch --cpus-per-task=32 --mem=100g --gres=lscratch:100 --time=1-00:00:00 HLA.sh
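
To type several samples, one option is to parameterize the batch script with a sample name and submit one job per input file. A minimal sketch, assuming CRAM files named <sample>.cram in /data/$USER; the script name HLA_sample.sh and the output directory /data/$USER/hla_results are placeholders:

#!/bin/bash
# HLA_sample.sh - hypothetical per-sample wrapper; the sample name is passed as the first argument
set -e
sample=$1
cd /lscratch/$SLURM_JOBID
module load HLA-LA

cp /data/$USER/${sample}.cram .
samtools index ${sample}.cram

cpus=$(( SLURM_CPUS_PER_TASK - 1 ))
HLA-LA.pl --BAM ${sample}.cram --graph PRG_MHC_GRCh38_withIMGT \
    --sampleID ${sample} --maxThreads $cpus --workingDir .

# copy output from /lscratch back to /data area
mkdir -p /data/$USER/hla_results
cp -r ${sample}/hla /data/$USER/hla_results/${sample}

Submit one job per sample, e.g.:

sbatch --cpus-per-task=32 --mem=100g --gres=lscratch:100 --time=1-00:00:00 HLA_sample.sh NA12878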