A collection of tools for the analysis of CpG/5mC data from PacBio HiFi reads aligned to a reference genome (i.e., an aligned BAM). To use these tools, the HiFi reads must already contain 5mC base modification tags, generated on-instrument or with primrose. The aligned BAM must also be sorted and indexed.
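If the aligned BAM is not yet coordinate-sorted and indexed, this can be done with samtools before running pb-cpg-tools. A minimal sketch, assuming a hypothetical input file named aligned.bam:

module load samtools
samtools sort -@ 4 -o aligned.sorted.bam aligned.bam    # coordinate-sort the alignments
samtools index aligned.sorted.bam                       # write the .bai index required alongside the BAM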
Allocate an interactive session and run the program.
Sample session (user input in bold):
[user@biowulf]$ sinteractive --mem=32G -c8 --gres=lscratch:20
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load pb-cpg-tools
[user@cn3144 ~]$ cd /lscratch/$SLURM_JOB_ID
[user@cn3144 ~]$ cp $CPG_TEST_DATA/*bam* .
[user@cn3144 ~]$ aligned_bam_to_cpg_scores \
    --bam HG002.GRCh38.haplotagged.truncated.bam \
    --output-prefix test \
    --model $CPG_PILEUP_MODEL/pileup_calling_model.v1.tflite \
    --threads $SLURM_CPUS_PER_TASK
[2023-07-24][14:31:38][aligned_bam_to_cpg_scores][INFO] Starting aligned_bam_to_cpg_scores
[2023-07-24][14:31:38][aligned_bam_to_cpg_scores][INFO] cmdline: aligned_bam_to_cpg_scores --bam HG002.GRCh38.haplotagged.truncated.bam --output-prefix test --model /usr/local/apps/pb-cpg-tools/2.3.1/models/pileup_calling_model.v1.tflite --threads 8
[2023-07-24][14:31:38][aligned_bam_to_cpg_scores][INFO] Running on 8 threads
[2023-07-24][14:31:38][aligned_bam_to_cpg_scores][INFO] Processing alignment file 'HG002.GRCh38.haplotagged.truncated.bam'
[2023-07-24][14:32:16][aligned_bam_to_cpg_scores][INFO] Finished processing alignment files.
[2023-07-24][14:32:16][aligned_bam_to_cpg_scores][INFO] Writing hap2 site methylation to bed file: 'test.hap2.bed'
[2023-07-24][14:32:16][aligned_bam_to_cpg_scores][INFO] Writing hap1 site methylation to bed file: 'test.hap1.bed'
[2023-07-24][14:32:16][aligned_bam_to_cpg_scores][INFO] Writing combined site methylation to bed file: 'test.combined.bed'
[2023-07-24][14:32:16][aligned_bam_to_cpg_scores][INFO] Writing combined site methylation to bigwig file: 'test.combined.bw'
[2023-07-24][14:32:16][aligned_bam_to_cpg_scores][INFO] Writing hap2 site methylation to bigwig file: 'test.hap2.bw'
[2023-07-24][14:32:16][aligned_bam_to_cpg_scores][INFO] Writing hap1 site methylation to bigwig file: 'test.hap1.bw'
[2023-07-24][14:32:17][aligned_bam_to_cpg_scores][INFO] aligned_bam_to_cpg_scores completed. Total Runtime: 00:00:39.426

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
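The run writes per-haplotype and combined site-methylation tracks in BED and bigWig format using the chosen output prefix. Before exiting the interactive session, the outputs can be sanity-checked; a quick sketch, assuming the test prefix used above:

ls -lh test.*.bed test.*.bw    # confirm the six output files listed in the log were written
head test.combined.bed         # first few records of the combined site-methylation track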
Create a batch input file (e.g. pb-cpg-tools.sh). For example:
#!/bin/bash
set -e

module load pb-cpg-tools
cd /lscratch/$SLURM_JOB_ID
cp $CPG_TEST_DATA/*bam* .

aligned_bam_to_cpg_scores \
    --bam HG002.GRCh38.haplotagged.truncated.bam \
    --output-prefix test \
    --model $CPG_PILEUP_MODEL/pileup_calling_model.v1.tflite \
    --threads $SLURM_CPUS_PER_TASK
Submit this job using the Slurm sbatch command.
sbatch --mem=32g --cpus-per-task=8 --gres=lscratch:20 pb-cpg-tools.sh
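Once submitted, the job can be monitored with standard Slurm commands, for example:

squeue -u $USER    # show the state of your pending and running jobs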
Create a swarmfile (e.g. pb-cpg-tools.swarm). For example:
aligned_bam_to_cpg_scores --bam input1.bam --output-prefix out1 --model v1.tflite --threads $SLURM_CPUS_PER_TASK
aligned_bam_to_cpg_scores --bam input2.bam --output-prefix out2 --model v1.tflite --threads $SLURM_CPUS_PER_TASK
aligned_bam_to_cpg_scores --bam input3.bam --output-prefix out3 --model v1.tflite --threads $SLURM_CPUS_PER_TASK
aligned_bam_to_cpg_scores --bam input4.bam --output-prefix out4 --model v1.tflite --threads $SLURM_CPUS_PER_TASK
Submit this job using the swarm command.
swarm -f pb-cpg-tools.swarm [-g #] [-t #] [--gres=lscratch:#] --module pb-cpg-tools

where
-g #                  | Number of Gigabytes of memory required for each process (1 line in the swarm command file)
-t #                  | Number of threads/CPUs required for each process (1 line in the swarm command file)
--gres=lscratch:#     | lscratch amount in GB allocated for each process (1 line in the swarm command file)
--module pb-cpg-tools | Loads the pb-cpg-tools module for each subjob in the swarm
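For example, to give each swarm command the same resources used in the interactive session above (values are illustrative; adjust to the size of your BAMs):

swarm -f pb-cpg-tools.swarm -g 32 -t 8 --gres=lscratch:20 --module pb-cpg-tools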