A collection of tools for analyzing CpG/5mC data from PacBio HiFi reads aligned to a reference genome (i.e., an aligned BAM). To use these tools, the HiFi reads must already carry 5mC base modification tags (the MM/ML BAM tags), generated either on-instrument or with primrose. The aligned BAM must also be sorted and indexed.
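If a BAM is not yet coordinate-sorted and indexed, samtools (available as a module) can prepare it. A minimal sketch, assuming a hypothetical input file named aligned.bam:

module load samtools
samtools sort -@ 8 -o aligned.sorted.bam aligned.bam   # coordinate-sort the alignments
samtools index aligned.sorted.bam                      # create the .bai index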
Allocate an interactive session and run the program.
Sample session (user input in bold):
[user@biowulf]$ sinteractive --mem=32G -c8 --gres=lscratch:20
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144 ~]$ module load pb-cpg-tools
[+] Loading singularity 3.8.5-1 on cn0847
[+] Loading pb-cpg-tools 1.1.0 ...

[user@cn3144 ~]$ cd /lscratch/$SLURM_JOB_ID
[user@cn3144 ~]$ cp -r $CPG_PILEUP_MODEL .
[user@cn3144 ~]$ cp $CPG_TEST_DATA/*bam* .
[user@cn3144 ~]$ aligned_bam_to_cpg_scores.py -t $SLURM_CPUS_PER_TASK \
                     -o test -p model \
                     -d /lscratch/$SLURM_JOB_ID/pileup_calling_model \
                     -b HG002.GRCh38.haplotagged.truncated.bam \
                     -f /fdb/igenomes/Homo_sapiens/NCBI/GRCh38/Sequence/WholeGenomeFasta/genome.fa
Chunking regions for multiprocessing.
Running multiprocessing on 6,362 chunks.
100%|███████████████████████████████████| 6362/6362 [05:12<00:00, 20.33it/s]
Finished multiprocessing.
Writing bed files.
Writing bigwig files.
Finished.

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
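Before exiting the session, the results can be sanity-checked. The BED and bigWig outputs all share the -o prefix; the exact suffixes depend on the version and pileup mode, so listing by prefix is the simplest check:

[user@cn3144 ~]$ ls test*
[user@cn3144 ~]$ head -3 test*.bed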
Create a batch input file (e.g. pb-cpg-tools.sh). For example:
#!/bin/bash
set -e
module load pb-cpg-tools
cd /lscratch/$SLURM_JOB_ID
cp -r $CPG_PILEUP_MODEL .
cp $CPG_TEST_DATA/*bam* .
aligned_bam_to_cpg_scores.py -t $SLURM_CPUS_PER_TASK \
    -o test -p model \
    -d /lscratch/$SLURM_JOB_ID/pileup_calling_model \
    -b HG002.GRCh38.haplotagged.truncated.bam \
    -f /fdb/igenomes/Homo_sapiens/NCBI/GRCh38/Sequence/WholeGenomeFasta/genome.fa
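Note that /lscratch/$SLURM_JOB_ID is deleted when the job ends, so results you want to keep should be copied back to permanent storage as a final step in the script. A minimal sketch, assuming a hypothetical destination directory /data/$USER/pb-cpg-out:

mkdir -p /data/$USER/pb-cpg-out
cp test* /data/$USER/pb-cpg-out/   # copy the BED/bigWig outputs off lscratch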
Submit this job using the Slurm sbatch command.
sbatch --mem=32g --cpus-per-task=8 --gres=lscratch:20 pb-cpg-tools.sh
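The job can then be followed with the usual Slurm commands, substituting the job ID that sbatch prints:

squeue -j <jobid>            # state while queued or running
sacct -j <jobid>             # accounting information once finished
tail -f slurm-<jobid>.out    # live program output (default sbatch output file)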
Create a swarmfile (e.g. pb-cpg-tools.swarm). For example:
aligned_bam_to_cpg_scores.py -t $SLURM_CPUS_PER_TASK -o out1 -p model -d calling_model/ -f genome.fa -b in1.bam
aligned_bam_to_cpg_scores.py -t $SLURM_CPUS_PER_TASK -o out2 -p model -d calling_model/ -f genome.fa -b in2.bam
aligned_bam_to_cpg_scores.py -t $SLURM_CPUS_PER_TASK -o out3 -p model -d calling_model/ -f genome.fa -b in3.bam
aligned_bam_to_cpg_scores.py -t $SLURM_CPUS_PER_TASK -o out4 -p model -d calling_model/ -f genome.fa -b in4.bam
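With many input BAMs the swarmfile does not need to be written by hand. A short bash sketch, assuming the indexed BAMs sit in the current directory and each output prefix should mirror its BAM name:

for bam in *.bam; do
    echo "aligned_bam_to_cpg_scores.py -t \$SLURM_CPUS_PER_TASK -o ${bam%.bam} -p model -d calling_model/ -f genome.fa -b $bam"
done > pb-cpg-tools.swarm

The escaped \$ writes a literal $SLURM_CPUS_PER_TASK into the swarmfile, so it is expanded at run time on the allocated node rather than when the file is generated.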
Submit this job using the swarm command.
swarm -f pb-cpg-tools.swarm [-g #] [-t #] [--gres=lscratch:#] --module pb-cpg-tools
where
-g #                    Number of Gigabytes of memory required for each process (1 line in the swarm command file).
-t #                    Number of threads/CPUs required for each process (1 line in the swarm command file).
--gres=lscratch:#       Amount of lscratch in GB allocated for each process (1 line in the swarm command file).
--module pb-cpg-tools   Loads the pb-cpg-tools module for each subjob in the swarm.
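For example, assuming each of the four processes needs roughly the same resources as the single batch job above:

swarm -f pb-cpg-tools.swarm -g 32 -t 8 --gres=lscratch:20 --module pb-cpg-tools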