alphafold2 on Biowulf

From the official documentation

This package provides an implementation of the inference pipeline of AlphaFold v2.0. This is a completely new model that was entered in CASP14 and published in Nature. For simplicity, we refer to this model as AlphaFold throughout the rest of this document.


2023-02-13: alphafold 2.3.1 becomes the default version.
2023-02-10: alphafold 2.3.1 available.
Some notable changes (See release page for full details):
2023-02-09: In-place database update. This applies to all alphafold versions, with the exception of the parameters, which are version specific.
2022-09-21: the --hhblits_extra_opts option was ported from msa to run_singularity
In a small number of cases hhblits fails to create alignments. This option can be used to fine-tune the hhblits run (see below). Example: run_singularity --hhblits_extra_opts="-maxres 80000 -prepre_smax_thresh 50" ...
2022-07-11: the msa utility script has been disabled
Large-scale use of the msa script may have been implicated in file system problems. The script has been removed until further notice.
2022-06-02: added alphapickle to alphafold 2.2.0.
alphapickle will be included in alphafold installs ≥ 2.2.0
2022-04-22: Version 2.2.0 becomes the default
2022-02-22: Version 2.1.2 becomes the default
2021-11-15: Version 2.1.1 becomes the default
2021-11-14: Database update
Databases were updated in place: pdb mmcif and pdb70 (211110). New databases only used by multimer model: pdb_seqres, uniprot
2021-10-19: Added --use_ptm option to run_singularity
Use the pTM models, which were fine-tuned to produce pTM (predicted TM-score) and predicted aligned error values alongside their structure predictions.
2021-10-18: Adaptation of the alphafold_advanced notebook from ColabFold available in version 2.0.1.
Allows prediction of protein complexes with unmodified alphafold network weights. So far only an interactive notebook is available. See below for more details.
2021-10-01: Version 2.0.0-24-g1d43aaf was tagged as 2.0.1
The modules for 2.0.0-24-g1d43aaf and 2.0.1 point to the same installation since the release was tagged after this revision was installed.
2021-09-21: Version 2.0.0-24-g1d43aaf becomes the default version on Biowulf
The most noticeable change is the inclusion of pLDDT in the PDB B-factor column.
2021-09-16: Database update (in place)
The following databases used by alphafold were updated in place: mgnify (2018_12 to 2019_05), pdb70 (200401 to 210901), pdb mmcif (210717 to 210915, 1969 additional structures), and uniclust30 (2018_08 to 2021_06). Uniref90 and BFD are unchanged.
Important Notes

Interactive job

Allocate an interactive session and run the program. In this example the whole pipeline, including multiple sequence alignment and model prediction, is run with run_singularity on a GPU node.

[user@biowulf]$ sinteractive --mem=60g --cpus-per-task=8 --gres=lscratch:100,gpu:v100x:1
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144]$ module load alphafold2/2.2.0

To predict the structure of a protein already in the PDB without using its experimental structure as a template, set --max_template_date to a date before the release date of the structure. The example below reproduces the T1049 CASP14 target (144 aa). On a V100x this prediction runs for about 1 h.

[user@cn3144]$ run_singularity --helpfull  # use --help for shorter help message
Singularity launch script for Alphafold.
  --[no]benchmark: Run multiple JAX model evaluations to obtain a timing that excludes the compilation time,
    which should be more indicative of the time required for inferencing many proteins.
    (default: 'false')
  --db_preset: <full_dbs|reduced_dbs>: Choose preset MSA database configuration - smaller genetic
    database config (reduced_dbs) or full genetic database config (full_dbs)
    (default: 'full_dbs')
  --[no]dry_run: Print command that would have been executed and exit.
    (default: 'false')
  --[no]enable_gpu_relax: Run relax on GPU if GPU is enabled.
    (default: 'true')
  --fasta_paths: Paths to FASTA files, each containing a prediction target that will be folded one after
    another. If a FASTA file contains multiple sequences, then it will be folded as a multimer. Paths should
    be separated by commas. All FASTA paths must have a unique basename as the basename is used to name the
    output directories for each prediction. (a comma separated list)
  --gpu_devices: Comma separated list of devices to pass to NVIDIA_VISIBLE_DEVICES.
    (default: 'all')
  --max_template_date: Maximum template release date to consider (ISO-8601 format: YYYY-MM-DD). Important
    if folding historical test sets.
  --model_config: Use this file instead of default alphafold/model/
  --model_preset: <monomer|monomer_casp14|monomer_ptm|multimer>: Choose preset model configuration -
    the monomer model, the monomer model with extra ensembling, monomer model with pTM head, or multimer model
    (default: 'monomer')
  --num_multimer_predictions_per_model: How many predictions (each with a different random seed) will be
    generated per model. E.g. if this is 2 and there are 5 models then there will be 10 predictions per
    input. Note: this FLAG only applies if model_preset=multimer
    (default: '5')
    (an integer)
  --output_dir: Path to a directory that will store the results.
  --[no]run_relax: Whether to run the final relaxation step on the predicted models. Turning relax off might
    result in predictions with distracting stereochemical violations but might help in case you are having
    issues with the relaxation stage.
    (default: 'true')
  --[no]use_gpu: Enable NVIDIA runtime to run with GPUs.
    (default: 'true')
  --[no]use_precomputed_msas: Whether to read MSAs that have been written to disk instead of running the
    MSA tools. The MSA files are looked up in the output directory, so it must stay the same between multiple
    runs that are to reuse the MSAs. WARNING: This will not check if the sequence, database or configuration
    have changed.
    (default: 'false')
  --[no]alsologtostderr: also log to stderr?
    (default: 'false')
  --log_dir: directory to write logfiles into
    (default: '')
  --logger_levels: Specify log level of loggers. The format is a CSV list of `name:level`. Where `name` is the
    logger name used with `logging.getLogger()`, and `level` is a level name (INFO, DEBUG, etc).
    (default: '')
  --[no]logtostderr: Should only log to stderr?
    (default: 'false')
  --[no]showprefixforinfo: If False, do not prepend prefix to info messages when it's logged to stderr, --verbosity
    is set to INFO level, and python logging is used.
    (default: 'true')
  --stderrthreshold: log messages at this level, or more severe, to stderr in addition to the logfile.
    Possible values are 'debug', 'info', 'warning', 'error', and 'fatal'.  Obsoletes --alsologtostderr.
    Using --alsologtostderr cancels the effect of this flag. Please also note that this flag is
    subject to --verbosity and requires logfile not be stderr.
    (default: 'fatal')
  -v,--verbosity: Logging verbosity level. Messages logged at this level or lower will be included. Set to 1
    for debug logging. If the flag was not set or supplied, the value will be changed from the default of
    -1 (warning) to 0 (info) after flags are parsed.
    (default: '-1')
    (an integer)

[user@cn3144]$ run_singularity \
    --model_preset=monomer_casp14 \
    --fasta_paths=$ALPHAFOLD2_TEST_DATA/T1049.fasta \
    --max_template_date=2020-05-14
[user@cn3144]$ tree T1049
├── [user   1.1M]  features.pkl
├── [user   4.0K]  msas
│   ├── [user    33K]  bfd_uniclust_hits.a3m
│   ├── [user    18K]  mgnify_hits.sto
│   └── [user   121K]  uniref90_hits.sto
├── [user   170K]  ranked_0.pdb               # <-- shown below
├── [user   170K]  ranked_1.pdb
├── [user   170K]  ranked_2.pdb
├── [user   171K]  ranked_3.pdb
├── [user   170K]  ranked_4.pdb
├── [user    330]  ranking_debug.json
├── [user   170K]  relaxed_model_1.pdb
├── [user   170K]  relaxed_model_2.pdb
├── [user   170K]  relaxed_model_3.pdb
├── [user   170K]  relaxed_model_4.pdb
├── [user   171K]  relaxed_model_5.pdb
├── [user    11M]  result_model_1.pkl
├── [user    11M]  result_model_2.pkl
├── [user    11M]  result_model_3.pkl
├── [user    11M]  result_model_4.pkl
├── [user    11M]  result_model_5.pkl
├── [user    771]  timings.json
├── [user    87K]  unrelaxed_model_1.pdb
├── [user    87K]  unrelaxed_model_2.pdb
├── [user    87K]  unrelaxed_model_3.pdb
├── [user    87K]  unrelaxed_model_4.pdb
└── [user    87K]  unrelaxed_model_5.pdb

The processes prior to model inference on the GPU consumed up to 40 GB of memory for this protein. Memory requirements will vary with protein size.

[Figure: alignment of the T1049 predicted and experimental structures]
Figure 1. The highest confidence prediction (ranked_0.pdb, blue) aligned with the experimental structure of this protein (6Y4F, green).
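
Since version 2.0.0-24-g1d43aaf the per-residue pLDDT is also written into the B-factor column of each output PDB file (see the changelog above), so the confidence can be recovered from the PDB files alone. A minimal sketch using fixed-width PDB columns; the ATOM record below is a fabricated example, not a line from a real model:

```python
# AlphaFold stores per-residue pLDDT in the B-factor column of its
# PDB output. Fabricated example ATOM record:
atom_line = (
    "ATOM      1  N   MET A   1      11.104   6.134  -6.504  1.00 91.84           N"
)

def plddt_from_atom_line(line: str) -> float:
    """The temperature-factor (B-factor) field occupies columns 61-66
    (1-based) of a fixed-width PDB ATOM record."""
    return float(line[60:66])

print(plddt_from_atom_line(atom_line))  # 91.84
```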

Note that the model .pkl files, which unlike the .pdb files are not re-ordered into ranked_ files, contain a lot of information about the models. These are Python pickle files, and Python can be used to explore and visualize them. For example:

[user@cn3144]$ conda activate my_py39 # needs jupyter and the packages imported below
[user@cn3144]$ cd T1049
[user@cn3144]$ jupyter console
In [1]: import pickle
In [2]: import json
In [3]: import pprint
In [4]: import jax
In [5]: pprint.pprint(json.load(open("ranking_debug.json", encoding="ascii")))
{'order': ['model_2_pred_0',
           'model_3_pred_0',
           'model_1_pred_0',
           'model_5_pred_0',
           'model_4_pred_0'],
 'plddts': {'model_1_pred_0': 88.44386138278787,
            'model_2_pred_0': 91.83564104655056,
            'model_3_pred_0': 88.49961929441032,
            'model_4_pred_0': 86.73066329994059,
            'model_5_pred_0': 87.4009420322368}}
### so model 2 is the best model in this run and corresponds to ranked_0.pdb
In [6]: best_model = pickle.load(open("result_model_2_pred_0.pkl", "rb"))
In [7]: list(best_model.keys())
In [8]: best_model['plddt'].shape
Out[8]: (141,)
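
The ranking step can be reproduced in a few lines of plain Python. The pLDDT values below are the ones from ranking_debug.json in the run above; the top-level "plddts" mapping mirrors that file's layout:

```python
import json

# Mirror of the "plddts" mapping from ranking_debug.json for this run.
ranking_json = """
{"plddts": {"model_1_pred_0": 88.44386138278787,
            "model_2_pred_0": 91.83564104655056,
            "model_3_pred_0": 88.49961929441032,
            "model_4_pred_0": 86.73066329994059,
            "model_5_pred_0": 87.4009420322368}}
"""
ranking = json.loads(ranking_json)

def ranked_order(plddts: dict) -> list:
    """Sort model names best-first by mean pLDDT; index i in the
    returned list corresponds to ranked_{i}.pdb."""
    return sorted(plddts, key=plddts.get, reverse=True)

order = ranked_order(ranking["plddts"])
print(order[0])  # model_2_pred_0
```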

The predicted aligned error (PAE) is only produced by the monomer_ptm and multimer models. Since version 2.2.0 we also include alphapickle with alphafold to create plots, CSV files, and Chimera attribute files for each ranked model. By default output is saved to the same folder. See -h for more options.

[user@cn3144]$ alphapickle -od T1049

If the models above had been created with the monomer_ptm preset, the following two plots would be generated for each model:
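
As a rough illustration of the kind of per-model CSV output alphapickle produces, here is a sketch that serializes a toy PAE matrix. For a real run the matrix is N x N for an N-residue target and is stored under the 'predicted_aligned_error' key of the result pickle (monomer_ptm and multimer presets only); the values below are made up:

```python
import csv
import io

# Toy 3x3 predicted aligned error (PAE) matrix, in Angstroms.
# Real matrices come from result_model_*.pkl['predicted_aligned_error'].
pae = [
    [0.5, 3.2, 7.1],
    [3.0, 0.4, 4.8],
    [7.5, 4.9, 0.6],
]

def pae_to_csv(pae):
    """Serialize a PAE matrix to CSV text: a header row of aligned
    residue numbers, then one row per scored residue."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["res"] + [i + 1 for i in range(len(pae))])
    for i, row in enumerate(pae, start=1):
        writer.writerow([i] + row)
    return buf.getvalue()

print(pae_to_csv(pae).splitlines()[0])  # res,1,2,3
```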

The next example shows how to run a multimer model (available from version 2.1.1). The example used is a recently published PI3K structure.

[user@cn3144]$ cat $ALPHAFOLD2_TEST_DATA/pi3k.fa
>sp|P27986|P85A_HUMAN Phosphatidylinositol 3-kinase regulatory subunit alpha OS=Homo sapiens OX=9606 GN=PIK3R1 PE=1 SV=2
>sp|P42336|PK3CA_HUMAN Phosphatidylinositol 4,5-bisphosphate 3-kinase catalytic subunit alpha isoform OS=Homo sapiens OX=9606 GN=PIK3CA PE=1 SV=2

[user@cn3144]$ run_singularity \
    --fasta_paths=$ALPHAFOLD2_TEST_DATA/pi3k.fa \
    --max_template_date=2021-11-01 \
    --model_preset=multimer \
    --num_multimer_predictions_per_model=2
[user@cn3144]$ exit
[Figure: animation of the PI3K heterodimer model]
Figure 2. Best alphafold model for phosphoinositide 3-kinase alpha (PI3Kα) obtained in the example above. The two subunits are shown in blue (catalytic subunit, p110) and green (regulatory subunit, p85), respectively, and are shaded by pLDDT from light (low) to dark (high). Comparison with the cryo-EM structure (7MYN) showed close agreement and some high-confidence predictions for regions that were not resolved in the published structure.

To get an idea of alphafold2 runtimes, we first ran 4 individual proteins on all our available GPU types. The proteins ranged in size from 144 aa to 622 aa. Note that for all but the smallest protein, K80 GPUs were not suitable and should not be considered for alphafold2. These tests were run with default settings except for a fixed --max_template_date=2021-07-31.

[Figure: alphafold2 benchmark plots]
Figure 3. Alphafold runtimes for 4 proteins with 8 CPUs and 60 GB of memory. T1049: CASP14 target, 144 aa. D10R: Vaccinia virus WR protein D10R, 248 aa. A11R: Vaccinia virus WR protein A11R, 318 aa. T1036s1: CASP14 target, 622 aa. Note that (1) K80 GPUs are not suitable; (2) 2 GPUs for a single protein do not reduce runtime; (3) P100s are about as good as the more modern GPUs for these examples; (4) runtimes were quite variable; (5) the degree to which the 8 CPUs were overloaded depended on protein size, and overloading appeared to be most severe during the relaxation phase.

The runtime to run all 4 proteins in a single job on a V100x GPU with 8 CPUs and 60 GB of memory was 3.2 h, slightly less than the combined individual runtimes of the 4 proteins run separately. For this job we also tried increasing the number of CPUs to 16 or the number of GPUs to 2, neither of which appeared to shorten the runtime.

The resource usage profile of the combined alphafold2 pipeline in our testing thus far is suboptimal and variable. The pipeline steps should probably be segregated into individual jobs with properly matched resources. We hope to optimize this in the future.

Batch job

Note: when running multiple alphafold predictions, please use the msa script (available for alphafold >=2.1.2) to precompute the multiple sequence alignments on CPU nodes, and use GPUs only for the model predictions. This is shown in the example below as 2 batch jobs.

#!/bin/bash
module load alphafold2/2.2.0
run_singularity \
    --model_preset=monomer \
    --fasta_paths=$ALPHAFOLD2_TEST_DATA/T1049.fasta \
    --max_template_date=2020-05-14

Submit this job using the Slurm sbatch command.

sbatch --cpus-per-task=6 --partition=gpu --mem=40g --gres=gpu:v100x:1,lscratch:100
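
One way to chain the CPU and GPU stages is a Slurm job dependency. In the sketch below, msa.sh and predict.sh are placeholders for your own CPU-stage and GPU-stage batch scripts (the GPU stage could, for example, reuse the alignments via --use_precomputed_msas):

```shell
#!/bin/bash
# Two-stage submission sketch: a CPU-only job precomputes the MSAs;
# the GPU job starts only after the first job completes successfully.
# msa.sh and predict.sh are hypothetical script names.
msa_job=$(sbatch --parsable --cpus-per-task=8 --mem=60g \
                 --gres=lscratch:100 msa.sh)
sbatch --dependency=afterok:${msa_job} \
       --partition=gpu --cpus-per-task=6 --mem=40g \
       --gres=gpu:v100x:1,lscratch:100 predict.sh
```
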
ColabFold alphafold2_advanced notebook

This is an adaptation of the Alphafold2_advanced ColabFold notebook for Biowulf. Currently only the notebook is available; a batch wrapper is in development.


The jupyter function will use an existing tunnel if one has been set up with the --tunnel option to sinteractive. If there is no pre-existing tunnel, it will attempt to set one up itself. That means it is possible to start the jupyter server in a batch job and obtain the command to set up the tunnel from your computer to the login node from the batch output. See our tunneling documentation for more information.

Example use

[user@biowulf ~]$ sinteractive --gres=lscratch:20,gpu:v100x:1 -c16 --mem=60g --tunnel
salloc.exe: Pending job allocation 25316671
salloc.exe: job 25316671 queued and waiting for resources
salloc.exe: job 25316671 has been allocated resources
salloc.exe: Granted job allocation 25316671
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3299 are ready for job
srun: error: x11: no local DISPLAY defined, skipping
error: unable to open file /tmp/slurm-spank-x11.25316671.0
slurmstepd: error: x11: unable to read DISPLAY value

Created 1 generic SSH tunnel(s) from this compute node to
biowulf for your use at port numbers defined
in the $PORTn ($PORT1, ...) environment variables.

Please create a SSH tunnel from your workstation to these ports on biowulf.
On Linux/MacOS, open a terminal and run:

    ssh  -L 46243:localhost:46243

    For Windows instructions, see

[user@cn3144]$ module load alphafold2/2.0.1
[user@cn3144]$ af_colabfold_advanced jupyter name_of_notebook_copy.ipynb

The af_colabfold_advanced script creates a copy of the notebook in your current directory that you can work with, and starts up the jupyter server. Once the provided ssh command has been used to establish a tunnel to the login node, exactly as with regular jupyter notebooks, the notebook can be visited at the address printed by jupyter during startup.