Biowulf High Performance Computing at the NIH
alphafold2 on Biowulf

From the official documentation

This package provides an implementation of the inference pipeline of AlphaFold v2.0. This is a completely new model that was entered in CASP14 and published in Nature. For simplicity, we refer to this model as AlphaFold throughout the rest of this document.


  • Jumper et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021. PubMed  |  Journal
2022-04-22: Version 2.2.0 becomes the default
  • See official announcement
  • Switched to new AlphaFold-Multimer models with reduced numbers of clashes on average and slightly increased accuracy.
  • Removed the --is_prokaryote_list option from run_singularity and msa, as the prokaryotic pairing algorithm did not actually improve accuracy on average
  • Added the --num_multimer_predictions_per_model=N option to run_singularity. Runs N predictions per multimer model, each with a different seed. Defaults to 5, which will increase runtime
  • Added the --model_config option to run_singularity. This allows users to supply a customized alphafold/model/ configuration for tuning certain parameters
2022-02-22: Version 2.1.2 becomes the default
  • No changes to network weights
  • Relaxation now defaults to using GPU
  • Added options for controlling relaxation: --[no]run_relax and --[no]enable_gpu_relax
  • New script to generate the multiple sequence alignments (MSAs) only (msa). This is highly recommended as MSA generation consumes about 60% of runtime and does not make use of GPU. The workflow is to run msa generation on CPU nodes and then run model prediction on a GPU node with run_singularity --use_precomputed_msas .... This script also supports adding extra options to hhblits or overriding some alphafold defaults with option --hhblits_extra_opts. Note that this script cannot run on x2650 nodes because it depends on an AVX2 hhblits build.
2021-11-15: Version 2.1.1 becomes the default
  • the --preset option was renamed to --model_preset
  • the --db_preset=reduced_dbs option is now supported for all alphafold versions
  • The unofficial --use_ptm option became obsolete with the introduction of the official --model_preset option and was removed
  • This version uses the new model parameters (2021-10-27) released with 2.1.1. This includes the parameters for the multimer model. The 2.0.X modules will continue to use the previous parameters.
2021-11-14: Database update
Databases were updated in place: pdb mmcif and pdb70 (211110). New databases only used by multimer model: pdb_seqres, uniprot
2021-10-19: Added --use_ptm option to run_singularity
Use the pTM models, which were fine-tuned to produce pTM (predicted TM-score) and predicted aligned error values alongside their structure predictions.
2021-10-18: Adaptation of the alphafold_advanced notebook from ColabFold available in version 2.0.1.
Allows prediction of protein complexes with unmodified alphafold network weights. So far only an interactive notebook is available. See below for more details
2021-10-01: Version 2.0.0-24-g1d43aaf was tagged as 2.0.1
The modules for 2.0.0-24-g1d43aaf and 2.0.1 point to the same installation since the release was tagged after this revision was installed.
2021-09-21: Version 2.0.0-24-g1d43aaf becomes the default version on biowulf
The most noticeable change should be the inclusion of pLDDT values in the PDB B-factor column.
2021-09-16: Database update (in place)
The following databases used by alphafold were updated in place: mgnify (2018_12 to 2019_05), pdb70 (200401 to 210901), pdb mmcif (210717 to 210915, 1969 additional structures), and uniclust30 (2018_08 to 2021_06). Uniref90 and BFD are unchanged.
Important Notes

Interactive job

Allocate an interactive session and run the program. In this example the whole pipeline, including multiple sequence alignment and model prediction, is run with run_singularity on a GPU node. This is not the most efficient approach: starting with version 2.1.2 we provide a separate script that performs only the alignment and can be run on CPU nodes, reducing GPU time by more than 2-fold.

[user@biowulf]$ sinteractive --mem=60g --cpus-per-task=8 --gres=lscratch:100,gpu:v100x:1
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144]$ module load alphafold2/2.2.0

To predict the structure of a protein already in the PDB without using its experimental structure as a template, set --max_template_date to a date before the release date of the structure. The following example reproduces the T1049 CASP14 target (144 aa); on a V100x this prediction runs for about 1 h.

[user@cn3144]$ run_singularity --helpfull  # use --help for shorter help message
Singularity launch script for Alphafold.
  --[no]benchmark: Run multiple JAX model evaluations to obtain a timing that excludes the compilation time,
    which should be more indicative of the time required for inferencing many proteins.
    (default: 'false')
  --db_preset: <full_dbs|reduced_dbs>: Choose preset MSA database configuration - smaller genetic
    database config (reduced_dbs) or full genetic database config (full_dbs)
    (default: 'full_dbs')
  --[no]dry_run: Print command that would have been executed and exit.
    (default: 'false')
  --[no]enable_gpu_relax: Run relax on GPU if GPU is enabled.
    (default: 'true')
  --fasta_paths: Paths to FASTA files, each containing a prediction target that will be folded one after
    another. If a FASTA file contains multiple sequences, then it will be folded as a multimer. Paths should
    be separated by commas. All FASTA paths must have a unique basename as the basename is used to name the
    output directories for each prediction. (a comma separated list)
  --gpu_devices: Comma separated list of devices to pass to NVIDIA_VISIBLE_DEVICES.
    (default: 'all')
  --max_template_date: Maximum template release date to consider (ISO-8601 format: YYYY-MM-DD). Important
    if folding historical test sets.
  --model_config: Use this file instead of default alphafold/model/
  --model_preset: <monomer|monomer_casp14|monomer_ptm|multimer>: Choose preset model configuration -
    the monomer model, the monomer model with extra ensembling, monomer model with pTM head, or multimer model
    (default: 'monomer')
  --num_multimer_predictions_per_model: How many predictions (each with a different random seed) will be
    generated per model. E.g. if this is 2 and there are 5 models then there will be 10 predictions per
    input. Note: this FLAG only applies if model_preset=multimer
    (default: '5')
    (an integer)
  --output_dir: Path to a directory that will store the results.
  --[no]run_relax: Whether to run the final relaxation step on the predicted models. Turning relax off might
    result in predictions with distracting stereochemical violations but might help in case you are having
    issues with the relaxation stage.
    (default: 'true')
  --[no]use_gpu: Enable NVIDIA runtime to run with GPUs.
    (default: 'true')
  --[no]use_precomputed_msas: Whether to read MSAs that have been written to disk instead of running the
    MSA tools. The MSA files are looked up in the output directory, so it must stay the same between multiple
    runs that are to reuse the MSAs. WARNING: This will not check if the sequence, database or configuration
    have changed.
    (default: 'false')
  --[no]alsologtostderr: also log to stderr?
    (default: 'false')
  --log_dir: directory to write logfiles into
    (default: '')
  --logger_levels: Specify log level of loggers. The format is a CSV list of `name:level`. Where `name` is the
    logger name used with `logging.getLogger()`, and `level` is a level name  (INFO, DEBUG, etc). e.g.
    (default: '')
  --[no]logtostderr: Should only log to stderr?
    (default: 'false')
  --[no]showprefixforinfo: If False, do not prepend prefix to info messages when it's logged to stderr, --verbosity
    is set to INFO level, and python logging is used.
    (default: 'true')
  --stderrthreshold: log messages at this level, or more severe, to stderr in addition to the logfile.
    Possible values are 'debug', 'info', 'warning', 'error', and 'fatal'.  Obsoletes --alsologtostderr.
    Using --alsologtostderr cancels the effect of this flag. Please also note that this flag is
    subject to --verbosity and requires logfile not be stderr.
    (default: 'fatal')
  -v,--verbosity: Logging verbosity level. Messages logged at this level or lower will be included. Set to 1
    for debug logging. If the flag was not set or supplied, the value will be changed from the default of
    -1 (warning) to 0 (info) after flags are parsed.
    (default: '-1')
    (an integer)

[user@cn3144]$ run_singularity \
    --model_preset=monomer_casp14 \
    --fasta_paths=$ALPHAFOLD2_TEST_DATA/T1049.fasta \
    --max_template_date=2020-05-14
[user@cn3144]$ tree T1049
├── [user   1.1M]  features.pkl
├── [user   4.0K]  msas
│   ├── [user    33K]  bfd_uniclust_hits.a3m
│   ├── [user    18K]  mgnify_hits.sto
│   └── [user   121K]  uniref90_hits.sto
├── [user   170K]  ranked_0.pdb               # <-- shown below
├── [user   170K]  ranked_1.pdb
├── [user   170K]  ranked_2.pdb
├── [user   171K]  ranked_3.pdb
├── [user   170K]  ranked_4.pdb
├── [user    330]  ranking_debug.json
├── [user   170K]  relaxed_model_1.pdb
├── [user   170K]  relaxed_model_2.pdb
├── [user   170K]  relaxed_model_3.pdb
├── [user   170K]  relaxed_model_4.pdb
├── [user   171K]  relaxed_model_5.pdb
├── [user    11M]  result_model_1.pkl
├── [user    11M]  result_model_2.pkl
├── [user    11M]  result_model_3.pkl
├── [user    11M]  result_model_4.pkl
├── [user    11M]  result_model_5.pkl
├── [user    771]  timings.json
├── [user    87K]  unrelaxed_model_1.pdb
├── [user    87K]  unrelaxed_model_2.pdb
├── [user    87K]  unrelaxed_model_3.pdb
├── [user    87K]  unrelaxed_model_4.pdb
└── [user    87K]  unrelaxed_model_5.pdb

[user@cn3144]$ exit

The processes prior to model inference on the GPU consumed up to 40 GB of memory for this protein. Memory requirements will vary with protein size.
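The ranking_debug.json file in the output directory records how the five models were ranked; ranked_0.pdb corresponds to the first entry of the ranking. A minimal sketch of inspecting it, assuming the monomer layout with "plddts" and "order" keys (multimer runs rank by a different score key; the scores below are invented for illustration):

```python
import json

# ranking_debug.json maps each model to its confidence score and lists
# the models best-first; ranked_0.pdb corresponds to order[0].
# Assumed monomer layout; all numbers below are made up.
example = json.loads("""
{
  "plddts": {"model_1": 90.2, "model_2": 93.5, "model_3": 88.7,
             "model_4": 91.0, "model_5": 89.4},
  "order": ["model_2", "model_4", "model_1", "model_5", "model_3"]
}
""")

best = example["order"][0]             # this model was copied to ranked_0.pdb
print(best, example["plddts"][best])   # model_2 93.5
```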

alignment for T1049 predicted and experimental structures
Figure 1. The highest confidence prediction (ranked_0.pdb, blue) aligned with the experimental structure of this protein (6Y4F, green).
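The per-residue pLDDT values that AlphaFold writes into the PDB B-factor column can be read straight from the ranked_*.pdb files. A minimal sketch, using fabricated ATOM records in place of real AlphaFold output:

```python
# Sketch: extract per-residue pLDDT from an AlphaFold PDB file. AlphaFold
# stores pLDDT in the B-factor column (characters 61-66 of ATOM records)
# and repeats the same value for every atom of a residue.
def plddt_per_residue(pdb_text):
    scores = {}
    for line in pdb_text.splitlines():
        if line.startswith("ATOM"):
            res_id = (line[21], int(line[22:26]))  # (chain, residue number)
            scores[res_id] = float(line[60:66])    # B-factor column
    return scores

# Two fabricated ATOM records standing in for a real model file:
example = (
    "ATOM      1  N   MET A   1      11.104   6.134  -6.504  1.00 91.20           N\n"
    "ATOM      2  CA  MET A   1      11.639   6.071  -5.147  1.00 91.20           C\n"
)
print(plddt_per_residue(example))   # {('A', 1): 91.2}
```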

The next example shows how to run a multimer model (available since version 2.1.1). For improved efficiency we pre-generate the multiple sequence alignments on a CPU node using the msa script (available since version 2.1.2 on biowulf) and then run model prediction only on a GPU node. Note that the separation of MSA generation and model prediction works for both monomers and multimers. The example used is a recently published PI3K structure.
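As the --fasta_paths help above notes, a FASTA file with multiple sequences is folded as a multimer, so the input is just one file with one record per chain. A sketch of assembling such a file, with placeholder names and sequences rather than the real PI3K chains:

```python
# Sketch: build a multimer input by writing one FASTA record per chain
# into a single file. Headers and sequences here are short placeholders,
# not the real PI3K chains shown in the example below.
chains = [
    ("chainA_placeholder", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"),
    ("chainB_placeholder", "MADEEKLPPGWEKRMSRSSGRVYYFNHITNASQ"),
]
with open("multimer.fa", "w") as fh:
    for name, seq in chains:
        fh.write(f">{name}\n{seq}\n")
```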

[user@biowulf]$ sinteractive --mem=60g --cpus-per-task=8 --gres=lscratch:100
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

[user@cn3144]$ module load alphafold2/2.2.0
[user@cn3144]$ cat $ALPHAFOLD2_TEST_DATA/pi3k.fa
>sp|P27986|P85A_HUMAN Phosphatidylinositol 3-kinase regulatory subunit alpha OS=Homo sapiens OX=9606 GN=PIK3R1 PE=1 SV=2
>sp|P42336|PK3CA_HUMAN Phosphatidylinositol 4,5-bisphosphate 3-kinase catalytic subunit alpha isoform OS=Homo sapiens OX=9606 GN=PIK3CA PE=1 SV=2

[user@cn3144]$ msa \
    --fasta_paths=$ALPHAFOLD2_TEST_DATA/pi3k.fa \
    --max_template_date=2021-11-01 \
    --model_preset=multimer
[user@cn3144]$ tree pi3k
|-- [user    90M]  features.pkl
`-- [user   4.0K]  msas
    |-- [user   4.0K]  A
    |   |-- [user   5.1M]  bfd_uniclust_hits.a3m
    |   |-- [user   237K]  mgnify_hits.sto
    |   |-- [user    12M]  pdb_hits.sto
    |   |-- [user   276M]  uniprot_hits.sto
    |   `-- [user    42M]  uniref90_hits.sto
    |-- [user   4.0K]  B
    |   |-- [user   4.9M]  bfd_uniclust_hits.a3m
    |   |-- [user   2.0M]  mgnify_hits.sto
    |   |-- [user   4.0M]  pdb_hits.sto
    |   |-- [user   435M]  uniprot_hits.sto
    |   `-- [user    79M]  uniref90_hits.sto
    `-- [user   2.1K]  chain_id_map.json
[user@cn3144]$ exit

Next, run model prediction on a GPU using the MSAs computed in the previous step. We run only 2 predictions per model, each with a different seed, to reduce runtime relative to the default of 5 (new in 2.2.0). Runtime scales approximately linearly with the number of predictions per model.

[user@biowulf]$ sinteractive --mem=40g --cpus-per-task=6 --gres=lscratch:30,gpu:v100x:1 --time=12:00:00
[user@cn3144]$ module load alphafold2/2.2.0
[user@cn3144]$ run_singularity \
    --use_precomputed_msas \
    --fasta_paths=$ALPHAFOLD2_TEST_DATA/pi3k.fa \
    --max_template_date=2021-11-01 \
    --model_preset=multimer \
    --num_multimer_predictions_per_model=2
[user@cn3144]$ exit
PI3K heterodimer model animation
Figure 2. Best AlphaFold model for Phosphoinositide 3-kinase alpha (PI3Kα) obtained in the example above. The two subunits are shown in blue (catalytic subunit, p110) and green (regulatory subunit, p85), respectively, and are shaded by pLDDT from light (low) to dark (high). Comparison with the Cryo-EM structure (7MYN) showed close agreement and some high confidence predictions for areas that were not resolved in the published structure.

To get an idea of the runtime of alphafold2, we first ran 4 individual proteins on all our available GPU types. The proteins ranged in size from 144 aa to 622 aa. Note that for all but the smallest protein, K80 GPUs were not suitable and should not be considered for alphafold2. These tests were run with default settings except for a fixed --max_template_date=2021-07-31.

alphafold2 benchmark plots
Figure 3. Alphafold runtimes for 4 proteins with 8 CPUs and 60 GB of memory. T1049: CASP14 target, 144 aa. D10R: Vaccinia virus WR protein D10R, 248 aa. A11R: Vaccinia virus WR protein A11R, 318 aa. T1036s1: CASP14 target, 622 aa. Note that (1) K80 GPUs are not suitable, (2) 2 GPUs for single proteins don't reduce runtime, (3) P100s are about as good as the more modern GPUs for these examples, (4) runtimes were quite variable, and (5) the degree to which the 8 CPUs were overloaded depended on protein size, with overloading most severe during the relaxation phase.

The runtime to process all 4 proteins in a single job on a V100x GPU with 8 CPUs and 60 GB of memory was 3.2 h, slightly less than the sum of the individual runtimes of the 4 proteins run separately. For this job we also tried increasing the number of CPUs to 16 or the number of GPUs to 2, neither of which appeared to shorten the runtime.

The resource usage profile of the combined alphafold2 pipeline in our testing thus far is suboptimal and variable. Ideally, the steps would be segregated into individual jobs with properly matched resources. We hope to optimize this in the future.

Batch job

Note: when running multiple alphafold predictions, please use the msa script (available for alphafold >=2.1.2) to precompute the multiple sequence alignments on CPU nodes, and use GPUs only for model prediction. This is shown in the example below as 2 batch jobs.

module load alphafold2/2.1.2
msa \
    --model_preset=monomer \
    --fasta_paths=$ALPHAFOLD2_TEST_DATA/T1049.fasta \
    --max_template_date=2020-05-14

Save the lines above as a batch script (called msa.sh here for illustration) and submit it with the Slurm sbatch command, capturing the job id for the dependency below:

jobid=$(sbatch --cpus-per-task=8 --mem=60g --gres=lscratch:30 msa.sh)
module load alphafold2/2.1.2
run_singularity \
    --use_precomputed_msas \
    --model_preset=monomer \
    --fasta_paths=$ALPHAFOLD2_TEST_DATA/T1049.fasta \
    --max_template_date=2020-05-14

Save these lines as a second batch script (called predict.sh here for illustration) and submit it with sbatch, using a dependency so it starts only after the MSA job has finished:

sbatch --cpus-per-task=6 --partition=gpu --mem=40g --gres=gpu:v100x:1,lscratch:30 --dependency=afterany:$jobid predict.sh
ColabFold alphafold2_advanced notebook

This is an adaptation of the Alphafold2_advanced ColabFold notebook for biowulf. Currently only the notebook is available; a batch wrapper is in development.


The jupyter function will use an existing tunnel if one has been set up with the --tunnel option to sinteractive. If there is no pre-existing tunnel, it will attempt to set one up itself. That means it is possible to start the jupyter server in a batch job and obtain the command to set up the tunnel from your computer to the login node from the batch output. See our tunneling documentation for more information.

Example use

[user@biowulf ~]$ sinteractive --gres=lscratch:20,gpu:v100x:1 -c16 --mem=60g --tunnel
salloc.exe: Pending job allocation 25316671
salloc.exe: job 25316671 queued and waiting for resources
salloc.exe: job 25316671 has been allocated resources
salloc.exe: Granted job allocation 25316671
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3299 are ready for job
srun: error: x11: no local DISPLAY defined, skipping
error: unable to open file /tmp/slurm-spank-x11.25316671.0
slurmstepd: error: x11: unable to read DISPLAY value

Created 1 generic SSH tunnel(s) from this compute node to
biowulf for your use at port numbers defined
in the $PORTn ($PORT1, ...) environment variables.

Please create a SSH tunnel from your workstation to these ports on biowulf.
On Linux/MacOS, open a terminal and run:

    ssh  -L 46243:localhost:46243

    For Windows instructions, see

[user@cn3144]$ module load alphafold2/2.0.1
[user@cn3144]$ af_colabfold_advanced jupyter name_of_notebook_copy.ipynb

The af_colabfold_advanced script creates a copy of the notebook in your current directory that you can work with and starts up the jupyter server. Once the provided ssh command has been used to establish a tunnel to the login node, exactly as when using regular jupyter notebooks, the notebook can be visited at the address printed by jupyter during startup.