R/Bioconductor on Biowulf

R is a language and environment for statistical computing and graphics. It can be considered an open-source descendant of the S language, which was developed by Chambers and colleagues at Bell Laboratories in the 1970s.

R is highly extensible and provides a wide variety of modern statistical analysis methods combined with excellent graphical visualization capabilities, all embedded in a programming language that supports procedural, functional, and object-oriented programming styles. R natively provides operators for calculations on arrays and matrices.

While individual single-threaded R code is not expected to run any faster in an interactive session on a compute node than it would on a modern desktop, Biowulf allows users to run many R jobs concurrently or to parallelize R code to a much greater degree than is possible on a single workstation.

On Biowulf, R modules are available for minor releases (e.g. 4.2), each of which contains the newest patch-level release (e.g. 4.2.3).

Changelog
Jan 2024: R/4.3.2 becomes the default R installation.
Jun 23, 2023: default location for $R_LIBS_USER changed to /data/$USER/R/rhel8/%v with the migration to RHEL8.
May 2023: R/4.3.0 becomes the default R installation. See R NEWS for full details of user-visible changes.
Nov 2022: R/4.2.2 becomes the default R installation.
Jul 2022: R/4.2.0 becomes the default R installation.
Jul 2021: R/4.1.0 becomes the default R installation.
Apr 2021: R/4.0.5 becomes the default R installation. OpenMPI is now version 4.
Nov 2020: R/4.0.3 becomes the default R installation.
Jun 2020: R/4.0.0 becomes the default R installation. For details see R NEWS. As usual, many packages are pre-installed and private packages need to be re-installed.
Apr 2020: R/3.6.3 becomes the default R installation.
Dec 2019: R/3.6.1 becomes the default R installation; R is now compiled with gcc 9.2.0.
Jun 2019: R/3.6.0 becomes the default R installation.
Feb 2019: R/3.5.2 becomes the default R installation.
Jun 2018: Cluster update from RHEL6 to RHEL7.
Common pitfalls
Implicit multithreading
R can make use of implicit multithreading via two different mechanisms. One of them is regulated by the OMP_NUM_THREADS and MKL_NUM_THREADS environment variables, which are set to 1 by the R wrappers: leaving them unset can lead to R using as many threads as there are CPUs on a compute node, thus overloading the job. If you know your code can make effective use of those threads, you can explicitly set OMP_NUM_THREADS to a value greater than 1 after loading the module. However, only a subset of code will be able to take advantage of this - don't expect an automatic speed increase.
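For example, after loading the R module inside a job, the thread count can be matched to the Slurm allocation. This is only a sketch: SLURM_CPUS_PER_TASK is the standard variable Slurm sets within a job, and the fallback keeps the safe default of 1 elsewhere.

```shell
# match OpenMP/MKL threads to the number of allocated CPUs;
# outside of a Slurm job this falls back to the safe default of 1
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
export MKL_NUM_THREADS=$OMP_NUM_THREADS
```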
parallel::detectCores() always detects all CPUs on a node
R scripts using one of the parallel packages (parallel, doParallel, ...) often overload their job allocation because they use the detectCores() function from the parallel package to determine how many worker processes to start. However, this function returns the number of physical CPUs on a compute node irrespective of how many have been allocated to the job. Therefore, if not all CPUs of a node are allocated to a job, the job will be overloaded and perform poorly. See the section on the parallel package for more detail.
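Instead of detectCores(), the number of CPUs actually allocated can be read from the environment Slurm sets inside each job. A minimal sketch (the fallback value of 2 is an arbitrary assumption for use outside a job):

```r
# use the Slurm allocation, not the physical CPU count of the node
ncpus <- as.integer(Sys.getenv("SLURM_CPUS_PER_TASK", unset = "2"))
cl <- parallel::makeCluster(ncpus)  # starts only as many workers as allocated
parallel::stopCluster(cl)
```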
BiocParallel by default tries to use most CPUs on a node
BiocParallel is not aware of Slurm and by default tries to use most of the CPUs on a node irrespective of the Slurm allocation. This can lead to overloaded jobs. See the section on the BiocParallel package for more information on how to avoid this.
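The number of workers can be registered explicitly with BiocParallel so that packages built on it respect the allocation. A minimal sketch, again reading the Slurm allocation with an assumed fallback of 2:

```r
library(BiocParallel)
ncpus <- as.integer(Sys.getenv("SLURM_CPUS_PER_TASK", unset = "2"))
register(MulticoreParam(workers = ncpus))  # becomes the default backend
```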
Poor scaling of parallel code
Don't assume that you should allocate as many CPUs as possible to a parallel workload. Parallel efficiency often drops as the CPU count grows, and in some cases allocating more CPUs may actually extend runtime. If you use or implement parallel algorithms, please measure scaling before submitting large numbers of such jobs.
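A simple way to gauge scaling is to time the same workload with different worker counts before scaling up. This is only a sketch with a toy task; real code should be timed on representative data:

```r
f <- function(i) sum(rnorm(1e6))  # toy CPU-bound task
system.time(parallel::mclapply(1:16, f, mc.cores = 2))
system.time(parallel::mclapply(1:16, f, mc.cores = 4))
# if the elapsed time does not drop to close to half, scaling is poor
```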
Can't install packages in my private library
R will attempt to install packages to /data/$USER/R/rhel8/%v (on RHEL8), where %v is the major.minor version of R (e.g. 4.1), or to the path set in $R_LIBS_USER. However, R won't always automatically create that directory, and in its absence will try to install to the central package library, which will fail. If you encounter installation failures, please make sure the library directory for your version of R exists.
AnnotationHub or ExperimentHub error No internet connection
The AnnotationHub and ExperimentHub packages and packages depending on them need to connect to the internet via a proxy. When using AnnotationHub or ExperimentHub directly, a proxy can be specified explicitly in the call to set up the Hub. However, if they are used indirectly that is not possible. Instead, define the proxy either using environment variables EXPERIMENT_HUB_PROXY/ANNOTATION_HUB_PROXY or by setting options in R with setAnnotationHubOption("PROXY", Sys.getenv("http_proxy")) or the corresponding setExperimentHubOption function.
Updating broken packages installed in home directory
When R and/or the centrally installed R packages are updated, packages installed in your private library may break or prevent other packages from loading. The most common error involves a locally installed rlang package. Look for your private library path in the error messages. All locally installed packages can be updated with
> my.lib <- .libPaths()[1]  # check first that .libPaths()[1] is indeed the path to your library
> my.pkgs <- list.files(my.lib)
> library(pacman)
> p_install(my.pkgs, character.only = TRUE, lib = my.lib)
Alternatively, this particular error can often be fixed by deleting the locally installed rlang with
$ rm -rf ~/R/4.2/library/rlang  # replace 4.2 with the R major.minor version you are using
        
Re-install packages 1) from the home directory to the data directory and/or 2) for a different R version
Since R_LIBS_USER was relocated to the data directory, R packages installed in your old private library (at ~/R/) need to be reinstalled. The same procedure applies when updating to a newer R version and reinstalling all the packages from an older version. First, create the private library directory for the new version of R (e.g. R/4.3):
$ mkdir -p /data/$USER/R/rhel8/4.3/
        
For example, for packages installed under R/4.2, you can re-install them by creating a list of the installed packages, finding the ones not yet installed under the data directory (or under the R version you are now using), and then re-installing them (please replace apptest with your username):
> packages <- installed.packages(lib.loc="/home/apptest/R/4.2/library")[,"Package"]
> toInstall <- setdiff(packages, installed.packages(lib.loc="/data/apptest/R/rhel8/4.3/")[,"Package"])
> BiocManager::install(toInstall)

R will automatically use lscratch for temporary files if it has been allocated. Therefore, we highly recommend that users always allocate at least 1 GB of lscratch, plus whatever lscratch storage their code requires.

Interactive R

Allocate an interactive session for interactive R work. Note that R sessions are not allowed on the login node or on Helix.

[user@biowulf]$ sinteractive --gres=lscratch:5
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job

There may be multiple versions of R available. An easy way to select a version is to use modules. To see the available R modules, type

[user@cn3144 ~]$ module -r avail '^R$'

--------------- /usr/local/lmod/modulefiles ------------------
R/3.4    R/3.4.3    R/3.4.4    R/3.5 (D)    R/3.5.0    R/3.5.2

Set up your environment and start up an R session

[user@cn3144 ~]$ module load R/3.5
[user@cn3144 ~]$ R
R version 3.5.0 (2018-04-23) -- "Joy in Playing"
Copyright (C) 2018 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> library(tidyverse)
── Attaching packages ─────────────────────────────────────── tidyverse 1.2.1 ──
✔ ggplot2 2.2.1     ✔ purrr   0.2.4
✔ tibble  1.4.2     ✔ dplyr   0.7.4
✔ tidyr   0.8.0     ✔ stringr 1.3.0
✔ readr   1.1.1     ✔ forcats 0.3.0
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
> [...lots of work...]
> q()
Save workspace image? [y/n/c]: n

A rudimentary graphical interface is available if the sinteractive session was started from a session with X11 forwarding enabled:

[user@cn3144 ~]$ R --gui=Tk

However, RStudio is a much better interface with many advanced features.

Don't forget to exit the interactive session

[user@cn3144 ~]$ exit
salloc.exe: Relinquishing job allocation 46116226
[user@biowulf ~]$
Installed packages

Packages installed in the current default R environment

PackageVersion
spatial7.3-17
viridisLite0.4.2
urltools1.7.3
trapezoid2.0-2
plotly4.10.4
oligoClasses1.64.0
numDeriv2016.8-1.1
methylumi2.48.0
mcmc0.9-8
matrixcalc1.0-6
latticeExtra0.6-30
kdecopula0.9.2
ipred0.9-14
GWASExactHW1.2
groHMM1.36.0
GOSemSim2.28.1
geosphere1.5-18
DESeq21.42.1
DEoptimR1.1-3
cn.mops1.48.0
brio1.1.4
AnnotationForge1.44.0
TxDb.Celegans.UCSC.ce6.ensGene3.2.2
strucchange1.5-3
stabledist0.7-1
shadowtext0.1.3
rmarkdown2.26
RcppTOML0.2.2
ProtGenerics1.34.0
PMCMR4.4
MultiAssayExperiment1.28.0
msm1.7.1
MLEcens0.1-7
magic1.6-1
JavaGD0.6-5
ENmix1.38.01
distillery1.2-1
cpp110.4.7
conflicted1.2.0
caTools1.18.2
bsplus0.1.4
blob1.2.4
ALL1.44.0
ADGofTest0.3
statmod1.5.0
rcmdcheck1.4.0
ks1.14.2
keras2.15.0
httr1.4.7
Heatplus3.10.0
europepmc0.4.3
data.table1.15.4
Cubist0.4.2.1
CNEr1.38.0
clusterRepro0.9
bibtex0.5.1
xfun0.43
unisensR0.3.3
spam2.10-0
slingshot2.10.0
NOISeq2.46.0
Lmoments1.3-1
interval1.1-1.0
gfonts0.2.0
DBI1.2.2
ChIPpeakAnno3.36.1
bindata0.9-20
beadarray2.52.0
RRPP2.0.0
Rfast2.1.0
PhysicalActivity0.2-4
pheatmap1.0.12
MplusAutomation1.1.1
lifecycle1.0.4
lars1.3
Haplin7.3.1
DaMiRseq2.14.0
commonmark1.9.1
ade41.7-22
parallel4.3.2
tfdatasets2.9.0
remotes2.5.0
plyr1.8.9
org.Hs.eg.db3.18.0
mize0.2.4
kpeaks1.1.0
flux0.3-0.1
DAAG1.25.4
bindr0.1.1
Bhat0.9-12
vipor0.4.7
tximeta1.20.3
tkWidgets1.80.0
sciplot1.2-0
refund0.1-35
reactome.db1.86.2
MuMIn1.47.5
libcoin1.0-10
cn.farms1.50.0
cli3.6.2
BiocSingular1.18.0
bayestestR0.13.2
slider0.3.1
shinydashboard0.7.2
sampling2.10
RMTstat0.3.1
raster3.6-26
pls2.8-3
miniUI0.1.1.1
gsubfn0.7
grr0.9.5
GOstats2.68.0
GENIE31.24.0
gdata3.0.0
future.apply1.11.2
edgeR4.0.16
dfoptim2023.1.0
ChIPseeker1.38.0
ccaPP0.3.3
class7.3-22
tensorA0.36.2.1
spdep1.3-3
RNifti1.6.1
pROC1.18.5
PBSmodelling2.69.3
partykit1.2-20
mstate0.3.2
knn.covertree1.0
GPArotation2024.3-1
ggformula0.12.0
extraDistr1.10.0
DMRcatedata2.20.3
cmprsk2.2-11
batchelor1.18.1
batch1.1-5
additivityTests1.1-4.1
svd0.5.5
snow0.4-4
Publish2023.01.17
plotmo3.6.3
plgem1.74.0
parsedate1.3.1
parathyroidSE1.40.0
manhattanly0.3.0
DRIMSeq1.30.0
diffusionMap1.2.0
bit4.0.5
zlibbioc1.48.2
rvinecopulib0.6.3.1.1
qvcalc1.0.3
pingr2.0.3
pbdMPI0.5-1
oro.nifti0.11.4
mixsqp0.3-54
MAGeCKFlute2.6.0
e10711.7-14
ashr2.2-63
arsenal3.6.3
TeachingDemos2.13
R.matlab3.7.0
rle0.9.2
R2WinBUGS2.1-22.1
markdown1.12
ggm2.5.1
fansi1.0.6
clue0.3-65
timeSeries4032.109
sva3.50.0
SummarizedExperiment1.32.0
pec2023.04.12
MatchIt4.5.5
ICsurv1.0.1
graph1.80.0
ggthemes5.1.0
ggdendro0.2.0
ggalluvial0.12.5
genetics1.3.8.1.3
fontBitstreamVera0.1.1
DNAcopy1.76.0
discretization1.0-1.1
colorRamps2.3.4
ATACseqQC1.26.0
tiff0.1-12
survJamda.data1.0.2
Rmpfr0.9-5
rlecuyer0.3-8
rafalib1.0.0
pd.hugene.2.0.st3.14.1
mygene1.38.0
msa1.34.0
geonames0.999
FSelector0.34
fracdiff1.5-3
ELMER.data2.26.0
dnet1.1.7
CODEX1.34.0
basilisk.utils1.14.1
KernSmooth2.23-22
wateRmelon2.8.0
Surrogate3.2.5
ROTS1.30.0
questionr0.7.8
pkgload1.3.4
pkgbuild1.4.4
optigrab0.9.2.1
logcondens2.1.8
JADE2.0-4
inaparc1.2.0
ids1.0.1
gsl2.1-8
ggtree3.10.1
fdrtool1.2.17
blockmodeling1.1.5
utils4.3.2
graphics4.3.2
zCompositions1.5.0-3
vioplot0.4.0
rbenchmark1.0.0
moments0.14.1
locfit1.5-9.9
inline0.3.19
idr1.3
hgu133plus2probe2.18.0
gld2.6.6
GGIRread1.0.0
enrichR3.2
dropbead0.3.1
CCA1.2.2
MASS7.3-60
grDevices4.3.2
xopen1.0.0
tidyselect1.2.1
seqLogo1.68.0
rvest1.0.4
robustbase0.99-2
rjson0.2.21
qs0.26.1
praise1.0.0
KMsurv0.1-5
INPower1.38.0
HilbertVis1.60.0
hexView0.3-4
geometry0.4.7
EBSeqHMM1.35.0
DMRcate2.16.1
corpcor1.6.10
spData2.3.0
rotl3.1.0
rex1.2.1
RCircos1.2.2
qpdf1.3.3
psych2.4.3
progress1.2.3
PBSmapping2.73.4
mr.raps0.2
miscTools0.6-28
MassSpecWavelet1.68.0
LSD4.1-0
gss2.2-7
batchtools0.9.17
airway1.22.0
survival3.5-7
TH.data1.1-2
SPAtest3.1.2
randtoolbox2.0.4
ncbit2013.03.29.1
hexbin1.28.3
dagitty0.3-4
bsseq1.38.0
bslib0.7.0
apeglm1.24.0
spatstat.random3.2-3
ReactomePA1.46.0
pctGCdata0.3.0
outliers0.15
mlmRev1.0-8
inum1.0-5
illuminaio0.44.0
hopach2.62.0
exomeCopy1.48.0
densityClust0.3.3
bit644.0.5
annaffy1.74.0
tidygraph1.3.1
rpf1.0.14
rGADEM2.50.0
plier1.72.0
httr21.0.1
git2r0.33.0
gargle1.5.2
gage2.52.0
EnhancedVolcano1.20.0
dqrng0.3.2
ada2.0-5
threejs0.3.3
speedglm0.3-5
spacetime1.3-1
Ringo1.66.0
ncdf41.22
MEGENA1.3.7
ensemblVEP1.44.0
DropletUtils1.22.0
aroma.core3.3.1
snpStats1.52.0
semTools0.5-6
rticles0.27
riskRegression2023.12.21
ResidualMatrix1.12.0
R.cache0.16.0
pathview1.42.0
optmatch0.10.7
motifmatchr1.24.0
maSigPro1.74.0
ipw1.2.1
DDRTree0.1.5
cqn1.48.0
contrast0.24.2
AUCell1.24.0
aggregation1.0.1
VGAM1.1-10
vcd1.4-12
styler1.10.3
R.rsp0.46.0
Rook1.2
pwr1.3-0
prediction0.3.17
MotifDb1.44.0
MAST1.28.0
mapproj1.2.11
lsmeans2.30-0
Kendall2.2.1
HDInterval0.2.4
googlesheets41.1.1
golubEsets1.44.0
glmnet4.1-8
futile.logger1.4.3
boot1.3-28.1
webdriver1.0.6
tidybayes3.0.6
RCurl1.98-1.14
randtests1.0.2
mipfp3.2.1
HTqPCR1.56.0
haven2.5.4
ComplexHeatmap2.18.0
tinytex0.50
spatstat.utils3.0-4
plm2.6-4
pd.hg.u133.plus.23.12.0
pbkrtest0.5.2
PADOG1.44.0
iterators1.0.14
ica1.0-3
hugene20sttranscriptcluster.db8.8.0
gson0.1.0
ggfortify0.4.17
GetoptLong1.0.5
dichromat2.0-0.1
warp0.2.1
rJava1.0-11
pbdZMQ0.3-11
pasilla1.30.0
openssl2.1.2
HTMLUtils0.1.9
Gviz1.46.1
ggbeeswarm0.7.2
deconstructSigs1.8.0
csaw1.36.1
config0.3.2
colourpicker1.3.0
brglm0.7.2
ballgown2.34.0
datasets4.3.2
remaCor0.0.18
pdftools3.4.0
isva1.9
isotone1.1-1
filelock1.0.3
ChIPsim1.56.0
bio3d2.4-4
base64enc0.1-3
awsMethods1.1-1
akima0.6-3.4
signal1.8-0
RWekajars3.9.3-2
rngtools1.5.2
RMySQL0.10.27
mvQuad1.0-8
mouse4302.db3.13.0
marray1.80.0
lobstr1.1.2
fontawesome0.5.2
deldir2.0-4
ape5.8
whisker0.4.1
variancePartition1.32.5
tweenr2.0.3
SeqVarTools1.40.0
ROSE0.0-4
random0.2.6
profvis0.3.8
prettyunits1.2.0
PMCMRplus1.9.10
openxlsx4.2.5.2
MatrixEQTL2.3
hunspell3.0.3
hthgu133acdf2.18.0
gridtext0.1.5
ggraph2.2.1
gamlss.data6.0-6
flashClust1.01-2
downloader0.4
deepSNV1.48.0
TitanCNA1.40.0
sqldf0.4-11
seriation1.5.5
rstan2.32.6
loo2.7.0
ggdist3.3.2
dr3.0.10
DiceKriging1.6.0
testthat3.2.1.1
safe3.42.0
rslurm0.6.2
readstata130.10.1
pryr0.1.6
phyloseq1.46.0
optimx2023-10.21
modelr0.1.11
mathjaxr1.6-0
hu6800probe2.18.0
diffHic1.34.0
cosinor20.2.1
coin1.4-3
binom1.1-1.1
tzdb0.4.0
RUnit0.4.33
rrcov1.7-5
RPMM1.25
rematch22.1.2
R62.5.1
qvalue2.34.0
multidplyr0.1.3
laeken0.5.3
invgamma1.1
ijtiff2.3.4
hdrcde3.4
expint0.1-8
dir.expiry1.10.0
DiffBind3.12.0
DEXSeq1.48.0
DescTools0.99.54
cowplot1.1.3
base64url1.4
ars0.7
stats4.3.2
wheatmap0.2.0
uuid1.2-0
uroot2.1-3
TMB1.9.11
rhdf52.46.1
nloptr2.0.3
globaltest5.56.0
dunn.test1.3.6
DO.db2.9
DEsingle1.22.0
ChAMP2.32.0
StanHeaders2.32.6
sjlabelled1.2.0
rio1.0.1
preseqR4.0.0
performance0.11.0
paxtoolsr1.36.0
NLP0.2-1
network1.18.2
minpack.lm1.2-4
metadat1.2-0
impute1.76.0
IlluminaHumanMethylation27k.db1.4.8
ggdag0.2.12
fdapace0.5.9
doParallel1.0.17
DelayedMatrixStats1.24.0
bootstrap2019.6
affyio1.72.0
TCGAbiolinksGUI.data1.22.0
sys3.4.2
siggenes1.76.0
sets1.0-25
secretbase0.4.0
rredlist0.7.1
ps1.7.6
pedigreemm0.3-4
nonnest20.5-6
Matrix1.6-5
logging0.10-108
lavaan0.6-17
ggplot.multistats1.0.0
ggplot23.5.1
GenomicScores2.14.3
fda6.1.8
xtable1.8-4
VariantAnnotation1.48.1
TFBSTools1.40.0
spatstat.linnet3.1-5
rstanarm2.32.1
Rsamtools2.18.0
phyclust0.1-34
MBESS4.9.3
IlluminaHumanMethylation450kanno.ilmn12.hg190.6.1
GenomicRanges1.54.1
BB2019.10-1
widgetTools1.80.0
tab5.1.1
reactlog1.1.1
pvclust2.2-0
processx3.8.4
nleqslv3.3.5
Matching4.10-14
maftools2.18.0
linprog0.9-4
jomo2.7-6
iotools0.3-5
htmlTable2.4.2
HSMMSingleCell1.22.0
gtable0.3.5
gee4.13-26
gap.datasets0.0.6
gamlss.dist6.1-1
fpc2.2-11
falconx0.2
BSgenome.Dmelanogaster.UCSC.dm21.4.0
BANOVA1.2.1
vctrs0.6.5
truncdist1.0-2
seqinr4.2-36
PolynomF2.0-8
org.Rn.eg.db3.18.0
lmm1.4
ggnewscale0.4.10
DEoptim2.2-8
CircStats0.2-6
taxize0.9.100
survRM21.0-4
OrganismDbi1.44.0
OpenImageR1.3.0
opencv0.4.0
nucleR2.34.0
mouse4302frmavecs1.5.0
minfi1.48.0
loomR0.2.0
Iso0.0-21
hgu133a2cdf2.18.0
goseq1.54.0
futile.options1.0.1
wikitaxa0.4.0
shinyBS0.61.1
segmented2.0-4
scater1.30.1
rstpm21.6.3
reshape21.4.4
rentrez1.2.3
PKI0.1-12
pcaPP2.0-4
paran1.5.3
NuPoP2.10.0
magrittr2.0.3
kSamples1.2-10
KEGGdzPathwaysGEO1.40.0
infer1.0.7
gmp0.7-4
ggsignif0.6.4
ggjoy0.4.1
gbm2.1.9
FME1.3.6.3
DRR0.0.4
Category2.68.0
ash1.0-15
tools4.3.2
RTCGA1.32.0
rmeta3.0
RLRsim3.1-8
repr1.1.7
gamlss5.4-22
curry0.1.1
cubature2.1.0
biocViews1.70.0
arrayQualityMetrics3.58.0
scDD1.26.0
RWeka0.4-46
R.oo1.26.0
RhpcBLASctl0.23-42
pbmcapply1.5.1
IlluminaHumanMethylationEPICmanifest0.3.0
ggrastr1.0.2
CompQuadForm1.4.3
biomformat1.30.0
vegan2.6-4
tximportData1.30.0
scrime1.3.5
scde2.30.0
rjags4-15
quantsmooth1.68.0
mvnfast0.2.8
muhaz1.2.6.4
janitor2.2.0
itertools0.1-3
IlluminaHumanMethylationEPICanno.ilm10b2.hg190.6.0
ICS1.4-1
googleAuthR2.0.1
DiceDesign1.10
corrplot0.92
circlize0.4.16
BiocVersion3.18.1
visNetwork2.1.2
tilingArray1.80.0
swagger3.33.1
spatstat3.0-8
scatterpie0.2.2
RItools0.3-3
party1.3-14
monocle31.3.4
MendelianRandomization0.10.0
LogicReg1.6.6
gageData2.40.0
earth5.3.3
Canopy1.3.0
urca1.3-3
txtplot1.0-4
SparseM1.81
rnoaa1.4.0
NBPSeq0.3.1
msir1.3.3
mosaicCore0.9.4.0
iClusterPlus1.38.0
GenomeInfoDb1.38.5
DOT0.1
trust0.1-8
tictoc1.2.1
spatstat.sparse3.0-3
servr0.30
Rhdf5lib1.24.2
polspline1.1.24
pkgdown2.0.9
pals1.8
later1.3.2
kpmt0.1.0
GEOmetadb1.64.0
gaussquad1.0-3
ensembldb2.26.0
copula1.1-3
canine2.db3.13.0
BH1.84.0-0
rgl1.3.1
RANN2.6.1
PROcess1.78.0
ggstats0.6.0
geneplotter1.80.0
fishplot0.5.2
egg0.4.5
ActCR0.3.0
urlchecker1.0.1
synchronicity1.3.10
RcppParallel5.1.7
qrng0.0-10
proxy0.4-27
pd.mouse430.23.12.0
parallelly1.37.1
metafor4.6-0
leidenbase0.1.27
KEGGREST1.42.0
hapsim0.31
GenomicDataCommons1.26.0
cvAUC1.1.4
basilisk1.14.3
aod1.3.3
zeallot0.1.0
rsconnect1.2.2
R.devices2.17.2
progressr0.14.0
patchwork1.2.0
org.Cf.eg.db3.18.0
inflection1.3.6
IlluminaHumanMethylation27kmanifest0.4.0
hgu133plus2.db3.13.0
fds1.8
dplyr1.1.4
dials1.2.1
cometExactTest0.1.5
combinat0.0-8
cmm1.0
methods4.3.2
useful1.2.6.1
trimcluster0.1-5
survey4.4-2
SQUAREM2021.1
reticulate1.36.1
phia0.3-1
pfamAnalyzeR1.2.0
ParamHelpers1.14.1
gaston1.6
clock0.7.0
affxparser1.74.0
weights1.0.4
tseries0.10-55
TFisher0.2.0
tensorflow2.16.0
tensor1.5
Repitools1.48.0
R2HTML2.3.3
JM1.5-2
hgu95av2probe2.18.0
fastICA1.2-4
compute.es0.2-5
bigmemory4.6.4
sourcetools0.1.7-1
PFAM.db3.18.0
PAIRADISE1.18.0
modelenv0.1.1
lsa0.73.3
iCNV1.22.0
future.batchtools0.12.1
EBImage4.44.0
ddalpha1.3.15
ctc1.76.0
clusterGeneration1.3.8
broom.mixed0.2.9.5
AnnotationHub3.10.1
svUnit1.0.6
stringfish0.16.0
shinytest1.5.4
scattermore1.2
mixOmics6.26.0
misc3d0.9-1
metap1.10
IlluminaHumanMethylationEPICanno.ilm10b4.hg190.6.0
hwriter1.3.2.1
gmm1.8
Glimma2.12.0
ddCt1.58.0
bc3net1.0.4
WikipediR1.7.1
RSpectra0.16-1
RColorBrewer1.1-3
perry0.3.1
networkLite1.0.5
minqa1.2.6
maxLik1.5-2.1
htmlwidgets1.6.4
hgu133a2.db3.13.0
HDF5Array1.30.1
ggridges0.5.6
gcrma2.74.0
epiDisplay3.5.0.2
bumphunter1.44.0
compiler4.3.2
TxDb.Hsapiens.UCSC.hg38.knownGene3.18.0
timeDate4032.109
sysfonts0.8.9
statnet2019.6
stargazer5.2.3
S4Arrays1.2.1
GOSim1.40.0
getPass0.2-4
Ecfun0.3-2
Ecdat0.4-2
dotCall641.1-1
beanplot1.3.1
foreign0.8-86
roxygen27.3.1
robCompositions2.4.1
NMOF2.8-0
limma3.58.1
hu6800cdf2.18.0
hierfstat0.5-11
hgu133a2probe2.18.0
gower1.0.1
fontLiberation0.1.0
flexclust1.4-1
ca0.71.1
WES.1KG.WUGSC1.34.0
usethis2.2.3
textshaping0.3.7
shiny1.8.1.1
R2jags0.7-1.1
quadprog1.5-8
phylobase0.8.12
microbenchmark1.4.10
MAVE1.3.11
graphlayouts1.1.1
GEOquery2.70.0
gdsfmt1.38.0
fitdistrplus1.1-11
ChAMPdata2.34.0
argparser0.7.2
adegenet2.1.10
threg1.0.3
targets1.7.0
scatterplot3d0.3-44
promises1.3.0
prettydoc0.4.1
poilog0.4.2
pkgmaker0.32.10
pd.mogene.2.0.st3.14.1
pbapply1.7-2
llogistic1.0.3
humanomni5quadv1bCrlmm1.0.0
ellipse0.5.0
doSNOW1.0.20
data.tree1.1.0
topGO2.54.0
squash1.0.9
spatstat.explore3.2-7
s21.1.6
Runuran0.38
rngWELL0.10-9
rmdformats1.0.4
posterior1.5.0
NbClust3.0.1
ICC2.4.0
fBasics4032.96
extrafontdb1.0
dfidx0.0-5
xgboost1.7.7.1
viridis0.6.5
universalmotif1.20.0
treeio1.26.0
survivalAnalysis0.3.0
stepPlr0.93
pillar1.9.0
parmigene1.1.0
packrat0.9.2
MKmisc1.9
insight0.19.10
hms1.1.3
hgu95av2cdf2.18.0
fftwtools0.9-11
etm1.1.1
cvTools0.3.3
coxme2.2-18.1
BWStest0.2.3
bkmr0.2.2
bindrcpp0.2.3
aroma.light3.32.0
XVector0.42.0
units0.8-5
samr3.0
registry0.5-1
recipes1.0.10
intansv1.42.0
ICSNP1.1-2
feather0.3.5
extrafont0.19
base642.0.1
worrms0.4.3
wk0.9.1
venneuler1.1-4
VanillaICE1.64.1
ucminf1.2.1
svglite2.1.3
SingleR2.4.1
Rsolnp1.16
reshape0.8.9
phytools2.1-1
mzID1.40.0
MPO.db0.99.7
km.ci0.5-6
isobar1.48.0
highr0.10
filehash2.4-5
densvis1.12.1
CVST0.2-3
arm1.14-4
nnet7.3-19
mgcv1.9-1
venn1.12
sendmailR1.4-0
rappdirs0.3.3
plot3Drgl1.0.4
pepr0.5.0
multcomp1.4-25
ggalt0.4.0
dtw1.23-1
cosinor1.2.3
broom1.0.5
translations4.3.2
rbibutils2.2.16
polynom1.4-1
org.Mm.eg.db3.18.0
msigdbr7.5.1
chromVAR1.24.0
BSgenome.Hsapiens.UCSC.hg191.4.3
BPSC0.99.2
sensitivity1.30.0
pbatR2.2-17
mclogit0.9.6
lmom3.0
GlobalOptions0.1.2
ggfun0.1.4
geiger2.0.11
episensr1.3.0
compositions2.0-8
colorspace2.1-0
BSgenome.Scerevisiae.UCSC.sacCer11.4.0
BSgenome.Hsapiens.UCSC.hg181.3.1000
bold1.3.0
splines4.3.2
yulab.utils0.1.4
scuttle1.12.0
rae230aprobe2.18.0
pvca1.42.0
proto1.0.0
oompaData3.1.3
jsonlite1.8.8
gert2.0.1
foreach1.5.2
forcats1.0.0
fgsea1.28.0
farver2.1.1
FactoMineR2.11
expm0.999-9
credentials2.0.1
BSgenome.Scerevisiae.UCSC.sacCer21.4.0
BiocManager1.30.22
subSeq1.32.0
snowFT1.6-1
pipeFrame1.18.0
NMF0.27
logicFS2.22.0
LGEWIS1.1
IsoformSwitchAnalyzeR2.2.0
fit.models0.64
ExomeDepth1.1.16
BiocGenerics0.48.1
codetools0.2-19
sna2.7-2
sgeostat1.0-27
R.methodsS31.8.2
multicool1.0.1
MSQC1.1.0
lmerTest3.1-3
lazyeval0.2.2
ergm.multi0.2.1
ROpenCVLite4.90.0
rBayesianOptimization1.2.1
quantreg5.97
plumber1.2.2
pbs1.1
natserv1.0.0
naivebayes1.0.0
maps3.4.2
lmtest0.9-40
interactiveDisplayBase1.40.0
flexsurv2.3
flexmix2.3-19
CODEX21.3.0
BSgenome.Mmusculus.UCSC.mm101.4.3
bayesplot1.11.1
accelerometry3.1.2
WikidataR2.3.3
TxDb.Rnorvegicus.UCSC.rn4.ensGene3.2.2
TrajectoryUtils1.10.1
strex2.0.0
spls2.2-3
shinyFiles0.9.3
sesame1.20.0
RInside0.2.18
readr2.1.5
rainbow3.8
PMA1.2-3
packcircles0.3.6
IRkernel1.3.2
IlluminaHumanMethylation27kanno.ilmn12.hg190.6.0
biclust2.0.3.1
stats44.3.2
base4.3.2
tradeSeq1.16.0
tclust2.0-3
synthpop1.8-0
S4Vectors0.40.2
rstatix0.7.2
proj41.0-13
oz1.0-22
ordinal2023.12-4
mitools2.4
meta7.0-0
googledrive2.1.1
ggpubr0.6.0
bitops1.0-7
affycomp1.78.0
WriteXLS6.5.0
waveslim1.8.4
TxDb.Hsapiens.UCSC.hg19.knownGene3.2.2
tsna0.3.5
thgenetics0.4-2
sparseMatrixStats1.14.0
readxl1.4.3
RcppRoll0.3.0
RcppProgress0.4.2
postlogic0.1.0.1
ggeffects1.5.2
emmeans1.10.1
chk0.9.1
betareg3.1-4
bbmle1.0.25.1
BatchJobs1.9
UpSetR1.4.0
timsac1.3.8-4
tergm4.2.0
subplex1.8
spatstat.model3.2-11
sparsesvd0.2-2
rversions2.1.2
pd.genomewidesnp.63.14.1
oompaBase3.2.9
iterpc0.4.2
irr0.84.1
IRdisplay1.1
gmodels2.19.1
Exact3.2
withr3.0.0
tidyr1.3.1
spatstat.data3.0-4
showimage1.0.0
RWiener1.3-3
RSQLite2.3.6
RcppAnnoy0.0.22
perm1.0-0.4
lava1.8.0
intervals0.15.4
FlowSorted.Blood.EPIC2.6.0
EBSeq2.0.0
zebrafishcdf2.18.0
vroom1.6.5
TxDb.Mmusculus.UCSC.mm9.knownGene3.2.2
robustHD0.8.0
RNOmni1.0.1.2
quantmod0.4.26
pastecs1.4.2
nplplot4.7
mutoss0.1-13
hdf5r1.3.10
glmpath0.98
extRemes2.1-4
chipseq1.52.0
biglm0.9-2.1
tsne0.1-3.1
tripack1.3-9.1
TailRank3.2.2
rsample1.2.1
qqconf1.3.2
orthogonalsplinebasis0.1.7
modeltools0.2-23
maxstat0.7-25
glmmML1.1.6
GenomicAlignments1.38.2
fail1.3
enpls6.1
brew1.0-10
vsn3.70.0
survminer0.4.9
relimp1.0-5
RcppNumerical0.6-0
pan1.9
mmap0.6-22
listenv0.9.1
gh1.4.1
DynDoc1.80.0
denstrip1.5.4
CpGassoc2.60
ConsensusClusterPlus1.66.0
bios2mds1.2.3
beachmat2.18.1
tcltk4.3.2
workflows1.1.4
wdm0.2.4
rhdf5filters1.14.1
prabclus2.3-3
orthopolynom1.0-6.1
NADA1.6-1.1
KEGGgraph1.62.0
ggstance0.3.7
gdtools0.3.7
gamm40.2-6
epiR2.0.73
dygraphs1.1.1.6
dmrseq1.22.1
tidyverse2.0.0
systemPipeR2.8.0
ruv0.9.7.1
robust0.7-4
Rfit0.24.6
mutSignatures2.1.1
minet3.60.0
hgu95av22.2.0
getopt1.20.4
geomorph4.0.7
fftw1.0-8
falcon0.2
evd2.3-7
diptest0.77-1
dimRed0.2.6
depmap1.16.0
dendextend1.17.1
conditionz0.1.0
ChromHeatMap1.56.0
chopsticks1.68.0
arrangements1.1.9
yardstick1.3.1
truncnorm1.0-9
systemfit1.1-30
RApiSerialize0.1.2
randomForestSRC3.2.3
plot3D1.4.1
optparse1.7.5
optimParallel1.0-2
mclust6.1
manipulateWidget0.11.1
logistf1.26.0
hash2.2.6.3
ggrepel0.9.5
ggbio1.50.0
EMCluster0.2-15
coxphf1.13.4
ClusterR1.3.2
brms2.21.0
bridgesampling1.1-2
ASSET2.20.0
webutils1.2.0
uwot0.2.2
SKAT2.2.5
shinystan2.6.0
scran1.30.2
mnormt2.1.1
kernlab0.9-32
HIBAG1.38.2
h2o3.44.0.3
grpreg3.4.0
gbRd0.4-11
clisymbols1.2.0
supraHex1.40.0
Seurat5.0.1
sctransform0.4.1
rtracklayer1.62.0
Rmisc1.5.1
reprex2.1.0
officer0.6.5
micEcon0.6-18
callr3.7.6
XML3.99-0.16.1
Rttf2pt11.3.12
restfulr0.0.15
ratelimitr0.4.1
nnls1.5
mlr2.19.1
esATAC1.24.0
crlmm1.60.0
cobs1.3-8
splines20.5.1
RJSONIO1.3-1.9
RcmdrPlugin.TeachingDemos1.2-0
RcmdrMisc2.9-1
Rcmdr2.9-2
oligo1.66.0
mvbutils2.8.232
irlba2.3.5.1
gtools3.9.5
Epi2.48
eha2.11.4
cubelyr1.0.2
clipr0.8.0
cghMCR1.60.0
Biostrings2.70.3
aplot0.2.2
zip2.3.1
yaml2.3.8
xlsx0.6.5
Rdpack2.6
rae230acdf2.18.0
networkD30.4
mets1.3.4
genefilter1.84.0
formula.tools1.7.1
epitools0.5-10.1
EDASeq2.36.0
dynamicTreeCut1.63-1
distributional0.4.0
datawizard0.10.0
cluster2.1.6
tidymodels1.2.0
signatureSearch1.16.0
sfsmisc1.1-17
seewave2.2.3
Rhtslib2.4.1
RcppZiggurat0.1.6
qap0.1-2
pander0.6.5
optextras2019-12.4
ini0.3.1
Illumina450ProbeVariants.db1.38.0
frma1.54.0
ExperimentHub2.10.0
EnvStats2.8.1
dtwclust5.5.12
doFuture1.0.1
SuppDists1.1-9.7
supclust1.1-1
slam0.1-50
shinythemes1.2.0
SeuratObject5.0.1
rprojroot2.0.4
PSCBS0.67.0
pixmap0.4-12
nycflights131.0.2
HardyWeinberg1.7.8
ggtext0.1.2
ecodist2.1.3
EBarrays2.66.0
devEMF4.4-2
cOde1.1.1
cachem1.0.8
xmlparsedata1.0.5
workflowsets1.1.0
TFMPvalue0.0.9
ROC1.78.0
princurve2.1.6
hoardr0.5.4
gRbase2.0.1
GeneRegionScan1.58.0
fmsb0.7.6
fastmap1.1.1
fastcluster1.2.6
DT0.33
doBy4.6.20
verification1.42
SpatialExperiment1.12.0
ROpenCVunknown
rootSolve1.8.2.4
RcppHNSW0.6.0
qtl1.66
kde1d1.0.7
httpuv1.6.15
HMMcopy1.44.0
fission1.22.0
fibroEset1.44.0
emdbook1.3.13
drc3.0-1
broman0.80
Biobase2.62.0
bigmemory.sri0.1.8
assertthat0.2.1
V84.4.2
udunits20.13.2.1
tune1.2.1
selectr0.4-2
ROCR1.0-11
permute0.9-7
pcaMethods1.94.0
np0.60-17
mvmeta1.0.3
interp1.1-6
hthgu133aprobe2.18.0
GenSA1.1.14
FDb.InfiniumMethylation.hg192.2.0
fastDummies1.7.3
dvmisc1.1.4
directlabels2024.1.21
car3.1-2
broom.helpers1.15.0
BiocNeighbors1.20.2
AnnotationDbi1.64.1
animation2.7
wordcloud2.6
sandwich3.1-0
pspline1.0-19
methylKit1.28.0
memoise2.0.1
IlluminaHumanMethylation450kmanifest0.4.0
FlowSorted.CordBlood.450k1.30.0
FDb.InfiniumMethylation.hg182.2.0
DOSE3.28.2
debugme1.1.0
corrgram1.14
c3net1.1.1.1
zoo1.8-12
triangle1.0
tis1.39
splancs2.01-44
SimpleITK2.1.1.1
scRNAseq2.16.0
pd.mogene.1.0.st.v13.14.1
parameters0.21.6
mcmcplots0.4.3
graphite1.48.0
GGally2.2.1
actuar3.3-4
utf81.2.4
TxDb.Hsapiens.UCSC.hg18.knownGene3.2.2
simsurv1.0.0
rncl0.8.7
RcppThread2.1.7
poweRlaw0.80.0
HPO.db0.99.2
GSVA1.50.5
ergm4.6.0
dtplyr1.3.1
collapse2.0.13
bezier1.1.2
bayesm3.1-6
AnnotationFilter1.26.0
svgPanZoom0.3.4
ResourceSelection0.3-6
philentropy0.8.0
mice3.16.0
lubridate1.9.3
lme41.1-35.3
hardhat1.3.1
GO.db3.18.0
GMMAT1.4.2
geepack1.3.10
Formula1.2-5
cummeRbund2.44.0
catmap1.6.4
splitstackshape1.4.8
RnBeads2.20.0
RcppEigen0.3.4.0.0
modeldata1.3.0
MLInterfaces1.82.0
ISwR2.0-8
ineq0.2-13
glue1.7.0
GenomicFeatures1.54.4
doMC1.3.8
dendsort0.3.4
dbplyr2.5.0
checkmate2.3.1
calibrate1.7.7
BSgenome.Mmusculus.UCSC.mm91.4.0
tidytree0.4.6
stringi1.8.3
rARPACK0.11-0
plogr0.2.0
nabor0.5.0
klaR1.7-3
infotheo1.2.0.1
glmmTMB1.1.9
ggmap4.0.0
genalg0.2.1
effectsize0.8.7
dynlm0.3-6
Deriv4.1.3
crul1.4.2
AlgDesign1.2.1
TxDb.Dmelanogaster.UCSC.dm3.ensGene3.2.2
tmvtnorm1.6
tfautograph0.3.2
sodium1.3.1
regioneR1.34.0
PureCN2.8.1
pbivnorm0.6.0
networkDynamic0.11.4
leaps3.1
gistr0.9.0
gam1.22-3
fs1.6.3
ffpe1.46.0
fastseg1.48.0
DirichletMultinomial1.44.0
agricolae1.3-7
themis1.0.2
spatstat.geom3.2-9
sass0.4.9
RMariaDB1.3.1
read.gt3x1.2.0
pscl1.5.9
numbers0.8-5
multtest2.58.0
isoband0.2.7
GIGrvg0.8
genoPlotR0.8.11
fANCOVA0.6-1
effects4.2-2
distr2.9.3
devtools2.4.5
deSolve1.40
crayon1.5.2
ChIPQC1.38.0
biomaRt2.58.2
Wrench1.20.0
timechange0.3.0
stabs0.6-4
ReportingTools2.42.3
qcc2.7
preprocessCore1.64.0
mixtools2.0.0
lumi2.54.0
igraph2.0.3
GSEABase1.64.0
GSA1.03.3
GreyListChIP1.34.0
glasso1.11
GenomicDistributions1.10.0
conquer1.3.3
classInt0.4-10
ChIPseqR1.56.0
cellranger1.1.0
arrow15.0.1
apcluster1.4.12
admisc0.35
survMisc0.5.6
SuperLearner2.0-29
snakecase0.11.1
sm2.2-6.0
SeqArray1.42.2
RSNNS0.4-17
metapod1.10.1
manipulate1.0.1
goftest1.2-3
diffobj0.3.5
CNTools1.58.0
BSgenome.Hsapiens.1000genomes.hs37d50.99.1
BRAIN1.48.0
BBmisc1.13
AllelicImbalance1.40.0
ScaledMatrix1.10.0
ritis1.0.0
RcppGSL0.3.13
parallelMap1.5.1
mouse4302cdf2.18.0
merTools0.6.2
ieugwasr1.0.0
HSAUR21.1-20
GenomeInfoDbData1.2.11
fontquiver0.2.1
EGSEAdata1.30.0
dbscan1.1-12
caret6.0-94
BiocBaseUtils1.4.0
BiasedUrn2.0.11
tmvnsim1.0-2
snowfall1.84-6.3
signatureSearchData1.16.0
rematch2.0.0
Rbowtie22.8.0
randomForest4.7-1.1
MLmetrics1.1.3
labeling0.4.3
fastGHQuad1.0.1
BubbleTree2.32.0
bdsmatrix1.3-7
afex1.3-1
xts0.13.2
tidytidbits0.3.2
sn2.1.1
rstantools2.4.0
ranger0.16.0
operator.tools1.6.3
OpenMx2.21.11
motifStack1.46.0
lpsymphony1.30.0
forecast8.22.0
diagram1.6.5
nlme3.1-164
statnet.common4.9.0
sjstats0.18.2
rvcheck0.2.1
RProtoBufLib2.14.1
poorman0.2.7
mda0.5-4
matrixStats1.3.0
hgu133plus2cdf2.18.0
girafe1.54.0
ggvis0.4.9
BSgenome.Celegans.UCSC.ce21.4.0
beeswarm0.4.0
annotate1.80.0
affycoretools1.74.0
singscore1.22.0
sem3.1-15
pseval1.3.1
mitml0.4-5
JGR1.9-2
heatmaply1.5.0
argparse2.2.3
VennDiagram1.7.3
TTR0.24.4
sjPlot2.8.15
runjags2.2.2-1.1
Rmpi0.7-2
kinship21.9.6.1
here1.0.1
gpls1.74.0
GENESIS2.32.0
forestplot3.1.3
doRNG1.8.6
chron2.3-61
arrayQuality1.80.0
xlsxjars0.6.1
qgraph1.9.8
PearsonDS1.3.1
MCMCpack1.7-0
hmmm1.0-4
haplo.stats1.9.5.1
gitcreds0.1.2
desc1.4.3
BSgenome.Hsapiens.UCSC.hg381.4.5
Brobdingnag1.2-9
BiocFileCache2.10.2
babelgene22.9
sitmo2.0.2
RUVSeq1.36.0
rsvg2.6.0
pracma2.4.4
pacman0.5.1
mogene20sttranscriptcluster.db8.8.0
MESS0.5.12
grImport0.9-7
commonsMath1.2.8
blme1.0-5
askpass1.2.0
TSP1.2-4
SnowballC0.7.1
SC31.30.0
satuRn1.10.0
RBGL1.78.0
R2OpenBUGS3.2-3.2.1
purrr1.0.2
lpSolve5.6.20
lambda.r1.2.4
labelled2.13.0
influenceR0.1.5
ggsci3.0.3
doMPI0.2.2
docopt0.7.1
dlm1.1-6
DelayedArray0.28.0
cyclocomp1.1.1
clusterProfiler4.10.1
SRAdb1.64.0
signeR2.4.0
schoolmath0.4.2
rlang1.1.3
RcppArmadillo0.12.8.2.1
R2admb0.7.16.3
qqman0.1.9
mlogit1.1-1
locfdr1.1-8
LearnBayes2.15.1
lawstat3.6
kknn1.3.1
karyoploteR1.28.0
gclus1.3.2
curl5.2.1
covr3.6.4
cancerTiming3.1.8
aplpack1.3.5
acepack1.4.2
SparseArray1.2.4
solrium1.2.0
sf1.0-15
rsvd1.0.5
rmutil1.1.10
rlemon0.2.1
mcbiopi1.1.6
logitnorm0.8.39
lintr3.1.2
leiden0.4.3.1
kableExtra1.4.0
GWASTools1.48.0
GGIR3.0-9
BradleyTerry21.1-2
BeadDataPackR1.54.0
bamsignals1.34.0
Amelia1.8.2
grid4.3.2
shinyjs2.1.0
sessioninfo1.2.2
multcompView0.1-10
mime0.12
MALDIquant1.22.2
Hmisc5.1-2
CBPS0.23
BMA3.18.17
BiocStyle2.30.0
AER1.2-12
WikidataQueryServiceR1.0.0
tm0.7-13
SomaticSignatures2.38.0
scales1.3.0
ragg1.3.0
nortest1.0-4
MRInstruments0.3.2
metagenomeSeq1.43.0
isdparser0.4.0
gplots3.1.3.1
formatR1.14
evaluate0.23
ellipsis0.3.2
amap0.8-19
WGCNA1.72-5
superpc1.12
startupmsg0.9.6.1
spelling2.3.0
shinycssloaders1.0.0
seqbias1.50.0
rms6.8-0
RcppML0.3.7
plotrix3.8-4
MVA1.0-8
memisc0.99.31.7
LaplacesDemon16.1.6
Homo.sapiens1.3.1
HKprocess0.1-1
hapmapsnp61.44.0
gridExtra2.3
GPfit1.0-8
geeM0.10.1
FSA0.9.5
crosstalk1.2.1
affy1.80.0
waldo0.5.2
splus2R1.3-5
smoother1.3
SIS0.8-8
Signac1.13.0
renv1.0.7
prodlim2023.08.28
ppclust1.1.0.1
MutationalPatterns3.12.0
matlab1.0.4
IRanges2.36.0
igraphdata1.0.1
Gmisc3.0.3
ggplotify0.1.2
geneLenDataBase1.38.0
etrunct0.1
entropy1.3.1
coda0.19-4.1
Affymoe4302Expr1.40.0
tfruns1.5.3
rstudioapi0.16.0
RiboProfiling1.32.0
pkgconfig2.0.3
pedgene3.9
nor1mix1.3-3
JASPAR20161.30.0
htmltools0.5.8.1
globals0.16.3
ggExtra0.10.1
ff4.0.12
abind1.4-5
rpart4.1.23
umap0.2.10.0
R.filesets2.15.1
Nozzle.R11.1-1.1
mvtnorm1.2-4
MRMix0.1.0
MatrixGenerics1.14.0
googleVis0.7.1
future1.33.2
EnsDb.Hsapiens.v862.99.0
DiagrammeR1.0.11
convert1.78.0
lattice0.22-5
XLConnect1.0.9
tkrplot0.0-27
sesameData1.20.0
seqminer9.4
Rtsne0.17
pso1.0.4
logisticPCA0.2
GWASdata1.40.0
FD1.0-12.3
BSgenome1.70.2
arrayhelpers1.1-0
tfestimators1.9.2
sjmisc2.8.9
shape1.4.6.1
R.utils2.12.3
lhs1.1.6
httpcode0.3.0
gridBase0.4-7
genomeIntervals1.58.0
furrr0.3.1
bookdown0.39
backports1.4.1
affyPLM1.78.0
tcltk21.2-11
showtextdb3.0
shinyWidgets0.8.6
Rsubread2.16.1
rgexf0.16.2
rgenoud5.9-0.10
projpred2.8.0
polyclip1.10-6
microbiome1.24.0
magick2.8.2
ltsa1.4.6
InteractionSet1.30.0
Icens1.74.0
hgu133a.db3.13.0
GENEAread2.0.10
estimability1.5
digest0.6.35
BiocParallel1.36.0
writexl1.5.0
TxDb.Mmusculus.UCSC.mm10.knownGene3.10.0
sp2.1-3
SingleCellExperiment1.24.0
Rgraphviz2.46.0
parsnip1.2.1
ncvreg3.14.2
mscstts0.6.3
mixmeta1.2.0
HDO.db0.99.1
FateID0.2.2
clValid0.7
anytime0.3.9
xaringan0.30
webshot0.5.5
terra1.7-65
survivalROC1.0.3.1
RgoogleMaps1.5.1
rBiopaxParser2.42.0
pseudo1.4.3
plsVarSel0.9.11
pkgcond0.1.1
ModelMetrics1.2.2.2
knitr1.46
jpeg0.1-10
JASPAR20181.1.1
gridSVG1.7-5
FNN1.1.4
exactRankTests0.8-35
changepoint2.2.4
carData3.0-5
biovizBase1.50.0
xml21.3.6
tximport1.30.0
timereg2.0.5
texreg1.39.3
stringr1.5.1
praznik11.0.0
phangorn2.11.1
OmicCircos1.40.0
jquerylib0.1.4
flextable0.9.5
BSgenome.Ecoli.NCBI.200808051.3.1000
bluster1.12.0
VIM6.2.2
tuneR1.4.7
tibble3.2.1
RNeXML2.4.11
Rglpk0.6-5
penalized0.9-52
munsell0.5.1
HiClimR2.2.1
gProfileR0.7.0
gnm1.1-5
gap1.5-3
facets0.6.2
energy1.7-11
dgof1.4
Cairo1.6-2
BSgenome.Cfamiliaris.UCSC.canFam21.4.0
ASCAT3.1.2
ActiveDriver1.0.0
sloop1.0.1
R.huge0.10.1
Rcpp1.0.12
png0.1-8
missMethyl1.36.0
MatrixModels0.5-3
gsmoothr0.1.7
forge0.2.0
fastmatch1.1-4
BiocIO1.12.0
aroma.apd0.7.0
aroma.affymetrix3.2.2
aCGH1.80.0
triebeard0.4.1
subselect0.15.5
SNPRelate1.36.1
showtext0.9-7
ShortRead1.60.0
QuickJSR1.1.3
profileModel0.6.1
mixdist0.5-5
miscF0.1-5
mi1.1
mboost2.9-9
genomewidesnp6Crlmm1.0.7
generics0.1.3
float0.3-2
ergm.count4.1.1
EpiDynamics0.3.1
enrichplot1.22.0
Deducer0.7-9
cleaver1.40.0
systemfonts1.0.6
stringdist0.9.12
som0.3-5.1
setRNG2024.2-1
mlbench2.1-3.1
lpSolveAPI5.5.2.0-17.11
gridGraphics0.5-1
ggforce0.4.2
FlowSorted.Blood.450k1.40.0
findpython1.0.8
fields15.2
downlit0.4.3
aws2.5-5
annotatr1.28.0
Manage your own packages
top

Per-user R library

Users can install their own packages. By default, on RHEL8, this private library is located at /data/$USER/R/rhel8/%v, where %v is the major.minor version of R (e.g. 4.3). This is a change from the behavior on RHEL7, where the default location was ~/R/%v/library. Note that some versions of R do not create this directory automatically, so it is safest to create it manually before installing R packages.

Users can choose an alternative location for this directory by setting and exporting the environment variable $R_LIBS_USER in their shell startup script. If you are a bash user, for example, you could add the following line to your ~/.bash_profile to relocate your R library:

export R_LIBS_USER="/data/$USER/code/R/rhel8/%v"
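R only adds $R_LIBS_USER to its library search path if the directory already exists, which is another reason to create it before installing the first package. For example, for R 4.3 under the default location (adjust the path to your setup):

```r
# run once from a shell: mkdir -p /data/$USER/R/rhel8/4.3
# then verify from R that the private library is picked up:
.libPaths()   # the private library should appear first in the output
```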

Here is an example using the pacman package for easier package management:

[user@cn3144 ~]$ R

R version 4.1.0 (2021-05-18) -- "Camp Pontanezen"
Copyright (C) 2021 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

> library(pacman)
> p_isinstalled(rapport)
[1] FALSE
> p_install(rapport)
Installing package into ‘/spin1/home/linux/user/R/4.1/library’
(as ‘lib’ is unspecified)
also installing the dependency ‘rapportools’

trying URL 'http://cran.rstudio.com/src/contrib/rapportools_1.0.tar.gz'

[...snip...]
rapport installed
>

More granular (and reproducible) package management

A better approach than relying on packages installed centrally or in your home directory is to create isolated, per-project package sets. This increases reproducibility at the cost of increased storage and potential package installation headaches. Packages such as renv can be used to implement this approach.
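A minimal per-project workflow using the centrally installed renv package might look like this (a sketch, not a complete guide):

```r
# inside the project directory
renv::init()       # create a project-local library and renv.lock
# ...install and use packages as usual...
renv::snapshot()   # record the exact package versions in renv.lock
renv::restore()    # later (or on another system): reinstall recorded versions
```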

R batch job
top

R batch jobs are similar to any other batch job. A batch script ('rjob.sh') is created that sets up the environment and runs the R code:

#!/bin/bash

module load R/4.2
R --no-echo --no-restore --no-save < /data/user/Rtests/Rtest.r > /data/user/Rtests/Rtest.out

Or use Rscript instead:

#!/bin/bash

module load R/3.5
Rscript /data/user/Rtests/Rtest.r > /data/user/Rtests/Rtest.out

Submit this job using the Slurm sbatch command.

sbatch [--cpus-per-task=#] [--mem=#] rjob.sh

Command line arguments for R scripts

R scripts can be written to accept command line arguments. The simplest way of doing this is with the commandArgs() function. For example, the script 'simple.R'

args <- commandArgs(trailingOnly=TRUE)

i <- 0
for (arg in args) {
    i <- i + 1
    cat(sprintf("arg %02i: '%s'\n", i, arg))
}

can be called like this

[user@cn3144]$ module load R
[user@cn3144]$ Rscript simple.R this is a test
arg 01: 'this'
arg 02: 'is'
arg 03: 'a'
arg 04: 'test'
[user@cn3144]$ Rscript simple.R 'this is a test'
arg 01: 'this is a test'
[user@cn3144]$ R --no-echo --no-restore --no-save --args 'this is a test' < simple.R
arg 01: 'this is a test'
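Note that commandArgs() returns character strings, so numeric arguments have to be converted explicitly. A minimal sketch:

```r
args <- commandArgs(trailingOnly=TRUE)
n <- as.numeric(args[1])   # "42" becomes 42; non-numeric input yields NA
if (is.na(n)) stop("first argument must be numeric")
```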

Alternatively, command line arguments can be parsed using the getopt package. For example:

library(getopt)

###
### Describe the expected command line arguments
###
# mask: 0=no argument
#       1=required argument
#       2=optional argument
spec <- matrix(c(
# long name  short name  mask  type          description(optional)
# ---------  ----------  ----  ------------  ---------------------
  'file'   , 'f',          1,  'character',  'input file',
  'verbose', 'v',          0,  'logical',    'verbose output', 
  'help'   , 'h',          0,  'logical',    'show this help message'
), byrow=TRUE, ncol=5);

# parse the command line
opt <- getopt(spec);

# show help if requested
if (!is.null(opt$help)) {
  cat(getopt(spec, usage=TRUE));
  q();
}

# set defaults
if ( is.null(opt$file) )    { opt$file    = 'testfile' }
if ( is.null(opt$verbose) ) { opt$verbose = FALSE }
print(opt)

This script can be used as follows

[user@cn3144]$ Rscript getopt_example.R --file some.txt --verbose
$ARGS
character(0)

$file
[1] "some.txt"

$verbose
[1] TRUE

[user@cn3144]$ Rscript getopt_example.R --file some.txt
$ARGS
character(0)

$file
[1] "some.txt"

$verbose
[1] FALSE

[user@cn3144]$ Rscript getopt_example.R --help
Usage: getopt_example.R [-[-file|f] ] [-[-verbose|v]] [-[-help|h]]
    -f|--file       input file
    -v|--verbose    verbose output
    -h|--help       show this help message

getopt does not support mixing flags and positional arguments. There are other packages with different features and approaches that can be used to design command line interfaces for R scripts.
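One such package is argparse (available centrally), which handles positional arguments alongside flags. A minimal sketch (option names are illustrative):

```r
library(argparse)

parser <- ArgumentParser(description="example CLI")
parser$add_argument("-v", "--verbose", action="store_true", help="verbose output")
parser$add_argument("infile", help="input file (positional)")
args <- parser$parse_args()
if (args$verbose) cat(sprintf("processing '%s'\n", args$infile))
```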

Swarm of R jobs
top

A swarm of jobs is an easy way to submit a set of independent commands requiring identical resources.

Create a swarmfile (e.g. rjobs.swarm). For example:

Rscript /data/user/R/R1  > /data/user/R/R1.out
Rscript /data/user/R/R2  > /data/user/R/R2.out
Rscript /data/user/R/R3  > /data/user/R/R3.out

Submit this job using the swarm command.

swarm -f rjobs.swarm [-g #] [-t #] --module R/3.5
where
-g #          Number of Gigabytes of memory required for each process (1 line in the swarm command file)
-t #          Number of threads/CPUs required for each process (1 line in the swarm command file)
--module R/3.5  Loads the R/3.5 module for each subjob in the swarm
Rswarm
top

Rswarm is a utility to create a series of R input files from a single (master) R template file, with different output filenames and unique random number generator seeds. It simultaneously creates a swarm command file that can be used to submit the swarm of R jobs. Rswarm was originally developed by Lori Dodd and Trevor Reeve, with modifications by the Biowulf staff.

Say, for example, that the goal of a simulation study is to evaluate properties of the t-test. The function "sim.fun" in file "sim.R" below repeatedly generates random normal data with a given mean, performs a one sample t-test (i.e. testing if the mean is different from 0), and records the p-values.

#######################################
# n.samp:  size of samples generated for each simulation
# mu:      mean
# sd:      standard deviation
# n.sim:  the number of simulations
# output1: output table
# seed:    the seed for set.seed
#######################################
sim.fun <- function(n.samp=100, mu=0, sd=1, n.sim, output1, seed){

    set.seed(seed)

    p.values <- c()
    for (i in 1:n.sim){
        x <- rnorm(n.samp, mean=mu, sd=sd)
        p.values <- c(p.values, t.test(x)$p.value)
    }
    saveRDS(p.values, file=output1)
}

To use Rswarm, create a wrapper script similar to the following ("rfile.R")

source("sim.R")
sim.fun(n.sim=DUMX, output1="DUMY1",seed=DUMZ)

using the dummy variables which will be replaced by Rswarm.

Dummy variable   Replaced with
DUMX             Number of simulations to be specified in each replicate file
DUMY1            Output file 1
DUMY2            Output file 2 (optional)
DUMZ             Random seed

To swarm this code, we need replicates of the rfile.R file, each with a different seed and a different output file. The Rswarm utility will create the specified number of replicates, supply each with a different seed (from an external file containing seed numbers), and create unique output files for each replicate. Note that the number of simulations within each file can be specified in addition to the number of replicates.

For example, the following Rswarm command at the Biowulf prompt will create 2 replicate files, each specifying 50 simulations, a different seed taken from the file "seedfile.txt", and unique output files.

[user@biowulf]$ ls -lh
total 8.0K
-rw-r--r-- 1 user group  63 Apr 25 12:34 rfile.R
-rw-r--r-- 1 user group 564 Apr 25 12:15 seedfile.txt
-rw-r--r-- 1 user group 547 Apr 25 12:04 sim.R
[user@biowulf]$ head -n2 seedfile.txt
24963
27507
[user@biowulf]$ Rswarm --rfile=rfile.R --sfile=seedfile.txt --path=. \
    --reps=2 --sims=50 --start=0 --ext1=.rds
The template file is rfile.R
The seed file is seedfile.txt
The path is .
The number of replicates desired is 2
The number of sims per file is 50
The starting file number is 0+1
The extension for output files 1 is .rds
The extension for output files 2 is .std.txt
Is this correct (y or n)? : y
Creating file number 1: ./rfile1.R with output ./rfile1.rds ./rfile1.std.txt and seed 24963
Creating file number 2: ./rfile2.R with output ./rfile2.rds ./rfile2.std.txt and seed 27507
[user@biowulf]$ ls -lh
total 16K
-rw-r--r-- 1 user group  69 Apr 25 12:39 rfile1.R
-rw-r--r-- 1 user group  69 Apr 25 12:39 rfile2.R
-rw-r--r-- 1 user group  63 Apr 25 12:34 rfile.R
-rw-r--r-- 1 user group  50 Apr 25 12:39 rfile.sw
-rw-r--r-- 1 user group 564 Apr 25 12:15 seedfile.txt
-rw-r--r-- 1 user group 547 Apr 25 12:04 sim.R
[user@biowulf]$ cat rfile1.R
source("sim.R")
sim.fun(n.sim=50, output1="./rfile1.rds",seed=24963)
[user@biowulf]$ cat rfile2.R
source("sim.R")
sim.fun(n.sim=50, output1="./rfile2.rds",seed=27507)
[user@biowulf]$ cat rfile.sw
R --no-echo --no-restore --no-save < ./rfile1.R
R --no-echo --no-restore --no-save < ./rfile2.R
[user@biowulf]$ swarm -f rfile.sw --time=10 --partition=quick --module R
199110
[user@biowulf]$ ls -lh *.rds
-rw-r--r-- 1 user group 445 Apr 25 12:52 rfile1.rds
-rw-r--r-- 1 user group 445 Apr 25 12:52 rfile2.rds

Full Rswarm usage:

Usage: Rswarm [options]
   --rfile=[file]   (required) R program requiring replication
   --sfile=[file]   (required) file with generated seeds, one per line
   --path=[path]    (required) directory for output of all files
   --reps=[i]       (required) number of replicates desired
   --sims=[i]       (required) number of sims per file
   --start=[i]      (required) starting file number
   --ext1=[string]    (optional) file extension for output file 1
   --ext2=[string]    (optional) file extension for output file 2
   --help, -h         print this help text

Note that R scripts can be written to take a random seed as a command line argument or derive it from the environment variable SLURM_ARRAY_TASK_ID to achieve an equivalent result.
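For example, within a job array each task could derive its own seed and output file from SLURM_ARRAY_TASK_ID like this (the file name pattern is illustrative):

```r
task_id <- as.integer(Sys.getenv("SLURM_ARRAY_TASK_ID", "1"))
set.seed(task_id)
outfile <- sprintf("result_%03d.rds", task_id)
```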

Using the parallel package
top

The R parallel package provides functions for parallel execution of R code on machines with multiple CPUs. Unlike some other parallel processing methods, all worker processes share the full state of R when spawned, so no data or code needs to be re-initialized if it was loaded before starting the workers. Spawning is also very fast since no new R instance needs to be started.

Detecting the number of CPUs

The parallel package includes the detectCores() function, which is often used to automatically detect the number of available CPUs. However, it always reports all CPUs physically present on a node irrespective of how many CPUs were allocated to the job. This is not the desired behavior for batch jobs or sinteractive sessions. Instead, please use the availableCores() function from the future (or, for R >= 4.0.3, parallelly) package, which correctly returns the number of allocated CPUs:

parallelly::availableCores() # for R >= 4.0.3
# or
future::availableCores()

Or, if you prefer, you could also write your own detection function similar to the following example

detectBatchCPUs <- function() { 
    ncores <- as.integer(Sys.getenv("SLURM_CPUS_PER_TASK")) 
    if (is.na(ncores)) { 
        ncores <- as.integer(Sys.getenv("SLURM_JOB_CPUS_PER_NODE")) 
    } 
    if (is.na(ncores)) { 
        return(2)
    } 
    return(ncores) 
}

Random number generation

The state of the random number generator in each worker process has to be carefully considered for any parallel workloads. See the help for mcparallel and the parallel package documentation for more details.
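For reproducible parallel results, the "L'Ecuyer-CMRG" generator gives each worker process its own well-defined random number stream. A minimal sketch:

```r
library(parallel)

RNGkind("L'Ecuyer-CMRG")
set.seed(42)
res <- mclapply(1:4, function(i) rnorm(3), mc.cores = 2)
# re-running with the same seed reproduces the same per-worker streams
```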

Example 1: mclapply

The mclapply() function calls lapply() in parallel, so the first two arguments to mclapply() are exactly the same as for lapply(). In addition, the mc.cores argument needs to be specified to split the computation across multiple CPUs on the same node. In most cases mc.cores should equal the number of allocated CPUs.

> ncpus <- parallelly::availableCores()
> options(mc.cores = ncpus) # set a global option for parallel packages
# Then run mclapply() 
> mclapply(X, FUN, ..., mc.cores = ncpus)

Performance comparison between lapply() and mclapply():

> library(parallel)
> ncpus <- parallelly::availableCores()
> N <- 10^6
> system.time(x<-lapply(1:N, function(i) {rnorm(300)}))
##   user  system elapsed
## 36.588   1.375  38.053
> system.time(x<-mclapply(1:N, function(i) {rnorm(300)},mc.cores = ncpus)) #Test on a phase5 node with ncpus=12
##   user  system elapsed
## 11.587  14.547  13.684

In this example, using 12 CPUs with mclapply() reduced the runtime by only a factor of 2.8 compared to running on a single CPU. Under ideal conditions a 12-fold reduction would be expected, meaning each CPU did less useful work in the parallel case than in the sequential (single CPU) case. This ratio is known as parallel efficiency; here it is sequential CPU time / parallel CPU time = (38.1 * 1) / (13.7 * 12) = 23%.

Parallel jobs should aim for an efficiency of 70-80%. Because parallel algorithms rarely scale ideally to multiple CPUs, we highly recommend performing scaling tests before running programs in parallel. To better optimize the usage of mclapply(), we benchmarked the performance of mclapply() with 2-32 CPUs and compared their efficiency:

The code used for benchmark was:

> library(parallel)
> library(microbenchmark)
> N <- 10^5
# benchmark the performance with 2-32 CPUs for 20 times
> for (n in c(2,4,6,8,16,32)) {
microbenchmark(mctest = mclapply(1:N, function(i) {rnorm(30000)},mc.cores=n),times = 20)
}

As shown in the figure, this particular mclapply() call should be run with no more than 6 CPUs to maintain an efficiency above 70%. This may be different for your code and should be tested for each type of workload. Note that memory usage increases as more CPUs are used, which makes it even more important not to allocate more CPUs than necessary.

Example 2: foreach

A very convenient way to do parallel computations is provided by the foreach package. Here is a simple example (adapted from a blog post):

> library(foreach)
> library(doParallel)
> library(doMC)
> registerDoMC(cores=future::availableCores())
> max.eig <- function(N, sigma) {
     d <- matrix(rnorm(N**2, sd = sigma), nrow = N)
     E <- eigen(d)$values
     abs(E)[[1]] } 
> library(rbenchmark)
> benchmark(
     foreach(n = 1:100) %do% max.eig(n, 1),
     foreach(n = 1:100) %dopar% max.eig(n, 1) )

##          test                             replications elapsed relative user.self sys.self user.child sys.child
##1    foreach(n = 1:100) %do% max.eig(n, 1)          100  32.696    3.243 32.632   0.059      0.000      0.00
##2 foreach(n = 1:100) %dopar% max.eig(n, 1)          100  10.083    1.000 3.037    3.389     43.417     10.73
>                                                                       

Note that with 12 CPUs we got a speedup of only 3.2 relative to sequential execution, resulting in a low parallel efficiency. Another cautionary tale: carefully test the scaling of parallel code.

A second way to run foreach in parallel:

> library(doParallel)
> cl <- makeCluster(future::availableCores())
> registerDoParallel(cl) 
 # parallel command
> ...
 # stop cluster
> stopCluster(cl)
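Here is a complete toy example of this second pattern; the loop body is purely illustrative:

```r
library(doParallel)
library(foreach)

cl <- makeCluster(future::availableCores())
registerDoParallel(cl)
x <- foreach(i = 1:8, .combine = "c") %dopar% sqrt(i)
stopCluster(cl)
print(x)
```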

What if we increased the number of tasks and the size of the largest matrix (i.e. more work per task)? In the example above that means increasing the i in foreach(n=1:i) using a fixed number of CPUs (32 in this case). We then calculated the speedup relative to execution on 2 CPUs:

If parallelism was 100% efficient, the speedup would be 16-fold. We recommend running jobs at 70% parallel efficiency or better, which would correspond to an 11-fold speedup in this case (blue horizontal line). In this example, 70% efficiency is reached at i > 300. That means on biowulf you should only run this code on 32 CPUs for i > 300.

How does the code perform with different numbers of CPUs for i = 500? Based on the results shown below, this code should be run with no more than 32 CPUs to ensure that efficiency stays above 70%.

Using the BiocParallel package
top

The R BiocParallel package provides modified versions and novel implementations of functions for parallel evaluation, tailored for use with Bioconductor objects. Like the parallel package, it is not aware of Slurm allocations and will therefore, by default, try to use parallel::detectCores() - 2 CPUs, i.e. all but 2 of the CPUs installed on a compute node, irrespective of how many CPUs have been allocated to the job. That will lead to overloaded jobs and very inefficient code. You can verify this by checking the registered backends after allocating an interactive session with 2 CPUs:

> library(BiocParallel)
> registered()
$MulticoreParam
class: MulticoreParam
  bpisup: FALSE; bpnworkers: 54; bptasks: 0; bpjobname: BPJOB
  bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
  bptimeout: 2592000; bpprogressbar: FALSE
  bpRNGseed: 
  bplogdir: NA
  bpresultdir: NA
  cluster type: FORK
[...snip...]

So the default backend (top of the registered stack) would use 54 workers on 2 CPUs. The default backend can be changed with

> options(MulticoreParam=quote(MulticoreParam(workers=future::availableCores())))
> registered()
$MulticoreParam
class: MulticoreParam
  bpisup: FALSE; bpnworkers: 2; bptasks: 0; bpjobname: BPJOB
  bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
[...snip..]

or

> register(MulticoreParam(workers = future::availableCores()), default=TRUE)

Alternatively, a param object can be passed to BiocParallel functions.
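For example, a param object passed via the BPPARAM argument takes precedence over the registered default:

```r
library(BiocParallel)

param <- MulticoreParam(workers = future::availableCores())
res <- bplapply(1:8, sqrt, BPPARAM = param)
```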

Implicit multithreading
top

R can do implicit multithreading when using certain optimized functions in its standard library, or functions that take advantage of parallelized routines in the lower-level math libraries.

The function crossprod(m) which is equivalent to calculating t(m) %*% m, for example, makes use of implicit parallelism in the underlying math libraries and can benefit from using more than one thread. The number of threads used by such functions is regulated by the environment variable OMP_NUM_THREADS, which the R module sets automatically when loaded as part of a batch or interactive job. Here is the runtime of this function with different values for OMP_NUM_THREADS:

crossprod benchmark

The code used for this benchmark was

# this file is benchmark2.R
runs <- 3
o <- 2^13
b <- 0

for (i in 1:runs) {
  a <- matrix(rnorm(o*o), o, o)
  invisible(gc())
  timing <- system.time({
    b <- crossprod(a)		# equivalent to: b <- t(a) %*% a
  })[3]
  cat(sprintf("%f\n", timing))
}

And was called with

node$ module load R/3.5
node$ OMP_NUM_THREADS=1 Rscript benchmark2.R
node$ OMP_NUM_THREADS=2 Rscript benchmark2.R
...
node$ OMP_NUM_THREADS=32 Rscript benchmark2.R

From within a job that had been allocated 32 CPUs.

Notes:

There appears to be another level of parallelism within the R libraries. One function that takes advantage of this is the dist function. The level of parallelism allowed by this mechanism seems to be set with two internal R functions (setMaxNumMathThreads and setNumMathThreads). Note that this is a distinct mechanism, i.e. setting OMP_NUM_THREADS has no impact on dist, and setMaxNumMathThreads has no impact on the performance of crossprod. Here is the performance of dist with different numbers of threads:

dist benchmark

The timings for this example were created with

# this file is benchmark1.R
rt <- data.frame()
o <- 2^12
m <- matrix(rnorm(o*o), o, o)
for (nt in c(1, 2, 4, 8, 16, 32)) {
    .Internal(setMaxNumMathThreads(nt)) 
    .Internal(setNumMathThreads(nt))
    res <- system.time(d <- dist(m))
    rt <- rbind(rt, c(nt, o, res[3]))
}
colnames(rt) <- c("threads", "order", "elapsed")
write.csv(rt, file="benchmark1.csv", row.names=F)

This was run within an allocation with 32 CPUs with

node$ OMP_NUM_THREADS=1 Rscript benchmark1.R

The same notes about benchmarking as above apply. Also note that there is very little documentation about this to be found online.

R MPI jobs
top

Our R installations include the Rmpi and pbdMPI interfaces to MPI (OpenMPI in our case). R/MPI code can be run as batch jobs or from an sinteractive session with mpiexec or srun --mpi=pmix. Running MPI code from an interactive R session is currently not supported.

The higher level snow MPI cluster interface is currently not supported. However, the doMPI parallel backend for foreach is supported.

See our MPI docs for more detail

Example Rmpi code

This is a lower level Rmpi script

# this script is test1.r
library(Rmpi)
id <- mpi.comm.rank(comm=0)
np <- mpi.comm.size (comm=0)
hostname <- mpi.get.processor.name()

msg <- sprintf ("Hello world from task %03d of %03d, on host %s \n", id , np , hostname)
cat(msg)

invisible(mpi.barrier(comm=0))
invisible(mpi.finalize())

It can be submitted as a batch job with the following script:

#! /bin/bash
# this script is test1.sh

module load R/4.1.0 || exit 1
srun --mpi=pmix Rscript test1.r
## or
# mpiexec Rscript test1.r

which would be submitted with

[user@biowulf]$ sbatch --ntasks=4 --nodes=2 --partition=multinode test1.sh

And would generate output similar to

Hello world from task 000 of 004, on host cn4101
Hello world from task 001 of 004, on host cn4102
Hello world from task 002 of 004, on host cn4103
Hello world from task 003 of 004, on host cn4104

Here is an Rmpi example with actual collective communication, though still very simplistic. This script derives an estimate for π in each task, gathers the results in task 0, and repeats this process n times to arrive at a final estimate:

# this is test2.r
library(Rmpi)

# return a random number read from /dev/urandom
readRandom <- function() {
  dev <- "/dev/urandom"
  rng <- file(dev, "rb", raw=TRUE)
  n <- readBin(rng, what="integer", 1) # read one 4-byte integer
  close(rng)
  return( n[1] )
}

# estimate pi by sampling random points in the unit square and counting
# the fraction that fall within the unit circle
pi_dart <- function(i) {
    est <- mean(sqrt(runif(throws)^2 + runif(throws)^2) <= 1) * 4
    return(est)
}

id <- mpi.comm.rank(comm=0)
np <- mpi.comm.size (comm=0)
hostname <- mpi.get.processor.name()
rngseed <- readRandom()
cat(sprintf("This is task %03d of %03d, on host %s with seed %i\n", 
    id , np , hostname, rngseed))
set.seed(rngseed)

throws <- 1e7
rounds <- 400
pi_global_sum = 0.0
for (i in 1:rounds) {
    # each task comes up with its own estimate of pi
    pi_est <- pi_dart(i)
    # then we gather them all in task 0; type=2 means that the values are doubles
    pi_task_sum <- mpi.reduce(pi_est, type=2, op="sum", dest=0, comm=0)
    if (id == 0) {
        # task with id 0 then uses the sum to calculate an average across the
        # tasks and adds that to the global sum
        pi_global_sum <- pi_global_sum + (pi_task_sum / np)
    }
}

# when we're done, the task id 0 averages across all the rounds and prints the result
if (id == 0) {
    cat(sprintf("Real value of pi = %.10f\n", pi))
    cat(sprintf("  Estimate of pi = %.10f\n", pi_global_sum / rounds))
}

invisible(mpi.finalize())

Submitting this script with a batch script similar to the first example results in output like this:

This is task 000 of 004, on host cn4101 with seed -303950071
This is task 001 of 004, on host cn4102 with seed -1074523673
This is task 002 of 004, on host cn4103 with seed 788983269
This is task 003 of 004, on host cn4104 with seed -922785662
Real value of pi = 3.1415926536
  Estimate of pi = 3.1415935438

doMPI example code

doMPI provides an MPI backend for the foreach package. Here is a simple hello-world-ish doMPI example. Note that in our testing the fewest issues were encountered when the foreach loops were run from the first rank of the MPI job.

suppressMessages({
    library(Rmpi)
    library(doMPI)
    library(foreach)
})

myrank <- mpi.comm.rank(comm=0)
cl <- doMPI::startMPIcluster()
registerDoMPI(cl)
if (myrank == 0) {
    cat("-------------------------------------------------\n")
    cat("== This is rank 0 running the foreach loops ==\n")
    

    x <- foreach(i=1:16, .combine="c") %dopar% {
        id <- mpi.comm.rank(comm=0)
        np <- mpi.comm.size (comm=0)
        hostname <- mpi.get.processor.name()
        sprintf ("Hello world from process %03d of %03d, on host %s \n", id , np , hostname)
    }
    print(x)
    
    x <- foreach(i=1:200, .combine="c") %dopar% {
        sqrt(i)
    }

    cat("-------------------------------------------------\n")
    print(x)

    x <- foreach(i=1:16, .combine="cbind") %dopar% {
        set.seed(i)
        rnorm(3)
    }


    cat("-------------------------------------------------\n")
    print(x)

}
closeCluster(cl)
mpi.quit(save="no")

Do not pass a worker count to startMPIcluster(): doMPI clusters will hang during shutdown if doMPI has to spawn worker processes. Instead, let mpiexec start all the processes; doMPI will then wire up a main process and workers from the existing processes. Note that the startMPIcluster function has to be called early in the script for that reason.

Run a shiny app on biowulf
top

Shiny apps can be run on biowulf for a single user. Since they require tunneling, a running shiny app cannot be shared with other users. However, if the code for the app is accessible to different users, each of them can run their own ephemeral shiny app on biowulf. We will use the following example application:

## this file is 01_hello.r
library(shiny)


ui <- fluidPage(

  # App title ----
  titlePanel("Hello Shiny!"),
  # Sidebar layout with input and output definitions ----
  sidebarLayout(
    # Sidebar panel for inputs ----
    sidebarPanel(
      # Input: Slider for the number of bins ----
      sliderInput(inputId = "bins",
                  label = "Number of bins:",
                  min = 1,
                  max = 50,
                  value = 30)
    ),
    # Main panel for displaying outputs ----
    mainPanel(
      # Output: Histogram ----
      plotOutput(outputId = "distPlot")
    )
  )
)

# Define server logic required to draw a histogram ----
server <- function(input, output) {
  # Histogram of the Old Faithful Geyser Data ----
  # with requested number of bins
  # This expression that generates a histogram is wrapped in a call
  # to renderPlot to indicate that:
  #
  # 1. It is "reactive" and therefore should be automatically
  #    re-executed when inputs (input$bins) change
  # 2. Its output type is a plot
  output$distPlot <- renderPlot({
    x    <- faithful$waiting
    bins <- seq(min(x), max(x), length.out = input$bins + 1)
    hist(x, breaks = bins, col = "#007bc2", border = "white",
         xlab = "Waiting time to next eruption (in mins)",
         main = "Histogram of waiting times")

    })
}

# determine which port to run on; $PORT1 is set by sinteractive --tunnel
port <- suppressWarnings(as.integer(Sys.getenv("PORT1")))
if (is.na(port)) {
  cat("Please remember to use --tunnel to run a shiny app\n")
  cat("See https://hpc.nih.gov/docs/tunneling/\n")
  stop("PORT1 not set")
}

# run the app
shinyApp(
  ui,
  server,
  options = list(port=port, launch.browser=F, host="127.0.0.1")
)

Start an sinteractive session with a tunnel

[user@biowulf]$ sinteractive --cpus-per-task=2 --mem=6g --gres=lscratch:10 --tunnel
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job
[user@cn3144 ~]$ module load R
[user@cn3144 ~]$ Rscript 01_hello.r

Listening on http://127.0.0.1:34239

After you set up your tunnel you can use the URL above to access the shiny app.

Notes for individual packages
top

h2o

h2o is a machine learning package written in java. The R interface starts a java h2o instance with a given number of threads and then connects to it through http. This fails on compute nodes if the http proxy variables are set. Therefore it is necessary to unset http_proxy before using h2o:

[user@biowulf]$ sinteractive --cpus-per-task=4 --mem=20g --gres=lscratch:10
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job
[user@cn3144 ~]$ module load R/4.2
[user@cn3144 ~]$ unset http_proxy
[user@cn3144 ~]$ R
R version 4.2.0 (2022-04-22) -- "Vigorous Calisthenics"
Copyright (C) 2022 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

[...snip...]
> library(h2o)
> h2o.init(ip='localhost', nthreads=future::availableCores(), max_mem_size='12g')
H2O is not running yet, starting it now...

Note:  In case of errors look at the following log files:
    /tmp/RtmpVdW92Y/h2o_user_started_from_r.out
    /tmp/RtmpVdW92Y/h2o_user_started_from_r.err

openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)

Starting H2O JVM and connecting: . Connection successful!

R is connected to the H2O cluster:
    H2O cluster uptime:         1 seconds 683 milliseconds
    H2O cluster timezone:       America/New_York
    H2O data parsing timezone:  UTC
    H2O cluster version:        3.36.1.2
    H2O cluster version age:    3 months and 20 days !!!
    H2O cluster name:           H2O_started_from_R_user_ywu882
    H2O cluster total nodes:    1
    H2O cluster total memory:   10.64 GB
    H2O cluster total cores:    4
    H2O cluster allowed cores:  4
    H2O cluster healthy:        TRUE
    H2O Connection ip:          localhost
    H2O Connection port:        54321
    H2O Connection proxy:       NA
    H2O Internal Security:      FALSE
    R Version:                  R version 4.2.0 (2022-04-22)

>

dyno

dyno is a meta-package that installs several other packages from the dynverse (https://github.com/dynverse). It includes some CRAN packages and some packages available only on GitHub. We generally no longer install new GitHub-only R packages centrally, so here are instructions for installing it as a user.

Installation
###
### 1. install with the default dependent packages
###
[user@biowulf]$ sinteractive --gres=lscratch:5
salloc.exe: Pending job allocation 46116226
salloc.exe: job 46116226 queued and waiting for resources
salloc.exe: job 46116226 has been allocated resources
salloc.exe: Granted job allocation 46116226
salloc.exe: Waiting for resource configuration
salloc.exe: Nodes cn3144 are ready for job
[user@cn3144 ~]$ module load R/4.2
[user@cn3144 ~]$ R -q --no-save --no-restore -e 'devtools::install_github("dynverse/dyno")'

###
### 2. install a patched version of babelwhale
###
[user@cn3144 ~]$ git clone https://github.com/dynverse/babelwhale.git
[user@cn3144 ~]$ patch -p0 <<'__EOF__'
--- babelwhale/R/run.R.orig     2021-07-16 20:58:26.563714000 -0400
+++ babelwhale/R/run.R  2021-07-16 20:58:26.483721000 -0400
@@ -122,6 +122,8 @@
         environment_variables %>% gsub("^.*=", "", .),
         environment_variables %>% gsub("^(.*)=.*$", "SINGULARITYENV_\\1", .)
       ),
+      "http_proxy" = Sys.getenv("http_proxy"),
+      "https_proxy" = Sys.getenv("https_proxy"),
       "SINGULARITY_TMPDIR" = tmpdir,
       "SINGULARITY_CACHEDIR" = config$cache_dir,
       "PATH" = Sys.getenv("PATH") # pass the path along
__EOF__

[user@cn3144 ~]$ R CMD INSTALL babelwhale

###
### 3. Create a configuration that uses a dedicated singularity cache somewhere
###    outside the home directory. In this example using `/data/$USER/dynocache`
###

[user@cn3144 ~]$ R -q --no-save --no-restore <<'__EOF__'
# the heredoc is quoted, so $USER is not expanded by the shell;
# resolve the username from within R instead
config <- babelwhale::create_singularity_config(
  cache_dir = file.path("/data", Sys.getenv("USER"), "dynocache")
)
babelwhale::set_default_config(config, permanent = TRUE)
__EOF__
Notes:
Documentation
top