
# Benchmarks

This page documents performance comparisons between EpiBranch.jl and equivalent R packages. The benchmark scripts live in `benchmarks/` and can be run locally to reproduce the results on your own hardware.

These are indicative timings, not rigorous benchmarks. Both the R and Julia implementations are under active development, so numbers will change over time.

## How to run

Julia benchmarks (requires `BenchmarkTools` and `StableRNGs` in your default environment):

```bash
julia benchmarks/benchmark_julia.jl
```

R benchmarks (requires `epichains` and `ringbp`):

```bash
Rscript benchmarks/benchmark_r.R
Rscript benchmarks/benchmark_r_ringbp.R
```

## Chain simulation (vs epichains)

Simulating 1000 transmission chains to completion, comparing EpiBranch.jl with R's epichains package.

| Scenario | R (epichains) | Julia (EpiBranch) |
|---|---|---|
| 1000 chains, Poisson(0.9) | 22.5 ms | 1.6 ms |
| 1000 chains, NegBin(0.8, 0.5) | 12.6 ms | 1.0 ms |
| 1000 chains + generation time | 31.0 ms | 1.9 ms |
| Chain statistics | 0.48 ms | 0.27 ms |
| Analytical log-likelihood | 0.15 ms | < 0.001 ms |
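
As a rough sanity check, the speedups implied by the table above can be computed directly from the reported medians. A minimal sketch (timings transcribed by hand from the table; they will differ on other hardware, and the log-likelihood row is omitted because its Julia timing is reported only as a bound):

```python
# Median timings (ms) transcribed from the chain-simulation table above.
# Pairs are (R epichains, Julia EpiBranch); these are illustrative values,
# not live measurements.
timings_ms = {
    "1000 chains, Poisson(0.9)": (22.5, 1.6),
    "1000 chains, NegBin(0.8, 0.5)": (12.6, 1.0),
    "1000 chains + generation time": (31.0, 1.9),
    "Chain statistics": (0.48, 0.27),
}

for scenario, (r_ms, julia_ms) in timings_ms.items():
    print(f"{scenario}: {r_ms / julia_ms:.1f}x")
```

The simulation scenarios come out at roughly 12-16x, while the cheap statistics computation shows a much smaller gap, consistent with the note below that ratios, not absolute numbers, are the informative quantity.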

## Intervention scenarios (vs ringbp)

Simulating 500 outbreaks with NegBin(2.5, 0.16) offspring, isolation, and contact tracing, capped at 5000 cases. Comparing EpiBranch.jl with R's ringbp package.

| Scenario | R (ringbp) | Julia (EpiBranch) |
|---|---|---|
| No interventions | 9,893 ms | 707 ms |
| 50% contact tracing | 10,374 ms | 707 ms |
| 50% tracing + quarantine | 9,319 ms | 707 ms |

The Julia column uses the same scenario (isolation + 50% contact tracing, scenario 7 in `benchmark_julia.jl`). The ringbp scenarios differ slightly in parameterisation (incubation-linked generation time, presymptomatic transmission fraction), so these are order-of-magnitude comparisons rather than like-for-like.
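
Even as order-of-magnitude comparisons, the implied ratios are consistent across scenarios. A quick computation (values transcribed from the table above; the single Julia timing is reused for all three rows, as noted):

```python
# ringbp medians (ms), transcribed from the intervention-scenario table above.
r_ms = [9893, 10374, 9319]
julia_ms = 707  # the same Julia scenario is reused for all three rows

for r in r_ms:
    print(f"{r / julia_ms:.0f}x")  # roughly 13-15x across scenarios
```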

## Other benchmarks

| Scenario | Julia (EpiBranch) |
|---|---|
| Line list generation (200 cases) | 0.008 ms |
| NegBin fit from 1000 offspring counts | 0.71 ms |

No direct R comparison is included for line list generation (`simulist` requires epiparameter database setup) or for offspring fitting.

## Notes

- Julia timings exclude compilation (measured after warm-up using BenchmarkTools.jl)
- R timings use `microbenchmark`
- All timings are medians from multiple runs
- Hardware differences will affect absolute numbers; ratios are more informative
- Neither implementation is specifically optimised for speed
- Last run: April 2026