SciMLBenchmarks.jl: Benchmarks for Scientific Machine Learning (SciML) and Equation Solvers

SciMLBenchmarks.jl holds webpages, PDFs, and notebooks showing the benchmarks for the SciML Scientific Machine Learning software ecosystem.

The SciML benchmark suite is designed from the ground up to be a comprehensive open-source benchmark, covering the methods of computational science and scientific computing all the way to AI for science.

Rules: Optimal, Fair, and Reproducible

These benchmarks are meant to represent well-optimized coding style. Benchmarks are preferably run on the provided open benchmarking hardware for full reproducibility (though in some cases, such as with language barriers, this can be difficult). Each benchmark documents the compute devices used along with the package versions needed for reproduction. The benchmarks measure work-precision efficiency, either by timing solvers at approximately matched errors or by building work-precision diagrams for direct comparison of speed at given error tolerances.
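
As a rough illustration of how such work-precision measurements are typically produced, the sketch below uses the DiffEqDevTools.jl WorkPrecisionSet workflow on a linear test ODE with a known analytical solution. The solver choices and tolerance ranges here are illustrative assumptions, not the exact settings used in any particular benchmark.

using OrdinaryDiffEq, DiffEqDevTools, Plots

# Linear test ODE u' = 1.01u with an analytical solution for error measurement
f = ODEFunction((u, p, t) -> 1.01u; analytic = (u0, p, t) -> u0 * exp(1.01t))
prob = ODEProblem(f, 0.5, (0.0, 1.0))

abstols = 1.0 ./ 10.0 .^ (3:10)   # sweep of absolute tolerances
reltols = 1.0 ./ 10.0 .^ (0:7)    # matching relative tolerances
setups = [Dict(:alg => Tsit5()), Dict(:alg => Vern7())]   # solvers to compare

# Each solver is timed at each tolerance pair and its error is computed against
# the analytical solution, yielding one work-precision curve per solver.
wp = WorkPrecisionSet(prob, abstols, reltols, setups; numruns = 10)
plot(wp)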

If any of the code from any of the languages can be improved, please open a pull request.

Results

To view the results of the SciML Benchmarks, go to benchmarks.sciml.ai. By default, this will lead to the latest tagged version of the benchmarks. To see the in-development version of the benchmarks, go to https://benchmarks.sciml.ai/dev/.

Static outputs in pdf, markdown, and html reside in SciMLBenchmarksOutput.

Citing

To cite the SciML Benchmarks, please cite the following:

@article{rackauckas2019confederated,
  title={Confederated modular differential equation APIs for accelerated algorithm development and benchmarking},
  author={Rackauckas, Christopher and Nie, Qing},
  journal={Advances in Engineering Software},
  volume={132},
  pages={1--6},
  year={2019},
  publisher={Elsevier}
}

@article{DifferentialEquations.jl-2017,
  author = {Rackauckas, Christopher and Nie, Qing},
  doi = {10.5334/jors.151},
  journal = {The Journal of Open Research Software},
  keywords = {Applied Mathematics},
  note = {Exported from https://app.dimensions.ai on 2019/05/05},
  number = {1},
  pages = {},
  title = {DifferentialEquations.jl – A Performant and Feature-Rich Ecosystem for Solving Differential Equations in Julia},
  url = {https://app.dimensions.ai/details/publication/pub.1085583166 and http://openresearchsoftware.metajnl.com/articles/10.5334/jors.151/galley/245/download/},
  volume = {5},
  year = {2017}
}

Current Summary

The following is a quick summary of the benchmarks. These summaries paint in broad strokes across the set of tested equations, and specific examples may differ.

Non-Stiff ODEs

Stiff ODEs

Dynamical ODEs

Non-Stiff SDEs

Stiff SDEs

Non-Stiff DDEs

Stiff DDEs

Parameter Estimation

Interactive Notebooks

To generate the interactive notebooks, first install SciMLBenchmarks, instantiate the environment, and then run SciMLBenchmarks.open_notebooks(). This looks as follows:

# In the Julia REPL, press ] to enter Pkg mode for the first three commands:
]add SciMLBenchmarks#master
]activate SciMLBenchmarks
]instantiate
using SciMLBenchmarks
SciMLBenchmarks.open_notebooks()

The benchmarks will be generated at your pwd() in a folder called generated_notebooks.
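
One convenient way to browse them afterwards, assuming IJulia is installed in the active environment, is to point the Jupyter server at that folder:

using IJulia
# Launch Jupyter in the folder where open_notebooks() placed the notebooks
IJulia.notebook(dir = joinpath(pwd(), "generated_notebooks"))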

Note that when running the benchmarks, the packages are not automatically added. You will therefore need to add the packages manually or use the per-benchmark Project.toml/Manifest.toml files to instantiate the correct packages. This can be done by activating the folder of the benchmark. For example,

using Pkg
Pkg.activate(joinpath(pkgdir(SciMLBenchmarks),"benchmarks","NonStiffODE"))
Pkg.instantiate()

will add all of the packages required to run any benchmark in the NonStiffODE folder.

Contributing

All of the files are generated from the Weave.jl files in the benchmarks folder. The generation process runs automatically, so one does not necessarily need to test the Weave process locally. Instead, simply open a PR that adds or updates a file in the benchmarks folder, and the PR will generate the benchmark on demand. Its artifacts can then be inspected in Buildkite, as described below, before merging. Note that the build uses the Project.toml and Manifest.toml of the subfolder, so any changes to dependencies require that those files be updated as well.
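
For instance, a dependency change for a single benchmark folder could be recorded roughly as follows; the folder name is taken from the examples above, and SomeNewDependency is a hypothetical package standing in for whatever is actually added:

using Pkg, SciMLBenchmarks
# Activate the environment of the benchmark subfolder being edited
Pkg.activate(joinpath(pkgdir(SciMLBenchmarks), "benchmarks", "NonStiffODE"))
Pkg.add("SomeNewDependency")   # hypothetical package; updates Project.toml and Manifest.toml
Pkg.status()                   # check the recorded versions before committing the TOMLs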

Reporting Bugs and Issues

Report any bugs or issues at the SciMLBenchmarks repository.

Inspecting Benchmark Results

To see benchmark results before merging, click into the Buildkite build, open the Artifacts tab, and inspect the generated results.

Manually Generating Files

All of the files are generated from the Weave.jl files in the benchmarks folder. To run the generation process manually on a single file, for example:

]activate SciMLBenchmarks # Get all of the packages
using SciMLBenchmarks
SciMLBenchmarks.weave_file(joinpath(pkgdir(SciMLBenchmarks),"benchmarks","NonStiffODE"),"linear_wpd.jmd")

To generate all of the files in a folder, for example, run:

SciMLBenchmarks.weave_folder(joinpath(pkgdir(SciMLBenchmarks),"benchmarks","NonStiffODE"))

To generate all of the benchmark outputs in the repository, run:

SciMLBenchmarks.weave_all()

Each of the benchmarks displays the computer characteristics at the bottom of the output. Since performance-critical computations are normally performed on compute clusters, the official benchmarks use a workstation with an AMD EPYC 7502 32-Core Processor @ 2.50GHz to match the performance characteristics of a standard node in a high-performance computing (HPC) cluster or cloud computing setup.
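
For reference, the same kind of environment information can be reproduced in any Julia session roughly as follows (a sketch; the exact appendix format used by the benchmarks may differ):

using InteractiveUtils, Pkg
versioninfo()   # Julia version, OS, and CPU model (e.g. AMD EPYC 7502)
Pkg.status()    # versions of the packages in the active benchmark environment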