A suite of benchmarks for CPU and GPU performance of the most popular high-performance libraries for Python :rocket:


HPC benchmarks for Python

This is a suite of benchmarks to test the sequential CPU and GPU performance of various computational backends with Python frontends.

Specifically, we want to test which high-performance backend is best for geophysical (finite-difference based) simulations.


FAQ

Why?

The scientific Python ecosystem is thriving, but high-performance computing in Python isn't really a thing yet. We try to change this with our pure Python ocean simulator Veros, but which backend should we use for computations?

Tremendous amounts of time and resources go into the development of Python frontends to high-performance backends, but those are usually tailored towards deep learning. We wanted to see whether we could profit from these advances by (ab-)using the same libraries for geophysical modelling.

Why do the benchmarks look so weird?

These are more or less verbatim copies from Veros (i.e., actual parts of a physical model). Most earth system and climate model components are based on finite-difference schemes to compute derivatives. This can be represented in vectorized form through index shifts of arrays (such as 0.5 * (arr[1:] + arr[:-1]), which averages arr between neighbouring grid points; a first-order derivative looks the same with a difference instead of a sum). The most common index range is [2:-2], which represents the full domain (the two outermost grid cells are overlap / "ghost cells" that allow us to shift the array across the boundary).
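For readers unfamiliar with this pattern, here is a tiny NumPy sketch of what such index-shift expressions look like (the array, grid spacing, and variable names are made up for illustration; this is not code from the benchmarks):

import numpy as np

# Toy 1D field with two ghost cells on each side (illustrative only)
arr = np.linspace(0.0, 1.0, 20)
dx = 1.0

# Index shifts express finite-difference operations without explicit loops
arr_mid = 0.5 * (arr[1:] + arr[:-1])   # average onto the cell interfaces
darr_dx = (arr[1:] - arr[:-1]) / dx    # first-order derivative at the interfaces

# The interior of the domain is addressed as [2:-2]; the ghost cells stay untouched
print(arr[2:-2].shape, arr_mid.shape, darr_dx.shape)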

Now, maths is difficult, and numerics are weird. When many different physical quantities (defined on different grids) interact, things get messy very fast.

Why only test sequential CPU performance?

Two reasons:

  • I was curious to see how good the compilers are without being able to fall back to thread parallelism.
  • In many physical models, it is pretty straightforward to parallelize the model "by hand" via MPI. Therefore, we are not really dependent on good parallel performance out of the box.

Which backends are currently supported?

Currently: NumPy, CuPy, JAX, Aesara, Numba, PyTorch, and TensorFlow (not every backend is available for every benchmark).

What is included in the measurements?

Pure time spent number crunching. Preparing the inputs, copying data to and from the GPU, compilation time, the time it takes to check results, etc. are excluded. This is based on the assumption that these things are only done a few times per simulation (i.e., that their cost is amortized during long-running simulations).
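To make the distinction concrete, here is a minimal sketch of the kind of timing loop this implies; it is not the actual run.py implementation, and all names are illustrative:

import time

def time_kernel(kernel, inputs, repetitions, burnin=1):
    # Only the kernel call itself is timed; input preparation, device
    # transfers, and result checks happen outside of this loop.
    timings = []
    for i in range(burnin + repetitions):
        start = time.perf_counter()
        kernel(*inputs)
        elapsed = time.perf_counter() - start
        if i >= burnin:  # discard warm-up / compilation runs
            timings.append(elapsed)
    return timings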

How does this compare to a low-level implementation?

As a rule of thumb (from our experience with Veros), the performance of a Fortran implementation is very close to that of the Numba backend, or ~3 times faster than NumPy.

Environment setup

For CPU:

$ conda env create -f environment-cpu.yml
$ conda activate pyhpc-bench-cpu

For GPU:

$ conda env create -f environment-gpu.yml
$ conda activate pyhpc-bench-gpu

If you prefer to install things by hand, just have a look at the environment files to see what you need. You don't need to install all backends; if a module is unavailable, it is skipped automatically.

Usage

Your entrypoint is the script run.py:

$ python run.py --help
Usage: run.py [OPTIONS] BENCHMARK

  HPC benchmarks for Python

  Usage:

      $ python run.py benchmarks/<BENCHMARK_FOLDER>

  Examples:

      $ taskset -c 0 python run.py benchmarks/equation_of_state

      $ python run.py benchmarks/equation_of_state -b numpy -b jax --device
      gpu

  More information:

      https://github.com/dionhaefner/pyhpc-benchmarks

Options:
  -s, --size INTEGER              Run benchmark for this array size
                                  (repeatable)  [default: 4096, 16384, 65536,
                                  262144, 1048576, 4194304]
  -b, --backend [numpy|cupy|jax|aesara|numba|pytorch|tensorflow]
                                  Run benchmark with this backend (repeatable)
                                  [default: run all backends]
  -r, --repetitions INTEGER       Fixed number of iterations to run for each
                                  size and backend [default: auto-detect]
  --burnin INTEGER                Number of initial iterations that are
                                  disregarded for final statistics  [default:
                                  1]
  --device [cpu|gpu|tpu]          Run benchmarks on given device where
                                  supported by the backend  [default: cpu]
  --help                          Show this message and exit.

Benchmarks are run for all combinations of the chosen sizes (-s) and backends (-b), in random order.

CPU

Some backends refuse to be confined to a single thread, so I recommend you wrap your benchmarks in taskset to set processor affinity to a single core (only works on Linux):

$ conda activate pyhpc-bench-cpu
$ taskset -c 0 python run.py benchmarks/<benchmark_name>

GPU

Some backends use all available GPUs by default, some don't. If you have multiple GPUs, you can set the one to be used through CUDA_VISIBLE_DEVICES to keep things fair.

Some backends are greedy with allocating memory, so on GPU you should only run one backend at a time (you can add NumPy as a reference):

$ conda activate pyhpc-bench-gpu
$ export CUDA_VISIBLE_DEVICES="0"
$ for backend in jax cupy pytorch tensorflow; do
...    python run.py benchmarks/<benchmark_name> --device gpu -b $backend -b numpy -s 10_000_000
...    done

Example results

Summary

Equation of state

Isoneutral mixing

Turbulent kinetic energy

Full reports

Conclusion

Lessons I learned by assembling these benchmarks (your mileage may vary):

  • The performance of JAX is very competitive, both on GPU and CPU. It is consistently among the top implementations on both platforms.
  • Pytorch performs very well on GPU for large problems (slightly better than JAX), but its CPU performance is not great for tasks with many slicing operations.
  • Numba is a great choice on CPU if you don't mind writing explicit for loops (which can be more readable than a vectorized implementation), being slightly faster than JAX with little effort.
  • JAX performance on GPU seems to be quite hardware dependent. JAX performs significantly better (relatively speaking) on a Tesla P100 than on a Tesla K80.
  • If you have embarrassingly parallel workloads, speedups of > 1000x are easy to achieve on high-end GPUs.
  • TPUs are catching up to GPUs. We can now get similar performance to a high-end GPU on these workloads.
  • Tensorflow is not great for applications like ours, since it lacks tools to apply partial updates to tensors (such as tensor[2:-2] = 0.); see the first sketch after this list.
  • If you use Tensorflow on CPU, make sure to use XLA (experimental_compile) for tremendous speedups; see the second sketch after this list.
  • CuPy is nice! Often you don't need to change anything in your NumPy code to have it run on GPU (with decent, but not outstanding performance).
  • Reaching Fortran performance on CPU for non-trivial tasks is hard :)
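The partial-update point is easiest to see side by side. Here is an illustrative comparison of the NumPy idiom the benchmarks rely on and one possible TensorFlow workaround (a sketch, not taken from the benchmark code):

import numpy as np
import tensorflow as tf

arr = np.zeros((8, 8))
arr[2:-2, 2:-2] = 1.0                  # plain slice assignment in NumPy

t = tf.zeros((8, 8))
# TensorFlow tensors are immutable, so the same update needs a mask or scatter,
# which rebuilds the full tensor instead of modifying it in place:
mask = np.zeros((8, 8), dtype=bool)
mask[2:-2, 2:-2] = True
t = tf.where(mask, tf.ones_like(t), t)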
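For the XLA point, switching on compilation is a one-line change to a TensorFlow function. The kernel below is a stand-in, not a benchmark kernel; note that newer TensorFlow releases spell the flag jit_compile instead of experimental_compile:

import tensorflow as tf

@tf.function(experimental_compile=True)  # jit_compile=True on newer TF versions
def kernel(arr):
    # stand-in for an actual benchmark kernel
    return 0.5 * (arr[1:] + arr[:-1])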

Contributing

Community contributions are encouraged! Whether you want to donate another benchmark, share your experience, optimize an implementation, or suggest another backend, feel free to ask or open a PR.

Adding a new backend

Adding a new backend is easy!

Let's assume that you want to add support for a library called speedygonzales. All you need to do is this:

  • Implement a benchmark to use your library, e.g. benchmarks/equation_of_state/eos_speedygonzales.py.

  • Register the benchmark in the respective __init__.py file (benchmarks/equation_of_state/__init__.py), by adding "speedygonzales" to its __implementations__ tuple.

  • Register the backend by adding its setup function to the __backends__ dict in backends.py (a sketch follows this list).

    A setup function is called before every call to your benchmark and can be used for custom setup and teardown. In the simplest case, it is just

    def setup_speedygonzales(device='cpu'):
        # code to run before benchmark
        yield
        # code to run after benchmark

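For the registration step, here is a sketch of what the new entry might look like; the exact layout of the __backends__ dict in backends.py is assumed, not copied from the repository:

# backends.py (sketch) -- setup_speedygonzales is the function defined above
__backends__ = {
    # ... existing backends ...
    "speedygonzales": setup_speedygonzales,
}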
Then, you can run the benchmark with your new backend:

$ python run.py benchmarks/equation_of_state -b speedygonzales
Comments
  • fastmath

    fastmath

    Hi @dionhaefner, great comparisons, thanks for that! Out of interest: did you ever try to run numba with fastmath=True? Does it make any difference, and if so, how much?

    opened by prisae 9
  • turbulent_kinetic_energy returns inconsistent results

    turbulent_kinetic_energy returns inconsistent results

    I am working on https://github.com/dionhaefner/pyhpc-benchmarks/pull/14. The following command produces inconsistent results:

    $ python run.py -r 2 -s 1048576 --device cpu -b pytorch benchmarks/turbulent_kinetic_energy/
    
    Using pytorch version 1.13.0.dev20220617+cu113
    Running 3 benchmarks...  [------------------------------------]    0%Error: inconsistent results for size 1048576
    Error: inconsistent results for size 1048576
    Error: inconsistent results for size 1048576
    Running 3 benchmarks...  [####################################]  100%
    
    benchmarks.turbulent_kinetic_energy
    ===================================
    Running on CPU
    
    size          backend     calls     mean      stdev     min       25%       median    75%       max       Δ
    ------------------------------------------------------------------------------------------------------------------
       1,048,576  pytorch            2     0.573     0.028     0.544     0.559     0.573     0.587     0.601     1.000
    
    (time in wall seconds, less is better)
    

    Looks like two consecutive runs will generate inconsistent results for turbulent_kinetic_energy. I guess the root cause is this line: https://github.com/dionhaefner/pyhpc-benchmarks/blob/master/benchmarks/turbulent_kinetic_energy/tke_pytorch.py#L264

    There could be non-deterministic numeric results when running mask = tke[2:-2, 2:-2, -1, taup1] < 0.0

    opened by xuzhao9 5
  • deprecated jax ops: index_update

    deprecated jax ops: index_update

    the call for backend in jax; do python run.py benchmarks/isoneutral_mixing/ --device gpu -b $backend -b numpy; done yields

    dTdz = jax.ops.index_update(
    AttributeError: module 'jax.ops' has no attribute 'index_update'
    

    and indeed index_update is no longer a thing: https://jax.readthedocs.io/en/latest/jax.ops.html

    opened by ilemhadri 1
  • DRAFT: Add Transonic + {Pythran, Cython}

    DRAFT: Add Transonic + {Pythran, Cython}

    Fixes #9

    Notes:

    1. Calling the setup function multiple times in a benchmark should be avoided
    2. Equation of state benchmark was easy to implement
    3. Isoneutral benchmark has some issues -- does not compile yet, despite workaround in ba03d48
    4. TODO: Turbulent kinetic energy benchmark
    opened by ashwinvis 2
  • Compare with TACO Python binding

    Compare with TACO Python binding

    The Tensor Algebra Compiler (https://github.com/tensor-compiler/taco) seems to be good at sparse/dense linear algebra and has Python frontend: http://tensor-compiler.org/docs/pycomputations/index.html

    contributions-welcome 
    opened by learning-chip 1
  • Compare with an MLIR-based stencil DSL

    Compare with an MLIR-based stencil DSL

    This project https://github.com/spcl/open-earth-compiler/ provides a DSL frontend for stencil/PDE programs and relies on MLIR & LLVM to run on NVIDIA and AMD GPUs. It is not a Python frontend, but it can be called from Python, I think (see https://arxiv.org/abs/2005.13014).

    contributions-welcome 
    opened by learning-chip 1
  • Compare with DaCe framework?

    Compare with DaCe framework?

    DaCe (https://github.com/spcl/dace) is a parallel computing framework that also supports a NumPy frontend, similar to JAX and Numba. It runs on CPU/GPU/FPGA. It would be interesting to add it for comparison!

    contributions-welcome 
    opened by learning-chip 4
Releases (v3.0)
  • v3.0 (Oct 28, 2021)

    • Theano and Bohrium are dead 💀🦴
    • Aesara replaces Theano on CPU
    • New Pytorch implementation for TKE benchmark
    • Updates of all library versions and a complete re-run of reference results 📈
  • v2.1 (Oct 5, 2021)

  • v2.0 (Jul 22, 2020)

Owner
Dion Häfner. I do science with Python.