
Benchmarks

If you just want to see the benchmark results, go straight to the table here.

Purpose

This is a set of C++ benchmarks that compares “raw/C-ish” or old-style C++ implementations of some algorithms against “library-based, modern C++” implementations, measuring their execution time.

For every benchmark, two implementations are introduced:

  • raw implementation.
  • modern C++ implementation.

The goal is to put them head to head and see how they perform against each other, on a per-compiler basis.
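
To make the comparison concrete, here is a small sketch of what such a pair might look like. This is my own illustration, not a benchmark taken from this repository:

#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// "Raw" / C-ish version: an explicit index loop.
double sum_squares_raw(const std::vector<double>& v) {
    double total = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i)
        total += v[i] * v[i];
    return total;
}

// "Modern C++" version: the same computation expressed with a standard algorithm.
double sum_squares_modern(const std::vector<double>& v) {
    return std::transform_reduce(v.begin(), v.end(), 0.0,
                                 std::plus<>{},
                                 [](double x) { return x * x; });
}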

Plots are generated that, for each compiler, show the two versions side by side.

I am particularly interested in measuring the abstraction penalty incurred by the modern C++ approach compared with the plain C-ish one when compiling with optimizations enabled, since one of the goals of C++ is the zero-overhead principle.

My first experiment makes use of Eric Niebler’s ranges library. A proposal based on this work has been put forward for inclusion in the C++ standard.
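
As a rough illustration of the style being measured, here is a sketch of mine written with the C++20 standard <ranges> facilities that grew out of that proposal (the repository itself builds against Niebler’s library):

#include <ranges>
#include <vector>

// Lazily filter and transform a sequence, then consume it.
double sum_even_squares(const std::vector<int>& v) {
    auto pipeline = v
        | std::views::filter([](int x) { return x % 2 == 0; })
        | std::views::transform([](int x) { return double(x) * x; });
    double total = 0.0;
    for (double x : pipeline)  // the views are lazy; the work happens here
        total += x;
    return total;
}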

Benchmark style and guidelines

The scope of this benchmark set is very targeted: I want to show how typical, older-style C-ish or old-style C++ code performs against idiomatic modern C++ code.

I want to keep the benchmarks focused on older vs. newer styles. One benchmark should represent an older way of doing something, and its counterpart should represent the supposedly better (safer or more idiomatic) modern way of doing the same thing.

It is considered cheating to write unconventional, deeply hand-tuned code just to make one benchmark beat the other. For example, using carefully crafted SSE intrinsics is not acceptable. Using `std::myalgo(my_policy, beg, end)` is not cheating, because it is easy to write even if internally it could use SSE or OpenMP.
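
For instance (my own illustration of the rule, not code from the benchmark set), a one-liner like the following would be acceptable, even though the standard library is free to parallelize or vectorize it internally:

#include <execution>
#include <numeric>
#include <vector>

// Acceptable: trivial to write, but may use SIMD/threads under the hood.
double sum(const std::vector<double>& v) {
    return std::reduce(std::execution::par_unseq, v.begin(), v.end(), 0.0);
}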

Contributions are welcome.

Suggestions and ideas for new benchmarks are welcome as well.

I reserve the right to accept or reject benchmarks submitted to the set, with the hope of keeping it focused. :)

Compile and run the benchmarks

So you want to run the benchmarks yourself on your own computer…

Prerequisites: git, Meson, Ninja, one or more supported C++ compilers, and gnuplot (for the plots).

git clone --recursive https://github.com/germandiagogomez/the-cpp-abstraction-penalty.git
cd the-cpp-abstraction-penalty

# Configure the superproject that will run the benchmarks suite
meson project-benchmarks-suite build-benchmarks-suite

# Run the benchmarks
ninja -C build-benchmarks-suite run_benchmarks_suite

# Generate the plots in directory build-all/plots (requires gnuplot)
ninja -C build-benchmarks-suite generate_plots

This will do the following:

  1. Build the benchmark binaries for each of your compilers.
  2. Run the benchmark binaries.
  3. For each benchmark, write a PNG file into the build-all/plots directory that you can open to see the chart.

How to contribute a new benchmark

TODO

Getting your benchmark to work

TODO

Benchmark results

Compiler files       | Configuration details | Benchmark results
gcc, clang           | gcc, clang            | Results
msvc2019, mingw-w64  | msvc2019, mingw-w64   | Results