
C++ benchmark v2


Problem

The purpose of the code is to simplify benchmarking of arbitrary functions. I want to benchmark things to get a "feeling" for performance. Although the framework leans toward synthetic benchmarks, I think it is a good starting point.

After trying to write a benchmarking framework a few times (1, 2), I've decided to nail down the model that I would like to use. Full code with dependencies can be found on this commit.

benchmark_v2:

```
#ifndef AREA51_BENCHMARK_V2_HPP
#define AREA51_BENCHMARK_V2_HPP

#include "algorithm.hpp"
#include "transform_iterator.hpp"
#include "utilities.hpp"

#include <algorithm>
#include <array>
#include <chrono>
#include <cstddef>
#include <tuple>
#include <type_traits>
#include <utility>
#include <vector>

namespace shino
{
    template <typename Generator, typename ... Callables>
    class benchmark
    {
        Generator gen;
        std::tuple<Callables...> callables;
        std::vector<std::array<std::chrono::duration<double>,
                               sizeof...(Callables)>> timings;
        std::vector<typename Generator::input_type> inputs;
    public:
        using input_type = typename Generator::input_type;
        static constexpr std::size_t function_count = sizeof...(Callables);

        template <typename Gen,
                  typename = std::enable_if_t<
                      std::is_constructible_v<Generator, Gen&&>>,
                  typename ... ArgTypes>
        benchmark(Gen&& generator, ArgTypes&&... args) :
                gen(std::forward<Gen>(generator)),
                callables(std::forward_as_tuple(std::forward<ArgTypes>(args)...))
        {}

        template <typename Gen, typename Tuple,
                  typename = std::enable_if_t<
                      std::is_constructible_v<Generator, Gen&&>>>
        benchmark(Gen&& generator, Tuple&& tup) :
                gen(std::forward<Gen>(generator)),
                callables(std::forward<Tuple>(tup))
        {}

        template <typename InputType,
                  typename = std::enable_if_t<
                      std::is_convertible_v<InputType, input_type>>>
        void time(InputType&& input,
                  std::size_t runcount)
        {
            inputs.push_back(input);
            time_all(std::make_index_sequence<function_count>{},
                     std::forward<InputType>(input), runcount);
        }

        template <typename OutputIterator>
        void get_as(OutputIterator first)
        {
            auto converter = [](const auto& readings)
            {
                std::array<double, function_count> converted_readings;
                std::transform(readings.begin(), readings.end(),
                               converted_readings.begin(),
                               [](const auto& duration)
                               {
                                   return duration.count();
                               });
                return converted_readings;
            };
            std::transform(timings.begin(), timings.end(), first, converter);
        }

        // ... (listing truncated; the remaining members, including the
        // timing internals, are in the linked commit)
    };
}

#endif
```
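To make the intended model concrete, here is a hedged, self-contained sketch of the core idea: a generator produces inputs, and a tuple of callables is timed on each input. `mini_benchmark` and `vector_generator` are illustrative names of my own, not part of the library above:

```cpp
#include <array>
#include <chrono>
#include <cstddef>
#include <numeric>
#include <tuple>
#include <utility>
#include <vector>

// Sketch of the benchmarking model: one row of readings per input,
// one column per candidate callable.
template <typename Generator, typename... Callables>
class mini_benchmark
{
    Generator gen;
    std::tuple<Callables...> callables;
    std::vector<std::array<std::chrono::duration<double>,
                           sizeof...(Callables)>> timings;
public:
    using input_type = typename Generator::input_type;

    mini_benchmark(Generator generator, Callables... cs) :
        gen(std::move(generator)), callables(std::move(cs)...) {}

    void time(const input_type& input, std::size_t runcount)
    {
        std::array<std::chrono::duration<double>, sizeof...(Callables)> row{};
        time_all(row, input, runcount,
                 std::make_index_sequence<sizeof...(Callables)>{});
        timings.push_back(row);
    }

    const auto& readings() const { return timings; }

private:
    template <std::size_t... Is>
    void time_all(std::array<std::chrono::duration<double>,
                             sizeof...(Callables)>& row,
                  const input_type& input, std::size_t runcount,
                  std::index_sequence<Is...>)
    {
        // Time every callable on the same input, filling one row.
        ((row[Is] = time_one(std::get<Is>(callables), input, runcount)), ...);
    }

    template <typename Callable>
    std::chrono::duration<double>
    time_one(Callable& callable, const input_type& input, std::size_t runcount)
    {
        auto total = std::chrono::duration<double>::zero();
        for (std::size_t i = 0; i != runcount; ++i)
        {
            auto copy = input; // fresh copy so runs can't contaminate each other
            auto start = std::chrono::steady_clock::now();
            callable(copy);
            total += std::chrono::steady_clock::now() - start;
        }
        return total / static_cast<double>(runcount); // mean per run
    }
};

// A trivial generator: hands back an iota-filled vector of the requested size.
struct vector_generator
{
    using input_type = std::vector<int>;
    input_type operator()(std::size_t size) const
    {
        input_type v(size);
        std::iota(v.begin(), v.end(), 0);
        return v;
    }
};
```

The real benchmark class layers input recording, reading conversion (get_as), and SFINAE-guarded constructors on top of this core loop.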

Solution

After using it in some setups, I believe I found most of the strengths and weaknesses.

Strengths:

- Great model. Whenever I couldn't use it, the situation turned out not to make sense from a correct-benchmarking point of view anyway (e.g. it wasn't an apples-to-apples comparison).

- Nice decoupling of output format from presentation format. Programmers can use modules such as matplotlib.pyplot to draw beautiful plots. Some more decoupling wouldn't hurt, but for casual use it is fine.

- Flexibility. I would say the public interface can be adapted to almost any situation, although some edge cases might require extreme obfuscation.

- Easy to use correctly, hard to use incorrectly.

Weaknesses:

- Too generic. Even simple usage requires a lot of code. The best cure for this illness would be writing some default generators.

- Lacks a lot of functionality, though that might be due to immaturity (of both me and the library).

- The benchmarker doesn't try to save results by any means possible. Say a benchmark has been running for a few hours, but the programmer forgot to create the output folder, or the path is heavily restricted: the program will just crash and the results will be lost. The benchmarker has to give its life to preserve the results in any way possible. It would also be great if the user could stop a benchmark that takes too long, and be prompted with some on-screen text about what to do next.

- God object. Or at least it is on its way to becoming one.
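A default generator along these lines could remove most of that boilerplate. A minimal sketch, assuming the Generator concept only needs an input_type alias plus a call operator (the class name and constructor parameters are my invention):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// A reusable default generator: produces vectors of uniformly random ints
// of a requested size. A fixed seed keeps runs reproducible by default.
class random_vector_generator
{
    std::mt19937 engine;
    std::uniform_int_distribution<int> dist;
public:
    using input_type = std::vector<int>;

    random_vector_generator(int low, int high, unsigned seed = 42) :
        engine(seed), dist(low, high) {}

    input_type operator()(std::size_t size)
    {
        input_type v(size);
        for (int& x : v)
            x = dist(engine);
        return v;
    }
};
```

A small family of these (random vectors, sorted vectors, random strings) would cover the common cases and leave the fully generic path for the unusual ones.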

The library has a chance to become a Google Benchmark killer, but it is a long road to get there. Also, timings_session should stay, to support single-shot benchmarks.

Context

StackExchange Code Review Q#158074, answer score: 3
