HiveBrain v1.2.0
pattern · java · Moderate

JMH Benchmark Metrics Evaluation

Submitted by: stackexchange-codereview
Tags: metrics · evaluation · benchmark · jmh

Problem

I'm currently attempting to learn some basic JVM optimization techniques using JMH.

I created the following benchmark to compare mid-index insertion performance between ArrayList and LinkedList.

How exactly do I interpret score vs. error in the final results?

The documentation states:


Your benchmarks should be peer-reviewed

I'm not quite sure how to validate the results, so I'm asking for a review of the following implementation to determine whether my workflow is correct. Any advice would be appreciated as I don't have much experience in performance evaluation techniques.

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ListBench {

    static List<Integer> arrayList = new ArrayList<>();

    static List<Integer> linkedList = new LinkedList<>();

    private static int COUNT = 100;

    static {
        arrayList.add(0);
        linkedList.add(0);
    }

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public static void arrayListBench() {
        for (int i = 0; i < COUNT; i++) {
            arrayList.add(mid(arrayList.size()), i + 1);
        }
    }

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public static void linkedListBench() {
        for (int i = 0; i < COUNT; i++) {
            linkedList.add(mid(linkedList.size()), i + 1);
        }
    }

    public static int mid(int n) {
        return n / 2;
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
            .include(ListBench.class.getSimpleName())
            .warmupIterations(10)
            .measurementIterations(10)
            .forks(1)
            .build();

        new Runner(opt).run();
    }

}


Invocation
java -jar target/benchmarks.jar ListBench -wi 10 -i 10 -f 1

Results

# JMH 1.10-SNAPSHOT (released today)
# VM invoker: c:\Java\jdk_8\jre\bin\java.exe
# VM options:
# Warmup: 10 iterations, 1 s each
# Measurement: 10 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations

Solution

Important things:

- Have you read the JMH Samples?

- Your benchmark does not have a steady state. You can actually see that in the diminishing performance from iteration to iteration, and in the large score error at the end. Measuring non-steady-state benchmarks is a tricky business.
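A plain-Java sketch of the steady-state problem in the benchmark above (no JMH needed to see it): the lists are never reset, so every invocation inserts into a larger list than the last, and the measured workload keeps changing. The class and method names here are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class NoSteadyState {

    static final int COUNT = 100;

    // Same work as one invocation of arrayListBench: COUNT mid-index inserts.
    static void invoke(List<Integer> list) {
        for (int i = 0; i < COUNT; i++) {
            list.add(list.size() / 2, i + 1);
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        list.add(0);
        for (int iteration = 1; iteration <= 3; iteration++) {
            invoke(list);
            // Each "iteration" works on a strictly larger list: 101, 201, 301
            // elements, so the per-insert cost drifts instead of stabilizing.
            System.out.println("after iteration " + iteration + ": size = " + list.size());
        }
    }
}
```

Resetting the list between iterations (in JMH, via a @Setup method) is what gives each measurement the same workload.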

- Looping in benchmarks is generally discouraged, because loop unrolling and subsequent code transformations may affect the benchmark in unpredictable ways. See JMHSample_11_Loops and JMHSample_34_SafeLooping.
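One loop-free shape this benchmark could take, in the spirit of JMHSample_34_SafeLooping, is to measure a single insert per invocation and return a value so the JIT cannot treat the body as dead code. This is a sketch, not the sample's exact code; ListState is a hypothetical holder and it requires the JMH dependency on the classpath.

```java
// ListState is a hypothetical @State(Scope.Thread) class that recreates
// the list in a @Setup(Level.Iteration) method, so each iteration starts
// from the same workload.
@Benchmark
@BenchmarkMode(Mode.Throughput)
public Integer arrayListMidInsert(ListState s) {
    s.list.add(s.list.size() / 2, 42);
    return s.list.get(0); // returned value defeats dead-code elimination
}
```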

- A single fork is almost never enough. Run-to-run variance is a very frequent offender in performance results.

- Last but not least, you have to analyze benchmarks, not just run them. Use profilers to understand what is going on, tweak the experimental setup to see whether it reacts to changes the way your mental model predicts, and so on.

Stylistic things (or, things that make JMH tests more idiomatic, and therefore quickly understandable):

- statics are not handy for storing state, especially if the next thing you try is testing with multiple threads. Use @Setup methods on a @State object instead. See JMHSample_05_StateFixtures.

- static constants are bad substitutes for @Param. See JMHSample_27_Params.
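A sketch of how the static fields and the COUNT constant could be folded into a state class (assuming the JMH dependency; the parameter values are illustrative, not from the original question):

```java
@State(Scope.Thread)
public static class BenchState {

    // @Param lets JMH run the benchmark once per value, replacing the
    // hard-coded COUNT = 100.
    @Param({"100", "1000", "10000"})
    int count;

    List<Integer> list;

    @Setup(Level.Iteration)
    public void setUp() {
        // Fresh list per iteration: replaces the static fields and the
        // static initializer, and restores a steady state.
        list = new ArrayList<>();
        list.add(0);
    }
}
```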

- You don't need a main() method to run benchmarks. In fact, running the self-contained uberjar is more reliable; the JMH page spells out the caveats:

Running benchmarks from the IDE is generally not recommended due to the generally uncontrolled environment in which the benchmarks run.

- You also don't need to build JMH yourself to run benchmarks (I see you are using 1.10-SNAPSHOT). All recent artifacts are available from Maven Central, and the JMH page has a one-liner to generate a benchmark project from the archetype.
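The archetype one-liner from the JMH page looks like this (the groupId, artifactId, and version below are placeholders to adjust for your own project):

```shell
mvn archetype:generate \
  -DinteractiveMode=false \
  -DarchetypeGroupId=org.openjdk.jmh \
  -DarchetypeArtifactId=jmh-java-benchmark-archetype \
  -DgroupId=org.sample \
  -DartifactId=test \
  -Dversion=1.0

# Then build the uberjar and run the benchmarks from it:
cd test
mvn clean install
java -jar target/benchmarks.jar
```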

Context

StackExchange Code Review Q#90886, answer score: 10

Revisions (0)

No revisions yet.