HiveBrain v1.2.0

What are flops and how are they benchmarked?

Submitted by: @import:stackexchange-cs

Problem

Apple has just proudly stated that their new Mac Pro will be able to deliver up to 7 teraflops of computing power. FLOPS stands for Floating Point Operations Per Second. How exactly is this benchmarked, though? Certain floating point operations are much heavier than others, so how exactly can a FLOP serve as a benchmark for computing power?

Solution

As far as I know, they give peak performance values. Given clock speed $f$ (in Hz) and the number of (shortest) floating point operations the hardware can execute per cycle $c$, the peak performance is essentially $f \cdot c$. (Equivalently, if $c$ were the number of cycles per operation, the peak would be $f / c$.)

Of course, modern machines execute multiple floating point operations in parallel, have multiple cores etc. A more accurate formula can be found on Wikipedia.
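To make the "more accurate formula" concrete, here is a minimal sketch of the usual peak calculation: cores times clock times SIMD width times FLOPs per lane per cycle. The hardware numbers in the example are illustrative assumptions, not the specs of any particular machine.

```python
def peak_flops(cores, clock_hz, simd_lanes, flops_per_lane_per_cycle):
    """Theoretical peak: every core issues its widest SIMD floating point
    instruction on every cycle, with no stalls of any kind."""
    return cores * clock_hz * simd_lanes * flops_per_lane_per_cycle

# Hypothetical chip: 8 cores at 3 GHz, 8-wide single-precision SIMD,
# 2 FLOPs per lane per cycle (fused multiply-add counts as two):
peak = peak_flops(8, 3e9, 8, 2)
print(f"{peak / 1e9:.0f} GFLOPS peak")  # 384 GFLOPS for these numbers
```

Note that fused multiply-add is conventionally counted as two operations, which is one reason vendor peak figures look so large.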

This measure ignores all of the pain any real program encounters, e.g. cache misses and pipeline stalls. That is, any (reasonable) benchmark will not reach this number. But as an (unattainable) ultimate figure it can be useful to compare machines and even programs (how close to peak performance do they get?) -- take such statements with a grain of salt, always.
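As a rough illustration of how far real code can fall short of peak, here is a toy micro-benchmark that times a multiply-add loop and reports achieved FLOP/s. A pure-Python loop is dominated by interpreter overhead rather than cache misses or stalls, so it is an extreme case, but it makes the same point: the measured figure sits orders of magnitude below any vendor peak number.

```python
import time

def measured_flops(n=10_000_000):
    """Time n multiply-add iterations and return achieved FLOP/s.
    Interpreter overhead dominates here, so expect a tiny fraction
    of the hardware's theoretical peak."""
    x = 0.0
    start = time.perf_counter()
    for _ in range(n):
        x = x * 1.0000001 + 1e-9  # one multiply + one add per iteration
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed        # 2 FLOPs per iteration

print(f"{measured_flops() / 1e6:.0f} MFLOPS achieved")
```

Serious benchmarks (LINPACK, STREAM, etc.) use compiled kernels chosen to get as close to peak as the memory system allows, which is exactly why "how close to peak does it get?" is a meaningful question.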

Context

StackExchange Computer Science Q#12606, answer score: 3

Revisions (0)

No revisions yet.