Why are some programming languages "faster" or "slower" than others?
Problem
I have noticed that some applications or algorithms built in one programming language, say C++/Rust, run faster or feel snappier than those built in, say, Java/Node.js, running on the same machine. I have a few questions about this:
- Why does this happen?
- What governs the "speed" of a programming language?
- Does this have anything to do with memory management?
I'd really appreciate if someone broke this down for me.
Solution
In programming language design and implementation, there are many choices that can affect performance. I'll mention only a few.
Every language ultimately has to be run by executing machine code. A "compiled" language such as C++ is parsed, decoded, and translated to machine code only once, at compile time. An "interpreted" language, if implemented in a direct way, is decoded at runtime, at every step, every time. That is, every time we run a statement, the interpreter has to check whether it is an if-then-else, or an assignment, etc., and act accordingly. This means that if we loop 100 times, we decode the same code 100 times, wasting time. Fortunately, interpreters often mitigate this through, e.g., a just-in-time compilation system. (More correctly, there is no such thing as a "compiled" or "interpreted" language: it is a property of the implementation, not of the language. Still, each language often has only one widespread implementation.)
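To make the decoding overhead concrete, here is a toy tree-walking interpreter in Python. The node format and all names are invented for illustration (no real interpreter looks like this): the point is that every node is re-inspected on every evaluation, including on each of the 100 loop iterations.

```python
# A toy tree-walking interpreter: each evaluation re-inspects ("decodes")
# every node, even inside a loop, which is the overhead described above.
def evaluate(node, env):
    kind = node[0]                     # decoded again on every single call
    if kind == "num":                  # ("num", value)
        return node[1]
    elif kind == "var":                # ("var", name)
        return env[node[1]]
    elif kind == "add":                # ("add", left, right)
        return evaluate(node[1], env) + evaluate(node[2], env)
    elif kind == "assign":             # ("assign", name, expr)
        env[node[1]] = evaluate(node[2], env)
    elif kind == "loop":               # ("loop", count, body)
        for _ in range(node[1]):
            evaluate(node[2], env)     # body re-decoded on each iteration
    else:
        raise ValueError(f"unknown node: {kind}")

env = {"x": 0}
# Equivalent to: for _ in range(100): x = x + 1
program = ("loop", 100, ("assign", "x", ("add", ("var", "x"), ("num", 1))))
evaluate(program, env)
print(env["x"])  # 100
```

A just-in-time compiler avoids exactly this repeated dispatch by translating the hot loop body into machine code once and reusing it.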
Different compilers/interpreters perform different optimizations.
If the language has automatic memory management, its implementation has to perform garbage collection. This has a runtime cost, but relieves the programmer from an error-prone task.
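As a rough illustration of that runtime cost, CPython (which combines reference counting with a cyclic collector) lets you observe the collector doing extra work to reclaim objects the programmer never had to free by hand:

```python
import gc

# Two objects that reference each other form a cycle that plain
# reference counting cannot reclaim; the cyclic collector must step in.
class Node:
    def __init__(self):
        self.partner = None

def make_cycle():
    a, b = Node(), Node()
    a.partner, b.partner = b, a   # a and b now keep each other alive

gc.disable()                      # pause automatic collection
make_cycle()                      # the two Nodes are now unreachable...
collected = gc.collect()          # ...but reclaiming them is explicit work
print(collected)                  # at least the two Node objects
gc.enable()
```

In C++ or Rust the programmer (or the ownership system) pays this cost at development time instead; neither choice is free.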
A language might be closer to the machine, allowing the expert programmer to micro-optimize everything and squeeze more performance out of the CPU. However, it is arguable whether this is actually beneficial in practice, since most programmers do not really micro-optimize, and a good higher-level language can often be optimized by the compiler better than what the average programmer would do by hand.
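A small illustration of that last point: a hand-written accumulation loop in Python is routinely beaten by the built-in sum(), whose hot path was written and tuned in C by the implementation's authors. Exact timings are machine-dependent; only the comparison matters.

```python
import timeit

# A hand-rolled "micro-optimized" loop...
def manual_sum(values):
    total = 0
    for v in values:
        total += v
    return total

data = list(range(100_000))

# ...versus the built-in, which the implementers optimized for us.
t_manual = timeit.timeit(lambda: manual_sum(data), number=50)
t_builtin = timeit.timeit(lambda: sum(data), number=50)
print(f"manual loop: {t_manual:.4f}s  built-in sum: {t_builtin:.4f}s")
```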
(However, sometimes being farther from the machine might have its benefits too! For instance, Haskell is extremely high level, but thanks to its design choices is able to feature very lightweight green threads.)
Static type checking can also help in optimization. In a dynamically typed, interpreted language, every time one computes x - y, the interpreter often has to check whether both x and y are numbers and (e.g.) raise an exception otherwise. This check can be skipped if types were already checked at compile time.
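In a dynamically typed language like Python, that per-operation check is directly observable (the subtract helper below is invented purely for illustration):

```python
# The types of x and y are only known at runtime, so every
# subtraction implies a runtime type check.
def subtract(x, y):
    return x - y

print(subtract(7, 3))      # both operands check out as numbers
try:
    subtract(7, "3")       # the runtime check fails here...
except TypeError as e:
    print(e)               # ...and an exception is raised instead
```

A statically typed compiler proves once, at compile time, that both operands are numbers, and emits a bare subtraction instruction.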
Some languages always report runtime errors in a sane way. If you write a[100] in Java where a has only 20 elements, you get a runtime exception. This requires a runtime check, but provides much nicer semantics to the programmer than in C, where the same access would cause undefined behavior, meaning that the program might crash, overwrite some random data in memory, or even do absolutely anything else (the ISO C standard poses no limits whatsoever).
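Python behaves like Java here: every index is checked at runtime, and an out-of-range access raises an exception instead of touching arbitrary memory.

```python
# Bounds-checked indexing: the cost is a comparison per access,
# the benefit is a clean, reportable error instead of undefined behavior.
a = [0] * 20
try:
    a[100]
except IndexError as e:
    print(e)   # list index out of range
```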
However, keep in mind that, when evaluating a language, performance is not everything; don't obsess over it. It is a common trap to micro-optimize everything and yet fail to spot that an inefficient algorithm or data structure is being used. As Knuth put it, "premature optimization is the root of all evil".
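For example, switching a membership test from a list to a set, an algorithmic change rather than a micro-optimization, can dwarf any low-level tuning (timings are machine-dependent; only the gap matters):

```python
import timeit

# Membership in a list is O(n); in a set it is O(1) on average.
haystack = list(range(100_000))
as_set = set(haystack)

t_list = timeit.timeit(lambda: 99_999 in haystack, number=200)
t_set = timeit.timeit(lambda: 99_999 in as_set, number=200)
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")
```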
Don't underestimate how hard it is to write a program correctly. Often, it can be better to choose a "slower" language with more human-friendly semantics. Further, if there are specific performance-critical parts, those can always be implemented in another language. Just as a reference, in the 2016 ICFP programming contest, these were the languages used by the winners:
1. Unagi (score 700327): Java, C++, C#, PHP, Haskell
2. 天羽々斬 (score 268752): C++, Ruby, Python, Haskell, Java, JavaScript
3. Cult of the Bound Variable (score 243456): C++, Standard ML, Python

None of them used a single language.
Context
StackExchange Computer Science Q#71979, answer score: 104