What questions can denotational semantics answer that operational semantics can't?
Problem
I am familiar with operational semantics (both small-step and big-step) for defining programming languages. I'm interested in learning denotational semantics as well, but I'm not sure if it will be worth the effort. Will I just be learning the same material from a different point of view, or are there insights I can only gain from understanding denotational semantics?
Solution
There is no real agreement on what characterises denotational semantics (see also this article), except that it must be compositional. That means that if $\newcommand{\SEMB}[1]{\lbrack\!\lbrack #1 \rbrack\!\rbrack} \SEMB{\cdot}$ is the semantic function, mapping programs to their meaning, something like the following must be the case for all $n$-ary program constructors $f$ and all programs $M_1$, ..., $M_n$ (implicitly assuming well-typedness):
$$
\SEMB{f(M_1, ..., M_n)} = trans(f) (\SEMB{M_1}, ..., \SEMB{M_n})
$$
Here $trans(f)$ is the constructor corresponding to $f$ in the semantic domain.
Compositionality is similar to the concept of homomorphism in algebra.
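To make the compositionality equation concrete, here is a toy sketch (the names `Lit`, `Add`, `Mul`, and `sem` are mine, not from the answer): a denotational semantics for arithmetic expressions, where the meaning of a compound expression is built only from the meanings of its parts. The semantic operators `+` and `*` play the role of $trans(f)$.

```python
from dataclasses import dataclass

# Object-language syntax: three program constructors.
@dataclass
class Lit:
    n: int

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Mul:
    left: object
    right: object

def sem(e):
    """Semantic function [[.]]: maps syntax to its meaning (an integer).
    Compositional: sem(Add(e1, e2)) depends only on sem(e1) and sem(e2)."""
    if isinstance(e, Lit):
        return e.n
    if isinstance(e, Add):
        return sem(e.left) + sem(e.right)   # trans(Add) = integer addition
    if isinstance(e, Mul):
        return sem(e.left) * sem(e.right)   # trans(Mul) = integer multiplication
    raise TypeError(f"unknown constructor: {e!r}")

print(sem(Mul(Add(Lit(2), Lit(3)), Lit(4))))  # prints 20
```

Note how `sem` never inspects the subterms themselves, only their meanings; that is exactly the homomorphism property, whereas a small-step relation may rewrite a term into a shape that is not a subterm of the original.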
Operational semantics is not compositional in this sense. Historically, denotational semantics was developed partly because operational semantics wasn't compositional. Following D. Scott's breakthrough order-theoretic denotational semantics of the $\lambda$-calculus, most denotational semantics used to be order-theoretic. I imagine that -- apart from pure intellectual interest -- denotational semantics was mostly invented because at the time (1960s):
- It used to be difficult to reason about operational semantics.
- It used to be difficult to give axiomatic semantics to non-trivial languages.
Part of the problem was that the notion of equality of programs was not as well-understood as it is now. I'd argue that both problems have been ameliorated to a substantial degree: (1) for example by bisimulation-based techniques coming from process theory (which can be seen as a specific form of operational semantics), or e.g. Pitts's work on operational semantics and program equivalence; and (2) by the development of e.g. separation logic, or Hoare logics derived as typed versions of Hennessy-Milner logics via programming-language embeddings in typed π-calculi. Note that program logics (= axiomatic semantics) are compositional, too.
Another way of looking at denotational semantics is that there are many programming languages and they all look kind of similar, so maybe we can find a simple yet universal meta-language, and map all programming languages in a compositional manner to that meta-language. In the 1960s, it was thought that some typed $\lambda$-calculus was that meta-language.
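The meta-language idea can be sketched as follows (my construction, assuming the standard state-transformer interpretation of imperative commands): each command of a tiny object language is mapped compositionally to a function from states to states in the meta-language, with plain Python functions standing in for typed lambda-terms.

```python
# States are dicts mapping variable names to integers.
# [[c]] for a command c is a state transformer: State -> State.
# [[e]] for an expression e is a function State -> int.

def const(n):
    return lambda s: n

def var(x):
    return lambda s: s[x]

def add(e1, e2):
    # [[e1 + e2]] is built only from [[e1]] and [[e2]] -- compositional.
    return lambda s: e1(s) + e2(s)

def assign(x, expr):
    # [[x := e]] updates the state at x with the meaning of e.
    return lambda s: {**s, x: expr(s)}

def seq(c1, c2):
    # [[c1; c2]] = [[c2]] composed with [[c1]] -- again, only the
    # meanings of the parts are used, never their syntax.
    return lambda s: c2(c1(s))

# [[y := 1; y := y + x]] applied to the initial state {x: 41}
prog = seq(assign("y", const(1)),
           assign("y", add(var("y"), var("x"))))
print(prog({"x": 41}))  # prints {'x': 41, 'y': 42}
```

Replacing the semantic domain (here, functions on dicts) while keeping the same compositional translation is precisely what lets one meta-level proof serve many object languages.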
What is the advantage of this approach? Maybe it makes sense to look at it from an economic point of view. If we want to prove something interesting about a class of object programs, we have two options:
- Prove it directly at the object level.
- Prove that the translation to the meta-level (and back) 'preserves' the property, then prove the property at the meta-level, and push the result back to the object level.
The combined cost of the latter is probably higher than the cost of the former, but the cost of proving the translation correct can be amortised over all future uses, while the cost of proving the property at the meta-level is much smaller than that of a proof at the object level.
The original order-theoretic approach to denotational semantics has so far not lived up to this promise, because complicated language features such as object orientation, concurrency and distributed computation have not yet been given precise order-theoretic semantics. By "precise" I mean semantics that matches the natural operational semantics of such languages.
Is it worth learning denotational semantics? If you mean order-theoretic approaches to denotational semantics, then probably not, unless you want to
work in the theory of programming languages and need to understand older papers. Another reason for learning order-theoretic approaches to denotational semantics is the beauty of this approach.
Context
StackExchange Computer Science Q#63874, answer score: 13