Why ⊢ for affirmative predicates and ⊨ for ¬negations?
Problem
I read a book which says that in predicate calculus, syntactic theorem proving coincides (by soundness and completeness) with semantic entailment. This is very useful because it is easier to produce a positive witness in natural deduction, whereas when a proof seems to fail, it is better to switch to the semantic account and exhibit a counterexample.
In particular, Logic in Computer Science says:
to show that Γ ⊢ ψ is valid, we need to provide a proof of ψ from Γ.
Yet, how can we show that ψ is not a consequence of Γ? Intuitively,
this is harder; how can you possibly show that there is no proof of
something? You would have to consider every ‘candidate’ proof and show
it is not one. Thus, proof theory gives a ‘positive’ characterisation
of the logic; it provides convincing evidence for assertions like ‘Γ
⊢ ψ is valid,’ but it is not very useful for establishing evidence
for assertions of the form ‘Γ ⊢ φ is not valid.’
Semantics, on the other hand, works in the opposite way. To show
that ψ is not a consequence of Γ is the ‘easy’ bit: find a model in
which all φ_i are true, but ψ isn’t. Showing that ψ is a consequence
of Γ, on the other hand, is harder in principle. For propositional
logic, you need to show that every valuation (an assignment of truth
values to all atoms involved) that makes all φ_i true also makes ψ
true. If there is a small number of valuations, this is not so bad.
However, when we look at predicate logic, we will find that there are
infinitely many valuations, called 'models' from hereon, to consider.
Thus, in semantics we have a ‘negative’ characterisation of the logic.
We find establishing assertions of the form ‘Γ ⊭ ψ’ (ψ is not a
semantic entailment of all formulas in Γ) easier than establishing ‘Γ
⊨ ψ’ (ψ is a semantic entailment of Γ), for in the former case we need
only talk about one model, whereas in the latter we potentially have
to talk about infinitely many. All this goes to show that it is
important to st…
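For propositional logic the semantic check the quote describes is mechanical: enumerate every valuation of the atoms. Below is a minimal sketch of that enumeration (my own illustration, not from the book; the tuple encoding of formulas and the function names are invented for this example):

```python
from itertools import product

def eval_formula(f, v):
    """Evaluate a formula under valuation v (dict atom -> bool).
    Formulas are nested tuples: ('atom', 'p'), ('not', f),
    ('and', f, g), ('or', f, g), ('imp', f, g)."""
    op = f[0]
    if op == 'atom':
        return v[f[1]]
    if op == 'not':
        return not eval_formula(f[1], v)
    if op == 'and':
        return eval_formula(f[1], v) and eval_formula(f[2], v)
    if op == 'or':
        return eval_formula(f[1], v) or eval_formula(f[2], v)
    if op == 'imp':
        return (not eval_formula(f[1], v)) or eval_formula(f[2], v)
    raise ValueError(op)

def atoms(f):
    """Collect the atom names occurring in a formula."""
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def entails(gamma, psi):
    """Check Gamma |= psi by enumerating all valuations.
    Returns (True, None) if psi holds in every model of Gamma,
    or (False, countermodel): one valuation settles Gamma |/= psi."""
    names = sorted(set().union(atoms(psi), *(atoms(g) for g in gamma)))
    for bits in product([False, True], repeat=len(names)):
        v = dict(zip(names, bits))
        if all(eval_formula(g, v) for g in gamma) and not eval_formula(psi, v):
            return (False, v)
    return (True, None)

p = ('atom', 'p'); q = ('atom', 'q')
print(entails([p, ('imp', p, q)], q))  # → (True, None): modus ponens
print(entails([('imp', p, q)], p))     # → (False, {'p': False, 'q': False})
```

Note the asymmetry the book points out: refuting an entailment returns after the first countermodel, while confirming one must survive all 2^n valuations, and in predicate logic the analogous model space is infinite.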
Solution
The answer is really in the first paragraph of the quoted text, so what I'm about to attempt is really a rephrasing of that.
Remember that $\Gamma \vdash \psi$ means that there exists a formal proof of $\psi$ (in the system from $\Gamma$ blah blah blah). That is, there's some way, starting from the axioms, to syntactically manipulate the symbols according to the rules of the system to end up with $\psi$.
On the other hand, $\Gamma \models \psi$ is a statement about the semantics of the system, i.e. the interpretation of what the symbols
'mean', so $\Gamma \models \psi$ when there's no interpretation that renders all of $\Gamma$ true and $\psi$ false.
So, back to the question. When we want to show that $\Gamma \vdash \psi$ is valid, a formal proof is very nice, because it has no dependence on the meaning (semantics) of the symbols; it is simply a consequence of the rules of the system. On the other hand, if we suspect $\Gamma \vdash \psi$ is not valid, it's very hard to show this directly: you would have to show that every possible route to a formal proof fails, i.e. that no matter what potentially very convoluted order you apply the rules of the system in, starting from the members of $\Gamma$, you can't get to $\psi$. To reiterate, you would effectively have to consider every possible proof and show that it doesn't work.
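A toy illustration of this asymmetry (my own sketch, not part of the original answer): in a fragment whose only rule is modus ponens, every derivable formula is a subformula of $\Gamma$, so the closure under the rule is finite and derivability is decidable by saturation. Full natural deduction has no such easy bound, which is exactly why non-derivability is hard to establish proof-theoretically.

```python
def derivable(gamma, psi):
    """Close gamma under modus ponens: from A and ('imp', A, B), derive B.
    Every derived formula is a subformula of gamma, so this saturation
    terminates; formulas are nested tuples like ('imp', ('atom','p'), ('atom','q'))."""
    known = set(gamma)
    changed = True
    while changed:
        changed = False
        for f in list(known):
            if f[0] == 'imp' and f[1] in known and f[2] not in known:
                known.add(f[2])
                changed = True
    return psi in known

p = ('atom', 'p'); q = ('atom', 'q'); r = ('atom', 'r')
print(derivable([p, ('imp', p, q), ('imp', q, r)], r))  # → True: derive q, then r
print(derivable([('imp', p, q)], q))                    # → False: p itself is not derivable
```

The negative answer here is only trustworthy because the search space is finite; the quoted text's point is that for a full proof system there is no such exhaustive check.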
Semantic entailment works the other way around. To show that $\Gamma \models \psi$ is not valid, 'all' we need to find is some interpretation under which everything in $\Gamma$ is true but $\psi$ is false, i.e. we just have to find that one counterexample. Showing that $\Gamma \models \psi$ is valid is the part that requires lots of cases: we have to show that $\psi$ holds in every interpretation in which everything in $\Gamma$ also holds.
Context
StackExchange Computer Science Q#21596, answer score: 15