
how to do incremental construction of the minimal model in logic programming?

Submitted by: @import:stackexchange-cs

Problem

I was reading a book titled "Essentials of Logic Programming". Most parts of the book are easy to understand, but now I am having a problem with Theme 45: incremental construction of the minimal model. The relevant parts are shown below.

Some notations: $P$ is a definite program, $B(P)$ is the Herbrand base of $P$, $G(P)$ is a ground instantiation of $P$, $MM(P)$ is a minimal model of $P$.

Theme 45 explains the incremental construction of the minimal model as follows.

The method starts by choosing some interpretation $I_1 \subseteq B(P)$. Later on, we shall see that there is a preferred choice of $I_1$ and that not all choices are adequate. Choosing $I_1$ can be viewed as guessing which atoms have to be true in any model of $P$. Now consider any clause in $G(P)$
q if body

if our guess assigns true to all atoms in this body, then an immediate consequence of that guess is that q should also be made true, for otherwise the clause as a whole would be made false. Our next iterate $I_2$ comprises just those head atoms q made true by this argument. This new iterate will not necessarily contain all those body atoms which justified the introduction of the new atoms q; this does not matter: successive iterates will eventually introduce and retain all of those atoms which must be true in any model of the program. Consider this example of a program $P$ on the domain $H=\{ dov, chris\}$:

kind(X) if nice(X)

nice(X)

whose ground instantiation $G(P)$ in highly abbreviated form is

kind(dov) if nice(dov)

kind(Chris) if nice(Chris)

nice(dov)

(where K=kind, N=nice, D=dov,C=Chris)

and suppose we choose $I_1= \{ kind(dov),nice(Chris)\}$.
Applying the above method yields $I_2= \{ nice(dov), kind(Chris)\}$ and $I_3= \{ nice(dov), kind(dov)\}$
All further iterates merely replicate $I_3$, which is accordingly referred to as a fixpoint. We have, in fact, converged upon the minimal model $MM(P)$.

Let me explain how I understand or did not understand this example.


Solution

This paragraph describes the immediate consequence operator $T_P(I) = \{ a \mid \text{there exists a rule } r \in G(P) \text{ with } head(r)=a \text{ and } body(r) \subseteq I \}$ for an interpretation $I$, modelled as a set of ground atoms.
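Here is a minimal executable sketch of $T_P$ in Python (not from the book; the encoding of a ground rule as a (head, body) pair of string atoms is my own choice):

```python
def t_p(rules, interpretation):
    """T_P(I): the heads of all rules whose body atoms all lie in I."""
    return {head for head, body in rules if body <= interpretation}

# The ground instantiation G(P) from the example; a fact has an empty body.
G_P = [
    ("kind(dov)",   frozenset({"nice(dov)"})),
    ("kind(chris)", frozenset({"nice(chris)"})),
    ("nice(dov)",   frozenset()),               # fact: body vacuously true
]

I1 = {"kind(dov)", "nice(chris)"}
print(sorted(t_p(G_P, I1)))  # ['kind(chris)', 'nice(dov)']
```

Note that both effects of the operator fall out of one set comprehension: heads with a satisfied body are added, and everything else (including atoms of $I$ itself) is dropped.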

What is the significance of this operator? We start from an interpretation and try to find a model. It might be the case that there are rules whose body is true but whose head is not yet true. In order to obtain a model we have to add such heads to the interpretation, which is what $T_P$ does. Since we are especially interested in the least model, we also don't want unnecessary elements, i.e. atoms for which there is no rule with a true body. This is the second effect of this operator.

Now we look at the example. They start with $I_1 = \{ kind(dov), nice(Chris)\}$, and we check every rule from $G(P)$ to compute $I_2 = T_P(I_1)$.

  • kind(dov) if nice(dov): nice(dov) is not in $I_1$, so this rule does not put kind(dov) into $I_2$; so far $I_2 = \emptyset$.



  • kind(Chris) if nice(Chris): nice(Chris) is in $I_1$, so we need kind(Chris) in $I_2$; now $I_2 = \{ kind(Chris)\}$.



  • nice(dov): This rule has no body, so it is always true. Hence we also add its head; now $I_2 = \{kind(Chris), nice(dov) \}$.



Now we repeat this process with $I_2$ and compute $I_3 = T_P(I_2)$. At the start $I_3 = \emptyset$, as before.

  • kind(dov) if nice(dov): Now nice(dov) is in $I_2$, so $I_3 = \{kind(dov) \}$



  • kind(Chris) if nice(Chris): nice(Chris) is not in $I_2$ so $I_3$ stays the same.



  • nice(dov): is the same as previously



So $I_3 = \{ kind(dov), nice(dov) \}$. Try it once more and you will notice that $I_4 = T_P(I_3)$ equals $I_3$, i.e. $I_3$ is a fixed point of $T_P$. This is significant, as we can now stop our computation.
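The whole iteration above can be run mechanically. A sketch (again with my own (head, body) rule encoding, and a step limit as a guard, since not every starting interpretation converges):

```python
def t_p(rules, interpretation):
    """T_P(I): the heads of all rules whose body atoms all lie in I."""
    return {head for head, body in rules if body <= interpretation}

def iterate(rules, start, max_steps=100):
    """Return the iterates I_1, I_2, ... up to the first fixed point."""
    iterates = [set(start)]
    for _ in range(max_steps):
        nxt = t_p(rules, iterates[-1])
        if nxt == iterates[-1]:          # I_{k+1} = I_k: fixed point reached
            return iterates
        iterates.append(nxt)
    raise RuntimeError("no fixed point reached within the step limit")

G_P = [
    ("kind(dov)",   frozenset({"nice(dov)"})),
    ("kind(chris)", frozenset({"nice(chris)"})),
    ("nice(dov)",   frozenset()),
]

for i, I in enumerate(iterate(G_P, {"kind(dov)", "nice(chris)"}), start=1):
    print(f"I_{i} = {sorted(I)}")
# I_1 = ['kind(dov)', 'nice(chris)']
# I_2 = ['kind(chris)', 'nice(dov)']
# I_3 = ['kind(dov)', 'nice(dov)']
```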

Is this dependent on the program, or were we just lucky? If you analyse the operator $T_P$ on interpretations, you will notice that it is monotone, and by the Knaster–Tarski fixed-point theorem a least fixed point always exists. This can be proven to coincide with the minimal model of a definite logic program.
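Since the Herbrand base of the example has only four atoms, monotonicity ($I \subseteq J$ implies $T_P(I) \subseteq T_P(J)$) can even be checked exhaustively over all $2^4 = 16$ interpretations. A sketch, reusing the hypothetical (head, body) rule encoding from above:

```python
from itertools import combinations

def t_p(rules, interpretation):
    """T_P(I): the heads of all rules whose body atoms all lie in I."""
    return {head for head, body in rules if body <= interpretation}

G_P = [
    ("kind(dov)",   frozenset({"nice(dov)"})),
    ("kind(chris)", frozenset({"nice(chris)"})),
    ("nice(dov)",   frozenset()),
]

base = ["kind(dov)", "kind(chris)", "nice(dov)", "nice(chris)"]
subsets = [set(c) for r in range(len(base) + 1) for c in combinations(base, r)]

# Check I ⊆ J  ⇒  T_P(I) ⊆ T_P(J) for every pair of interpretations.
assert all(t_p(G_P, I) <= t_p(G_P, J)
           for I in subsets for J in subsets if I <= J)
print(f"T_P is monotone on all {len(subsets)} interpretations")
```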

The interpretation $I_1$ was chosen arbitrarily. Does it matter which interpretation we start with? Consider the following logic program $P_2$ (which is already grounded):

$p \leftarrow q$
$q \leftarrow p$

You will notice that it has two fixed points, $\emptyset$ and $\{ p, q\}$. Each of them is a model, but $\emptyset$ is the least model, and it is what you get by starting with $\emptyset$. In fact, the Kleene fixed-point theorem tells us that starting from the minimum element is the way to compute the least fixed point (provided some conditions hold).

If, however, you started with $\{p\}$, you would alternate between $\{p\}$ and $\{q\}$ and would never reach a fixed point, so the choice matters.
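This oscillation is easy to observe by running $T_{P_2}$ directly (same hypothetical rule encoding as before):

```python
def t_p(rules, interpretation):
    """T_P(I): the heads of all rules whose body atoms all lie in I."""
    return {head for head, body in rules if body <= interpretation}

# P_2: p <- q and q <- p, already grounded.
P2 = [("p", frozenset({"q"})), ("q", frozenset({"p"}))]

I = {"p"}
for _ in range(4):
    I = t_p(P2, I)
    print(sorted(I))             # alternates: ['q'], ['p'], ['q'], ['p']

print(t_p(P2, set()) == set())   # True: the empty set is already a fixed
                                 # point, and it is the least model
```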

Context

StackExchange Computer Science Q#162204, answer score: 2
