The following article explains how to convert a sentence into clause form. The technique represents a complex sentence as a set of simple clauses so that resolution can be applied.
The technique is described below:
Algorithm for Converting a Sentence into Clauses (CNF):
Step I: Elimination of if-then operator:
Replace the "→" operator using the equivalence P → Q ≡ ¬P ∨ Q.
By replacing the 'if-then' operator with negation and the OR operator, we find:
∀X (¬(Child (X) ∧ ∃Y (Takes (X, Y) ∧ Biscuit (Y))) ∨ Loves (john, X))
Step II: Reduction of the scope of negation:
Reduce the scope of each ¬ sign by applying whichever of the following equivalences fits:
¬(P ∧ Q) ≡ ¬P ∨ ¬Q
¬(P ∨ Q) ≡ ¬P ∧ ¬Q
¬∀X P ≡ ∃X ¬P
¬∃X P ≡ ∀X ¬P
¬¬P ≡ P
In the present context, we rewrite the sentence as:
∀X (¬Child (X) ∨ ∀Y (¬Takes (X, Y) ∨ ¬Biscuit (Y)) ∨ Loves (john, X))
Step III: Renaming the variable within the scope of quantifiers:
Rename ∃X as ∃Y when {∃X} is a subset/proper subset of {∀X}. In the present context, since X and Y are already distinct, this operation need not be carried out.
Step IV: Moving of quantifiers in the front of the expression:
Bring all quantifiers at the front of the expression.
Applying this to the example yields:
∀X ∀Y (¬Child (X) ∨ ¬Takes (X, Y) ∨ ¬Biscuit (Y) ∨ Loves (john, X))
Step V: Replacing existential quantifiers by Skolem functions of the universal quantifiers:
When an existential quantifier (Y) lies within the scope of a universal quantifier (X), replace Y by S (X), where S is a Skolem function. In this example no existential quantifier remains after Step II, so the situation does not arise. The universal quantifiers are then dropped from the sentence.
Step VI: Putting the resulting expression in conjunctive normal form (CNF):
For example, if the original expression is in the form P ∨ (Q ∧ R), then replace it by (P ∨ Q) ∧ (P ∨ R).
In the present context the resulting expression, expression (3), is already in CNF, so no operation is needed at this step.
Step VII: Writing one clause per line:
If the original expression is a conjunction in CNF, rewrite it with one clause per line, as illustrated below.
Original Expression:
After writing one clause per line, the resulting expression becomes as follows:
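The propositional core of these steps (I, II and VI) can be sketched in Python. This is only an illustrative sketch under assumed conventions: formulas are nested tuples such as ('->', 'P', ('&', 'Q', 'R')), connectives are binary, and the quantifier-handling steps (III to V) are omitted.

```python
# Sketch of Steps I, II and VI on propositional formulas.
# A formula is an atom (string) or a tuple: ('~', f), ('&', f, g),
# ('|', f, g) or ('->', f, g).

def elim_imp(f):
    """Step I: replace P -> Q by ~P | Q throughout."""
    if isinstance(f, str):
        return f
    if f[0] == '~':
        return ('~', elim_imp(f[1]))
    op, p, q = f
    p, q = elim_imp(p), elim_imp(q)
    if op == '->':
        return ('|', ('~', p), q)
    return (op, p, q)

def push_neg(f):
    """Step II: reduce the scope of ~ using De Morgan's laws."""
    if isinstance(f, str):
        return f
    if f[0] == '~':
        g = f[1]
        if isinstance(g, str):
            return f                      # negation of an atom: done
        if g[0] == '~':                   # ~~P       ==>  P
            return push_neg(g[1])
        if g[0] == '&':                   # ~(P & Q)  ==>  ~P | ~Q
            return ('|', push_neg(('~', g[1])), push_neg(('~', g[2])))
        if g[0] == '|':                   # ~(P | Q)  ==>  ~P & ~Q
            return ('&', push_neg(('~', g[1])), push_neg(('~', g[2])))
        return ('~', push_neg(g))         # assumes implications already gone
    op, p, q = f
    return (op, push_neg(p), push_neg(q))

def distribute(f):
    """Step VI: distribute | over &, e.g. P | (Q & R) ==> (P | Q) & (P | R)."""
    if isinstance(f, str) or f[0] == '~':
        return f
    op, p, q = f[0], distribute(f[1]), distribute(f[2])
    if op == '|':
        if isinstance(p, tuple) and p[0] == '&':
            return ('&', distribute(('|', p[1], q)), distribute(('|', p[2], q)))
        if isinstance(q, tuple) and q[0] == '&':
            return ('&', distribute(('|', p, q[1])), distribute(('|', p, q[2])))
    return (op, p, q)

def to_cnf(f):
    return distribute(push_neg(elim_imp(f)))
```

For instance, to_cnf(('|', 'P', ('&', 'Q', 'R'))) yields ('&', ('|', 'P', 'Q'), ('|', 'P', 'R')), the distribution described in Step VI.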
This algorithm can be illustrated with the help of the following examples:
Example 1:
Rewrite the following sentences in FOL:
1. Coconut-crunchy is a biscuit.
2. Mary is a child who takes coconut-crunchy.
3. John loves children who take biscuits.
4. John loves Mary.
The above statements can be represented in FOL using two variables, X and Y.
1. Biscuit (coconut-crunchy)
2. Child (mary) ∧ Takes (mary, coconut-crunchy)
3. ∀X ((Child (X) ∧ ∃Y (Takes (X, Y) ∧ Biscuit (Y))) → Loves (john, X))
4. Loves (john, mary)
With reference to the above, we get the following line as the final expression:
Child (X), Takes (X, Y), Biscuit (Y) → Loves (john, X).
It may be noted that the resulting expression derived above is not much different from expression (3). This, however, is not the case for all complex sentences; for example, consider the following complex sentence and the clause forms corresponding to it.
Example 2:
Convert the following expression into clausal form:
The two clauses of the final form are understood to be universally quantified in the variable y and connected by a conjunction symbol.
Note that the set of clauses produced is not equivalent to the original expression, but satisfiability is retained. In simple problems it is often possible to write down the clausal form directly, without working through all the above steps one by one.
Example 3:
∀X (Loves (ravana, X) → Female (X))
∧ ¬∃X (Loves (X, Brother-of (X)) ∧ Female (X))
The clause forms for the above expression are:
(a) Loves (ravana, X) → Female (X)
(b) Loves (s(X), Brother-of (s(X))), Female (X) →⊥
While the meaning of the first clause is obvious, the second clause means that it is impossible that there exists a female X who loves her brother. The inverted T (⊥) is called the Falsum operator; its opposite is Verum (⊤), which denotes that an expression is true.
The Basis of Inference (Resolution):
Let us first understand substitution.
Consider literal, Man (Gaurav)
and its complement ¬ Man (Gaurav)
Taken together, Man (Gaurav) and ¬ Man (Gaurav) produce a contradiction (an empty clause).
But the clauses Man (Gaurav) and ¬ Man (Saurabh) do not produce a contradiction. Can you see why? To establish a contradiction there must exist a substitution that makes the two literals complementary. A simple recursive method for finding such a substitution is called unification.
For unification to work the following procedure is adopted:
Procedure:
Check the predicate symbols: if they are the same, proceed; otherwise report failure. For example, LOVE (Rama, Ravana) and HATE (Rama, Ravana) cannot be unified, because the predicates are different though the arguments are the same.
First the predicate symbols must match; only then do we check the arguments, one pair at a time. If the first pair matches, continue with the second, and so on. To test each argument pair, call the unification procedure recursively. Any substitution which makes two or more expressions equal is called a unifier for those expressions.
The following are the rules of unification:
The following examples illustrate unification.
Example 1:
Unify the expressions:
P (x, y) and P (y, z)
Compare the first arguments, x and y: substituting y for x (the substitution y/x) makes them equal, and the two literals become P (y, y) and P (y, z). Now compare the second arguments, y and z: substituting z for y (z/y) makes these equal as well. Composing the two substitutions gives {z/x, z/y}, and under it both literals become:
P (z, z)
that is, the same literal.
Example 2:
The two literals Hate (x, y) and
Hate (Daryodana, z) can be unified with any one of the following substitutions:
(Daryodana/x, z/y)
(Daryodana/x, y/z)
(Daryodana/x, Bhim/y, Bhim/z)
(Daryodana/x, Arjun/y, Arjun/z)
All of these produce a match despite the lexical variation. But the last two substitutions, though they do produce a match, cannot be considered good substitutions, because they make more substitutions than are absolutely required. Hence the most general unifier needs to be determined.
Unification is very useful in natural language parsers.
Unification of Predicates:
We have seen how, by making substitutions, different logical expressions can be made to look identical. This process is called unification and is a key component of all first-order inference algorithms. The UNIFY algorithm takes two sentences and returns a unifier for them, if one exists.
UNIFY (p, q) = θ
where SUBST (θ, p) = SUBST (θ, q)
Example: Suppose we have a query:
Whom does Ravana know? or Knows (Ravana, x)
Some answers to this query can be found by finding all sentences in the KB which unify with Knows (Ravana, x). Suppose the KB contains the following sentences:
Knows (Ravana, Sita)
Knows (y, Ram)
Knows (y, Mother (y))
Knows (x, Radha)
The following are the results of unifying the query Knows (Ravana, x) with each of these sentences:
UNIFY (Knows (Ravana, x), Knows (Ravana, Sita)) = {x/Sita} (read the sign ‘/’ as ‘is substituted by’)
UNIFY (Knows (Ravana, x), Knows (y, Ram)) = {x/Ram, y/Ravana}
UNIFY (Knows (Ravana, x), Knows (y, Mother (y))) = {y/Ravana, x/Mother (Ravana)}
UNIFY (Knows (Ravana, x), Knows (x, Radha)) = fail
In the last unification the variable x cannot be instantiated to both the constants Ravana and Radha. This problem can be tackled by assigning different variables to the sentences
Knows (Ravana, x)
and Knows (x, Radha)
say by changing x in Knows (x, Radha) to y making it Knows (y, Radha).
Now the unification will work by standardising one of the two sentences being unified, which means renaming its variables to avoid name clashes. This is called standardizing apart the sentences being unified.
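Standardizing apart can be sketched in a few lines of Python. The representation is an assumption for illustration: variables are lowercase strings, constants and predicate names are capitalised, and compound expressions are tuples.

```python
import itertools

_fresh = itertools.count()

def standardize_apart(expr, mapping):
    """Rename every variable in expr to a fresh name, recording the
    renaming in mapping so repeated variables stay consistent."""
    if isinstance(expr, tuple):
        return tuple(standardize_apart(e, mapping) for e in expr)
    if isinstance(expr, str) and expr[0].islower():     # a variable
        if expr not in mapping:
            mapping[expr] = 'v%d' % next(_fresh)
        return mapping[expr]
    return expr                                         # constant or predicate
```

Renaming Knows (x, Radha) this way yields something like Knows (v0, Radha), so its variable no longer clashes with the x in Knows (Ravana, x).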
Unification works well in such cases. But there can be more than one unifier, and then a difficulty arises, as illustrated below.
For example, the two predicates Knows (Ravana, x) and Knows (y, z) can be unified.
Two possible substitutions are {y/Ravana, x/z} and {y/Ravana, x/Ravana, z/Ravana}.
The first substitution gives Knows (Ravana, z) and the second gives Knows (Ravana, Ravana). The second result can be obtained from the first by the additional substitution {z/Ravana}; in other words, the first unifier is more general than the second, because it places fewer restrictions on the values of the variables. We thus conclude that for every unifiable pair of expressions there is a single most general unifier (MGU). In this case it is {y/Ravana, x/z}.
The unification procedure finds the MGU of a set of unifiable expressions, or reports failure when the expressions are not unifiable. Typically, we use unification to discover whether one literal can match another: when variables are present in both literals, terms may be substituted for those variables to make the literals identical.
This process of matching one expression against a template expression is sometimes called pattern matching, and it plays a key role in AI systems. The unification process is more general than what is usually meant by pattern matching.
Conditions of Unification:
(i) Both the predicates to be unified should have an equal number of terms.
(ii) Neither ti nor si may be a negation operator or a predicate; they may not be different function symbols; and if ti is a term occurring within si, or si is a term occurring within ti (the occurs check), then unification is not possible.
Unification Algorithm:
Given: predicates P (t1, t2, …, tn) and Q (s1, s2, …, sm).
To decide: whether P and Q are unifiable, and to find a set S of the required substitutions.
Procedure Unification (P, Q, S, Unify)
Begin
S := Null;
While P and Q contain a Next-symbol do
Begin
Symb1 := Next-symbol (P);
Symb2 := Next-symbol (Q);
If Symb1 ≠ Symb2 Then do
Begin Case of:
Symb1 or Symb2 = predicate: Unify := fail;
Symb1 = constant and Symb2 = different constant: Unify := fail;
Symb1 or Symb2 = ¬: Unify := fail;
Symb1 and Symb2 = different functions: Unify := fail;
Symb1 = variable and Symb2 = term and variable ∈ term: Unify := fail;
Symb2 = variable and Symb1 = term and variable ∈ term: Unify := fail;
Else If Symb1 = variable or constant and Symb2 = term Then do
Begin
S := S ∪ {variable or constant/term};
P := P {variable or constant/term};
End;
Else If Symb2 = variable or constant and Symb1 = term Then do
Begin
S := S ∪ {variable or constant/term};
Q := Q {variable or constant/term};
End;
End Case of;
End While;
If P or Q contain a Next-symbol Then Unify := fail
Else Unify := Success;
End.
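The procedure above can be turned into a short runnable Python sketch. The conventions are assumptions for illustration: variables are lowercase strings, constants and predicate names are capitalised strings, and predicates/functions are tuples such as ('Knows', 'Ravana', 'x'). The substitution set S is kept as a dictionary.

```python
def is_var(t):
    """Variables are lowercase strings by convention."""
    return isinstance(t, str) and t[0].islower()

def walk(t, s):
    """Follow variable bindings recorded in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs check: does variable v appear inside term t under s?"""
    t = walk(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, s) for a in t[1:])
    return False

def unify(p, q, s=None):
    """Return an MGU (as a dict) extending s, or None on failure."""
    if s is None:
        s = {}
    p, q = walk(p, s), walk(q, s)
    if p == q:
        return s
    if is_var(p):
        return None if occurs(p, q, s) else {**s, p: q}
    if is_var(q):
        return None if occurs(q, p, s) else {**s, q: p}
    if isinstance(p, tuple) and isinstance(q, tuple):
        # predicate/function symbols and arity must both match
        if p[0] != q[0] or len(p) != len(q):
            return None
        for a, b in zip(p[1:], q[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None
```

Running unify(('Knows', 'Ravana', 'x'), ('Knows', 'y', ('Mother', 'y'))) returns {'y': 'Ravana', 'x': ('Mother', 'y')}, while unify(('Knows', 'Ravana', 'x'), ('Knows', 'x', 'Radha')) returns None, matching the fail case discussed earlier.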
Most General Unifier (MGU):
Example 1:
The MGU (if it exists) is not necessarily unique. Given the set of equations:
Example 2:
Find the MGU of A (x, f (g (x)), a) and A (b, y, z).
Solution:
SUBST starts with an empty substitution and builds it up by matching the arguments pairwise: the first arguments give b/x, the second then give f (g (b))/y, and the third give a/z, which unifies both clauses.
Hence the MGU is:
{b/x, f (g (b))/y, a/z}
Resolution in Predicate Logic:
In propositional logic it is easy to determine that two complementary literals cannot both be true at the same time: simply look for P and ¬P. In predicate logic this matching process is more complicated, since the arguments of the literals must also be compared.
Since man (x) and ¬ man (bobby) can be unified (with x = bobby), the two expressions are complementary under that substitution. This corresponds to the intuition that man (x) cannot be true for all x if there is known to be some x, say bobby, for which man (x) is false. Thus, in order to use resolution on predicate-logic expressions to draw inferences (theorem proving), we use the unification algorithm to locate pairs of literals which cancel out.
Let us now assume that we want to resolve two clauses:
1. man (Daryodana)
2. ¬ man (x) v mortal (x)
The literal man (Daryodana) can be unified with the literal ¬ man (x) with the substitution Daryodana/x, meaning that with x = Daryodana, man (Daryodana) being true makes ¬ man (Daryodana) false. But we cannot simply cancel out the two man literals as we did in propositional logic and generate the resolvent mortal (x).
Clause 2 states that for every x, either ¬ man (x) or mortal (x) is true. Combining it with clause 1, we can conclude only that mortal (Daryodana) is true; it is not necessary that mortal (x) be true for all x, since for some values of x, ¬ man (x) might be true, making mortal (x) irrelevant to the truth of the complete clause.
So the resolvent generated by clauses 1 and 2 must be mortal (Daryodana) which we get by applying the result of the unification process to the resolvent. The resolution process can then proceed from there to discover whether mortal (Daryodana) leads to a contradiction with other available clauses.
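The resolution step just described can be sketched in Python. The representation is assumed for illustration: a clause is a list of literals, a positive literal is a tuple like ('Man', 'Daryodana'), a negative literal is ('~', atom), and lowercase strings are variables. The embedded unifier is deliberately minimal (it omits the occurs check).

```python
def unify(p, q, s):
    """A minimal unifier: returns an extended substitution dict or None."""
    while isinstance(p, str) and p in s:
        p = s[p]
    while isinstance(q, str) and q in s:
        q = s[q]
    if p == q:
        return s
    if isinstance(p, str) and p.islower():            # p is a variable
        return {**s, p: q}
    if isinstance(q, str) and q.islower():            # q is a variable
        return {**s, q: p}
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q):
        for a, b in zip(p, q):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

def subst(s, t):
    """Apply substitution s to a term or literal."""
    if isinstance(t, tuple):
        return tuple(subst(s, a) for a in t)
    if isinstance(t, str) and t in s:
        return subst(s, s[t])
    return t

def resolve(c1, c2):
    """Resolve two clauses on the first complementary literal pair found."""
    for l1 in c1:
        for l2 in c2:
            if (l1[0] == '~') != (l2[0] == '~'):      # one positive, one negative
                a1 = l1[1] if l1[0] == '~' else l1
                a2 = l2[1] if l2[0] == '~' else l2
                s = unify(a1, a2, {})
                if s is not None:
                    rest = [l for l in c1 if l is not l1] + \
                           [l for l in c2 if l is not l2]
                    return [subst(s, l) for l in rest]
    return None                                       # no resolvent exists
```

Resolving [('Man', 'Daryodana')] against [('~', ('Man', 'x')), ('Mortal', 'x')] yields [('Mortal', 'Daryodana')], exactly the resolvent mortal (Daryodana) derived above.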
With the help of this example we have illustrated how variables are standardised in the process of converting an expression into clause form. This standardisation of variables also makes it easy to use the unifier to perform substitutions when creating the resolvent. If two instances of the same variable occur, they must be given identical substitutions.
The resolution algorithm for theorem proving in FOL can now be stated, assuming a given set of statements F and a statement P to be proved.
Algorithm Resolution:
1. Convert all the statements of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in 1.
3. Repeat until either a contradiction is found, no progress can be made or a predetermined amount of effort has been spent.
i. Select two clauses. Call these the parent clauses.
ii. Resolve them together. The resolvent will be the disjunction of all the literals of both parent clauses, with appropriate substitutions performed and with the following exception: if there is a pair of literals T1 and ¬T2 such that one of the parent clauses contains T1 and the other contains ¬T2, and if T1 and T2 are unifiable, then neither T1 nor ¬T2 should appear in the resolvent. T1 and ¬T2 are called complementary literals. Use the substitution produced by the unification to create the resolvent. If there is more than one pair of complementary literals, only one pair should be omitted from the resolvent.
iii. If the resolvent is the empty clause, then a contradiction has been found. If it is not, then add it to the set of clauses available to the procedure.
Example:
Theorem Proving in FOL with the Resolution Principle:
Suppose, we have to prove a theorem Th from a set of axioms. We denote it by
(A1, A2, …, An) ⊨ Th
Let
A1 = Biscuit (coconut-crunchy)
A2 = Child (mary) ∧ Takes (mary, coconut-crunchy)
A3 = ∀X ((Child (X) ∧ ∃Y (Takes (X, Y) ∧ Biscuit (Y))) → Loves (john, X))
and A4 = Th = Loves (john, mary)
First of all, let us express A1 through A4 in CNF. Expressions A1 and A4 are already in CNF. Expression A2 can be converted into CNF by breaking it into two clauses:
Child (mary) and
Takes (mary, coconut-crunchy)
Further, after simplification, the CNF of expression A3 becomes:
¬ Child (X)∨ ¬ Takes (X, Y) ∨ ¬ Biscuit (Y) ∨ Loves (john, X)
Explanation:
In order to prove the theorem we form pairs of clauses, one of which contains a predicate in positive form and the other the same predicate in negative form. By Robinson's rule, to be proved subsequently, both complementary predicates drop out; suitable values of the variables used for unification are substituted in the resulting expression. The resolution tree proving the theorem (goal) is shown in Fig. 6.4.
John loves Mary: loves (john, mary).
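Since Fig. 6.4 is not reproduced here, the refutation can also be written out clause by clause (a reconstruction from the clauses derived above):

```
1. Biscuit (coconut-crunchy)                                      [A1]
2. Child (mary)                                                   [from A2]
3. Takes (mary, coconut-crunchy)                                  [from A2]
4. ¬Child (X) ∨ ¬Takes (X, Y) ∨ ¬Biscuit (Y) ∨ Loves (john, X)   [A3 in CNF]
5. ¬Loves (john, mary)                                            [negation of Th]
6. ¬Child (mary) ∨ ¬Takes (mary, Y) ∨ ¬Biscuit (Y)               [4, 5 with mary/X]
7. ¬Takes (mary, Y) ∨ ¬Biscuit (Y)                                [6, 2]
8. ¬Biscuit (coconut-crunchy)                                     [7, 3 with coconut-crunchy/Y]
9. ⊥ (empty clause)                                               [8, 1]
```

The empty clause at line 9 establishes the contradiction, proving Loves (john, mary).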
Heuristics to Speed Up the Process of Resolution:
Pure resolution search tends to be slow because:
1. There are many clauses in the initial KB.
2. Each step adds a new clause, which can itself be used in further resolutions.
3. The number of possible resolution combinations explodes.
The resolution process will still find a contradiction, if one exists, provided the choice of clauses to resolve together at each step is made judiciously.
But this process can be speeded up with the help of certain heuristics.
These are:
1. Only resolve pairs of clauses which contain complementary literals, since only such resolutions produce new clauses which are harder to satisfy than their parents. To facilitate this, index clauses by the predicates they contain, combined with an indication of whether each predicate is negated. Then, given a particular clause, possible resolvents which contain a complementary occurrence of one of its predicates can be located directly.
2. Eliminate certain clauses as soon as they are generated so that they cannot participate in later resolutions. Two kinds of clauses should be eliminated: tautologies (which are always satisfied and so can never contribute to a contradiction) and clauses which are subsumed by other clauses (and are therefore easier to satisfy and redundant).
3. Whenever possible, resolve either with one of the clauses which is part of the statement we are trying to refute, or with a clause generated by a resolution with such a clause. This is called the set-of-support strategy and corresponds to the intuition that the contradiction we are looking for must involve the statements we are trying to prove. Any other contradiction would say that the previously believed statements were inconsistent. But care must be taken not to lose completeness.
4. Whenever possible, resolve with clauses which have a single literal. Such resolutions generate new clauses with fewer literals than the larger of their parent clauses, and thus are probably closer to the goal of producing a resolvent with zero literals (the empty clause). This method is called the unit-preference strategy.
This inference process can be captured as a single inference rule which we call Generalized Modus Ponens: for atomic sentences pi, pi′ and q, there must be a substitution θ such that SUBST (θ, pi′) = SUBST (θ, pi) for all i.
There are n + 1 premises to this rule: the n atomic sentences pi′ and the one implication p1 ∧ … ∧ pn ⇒ q. The conclusion is the result of applying the substitution θ to the consequent q.
For our example considered earlier:
p1′ is King (Ravana), and p1 is King (x)
p2′ is Greedy (y), and p2 is Greedy (x)
θ is {x/Ravana, y/Ravana}, and q is Evil (x)
SUBST (θ, q) is Evil (Ravana).
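As a small runnable check of this example, the rule's applicability condition and conclusion can be computed with an explicit SUBST function (the tuple representation of atoms is an assumption for illustration):

```python
def subst(theta, term):
    """Apply substitution theta to a term; tuples are compound terms."""
    if isinstance(term, tuple):
        return tuple(subst(theta, t) for t in term)
    return theta.get(term, term)

# Premises p1', p2' and the rule King(x) & Greedy(x) => Evil(x)
premises      = [('King', 'Ravana'), ('Greedy', 'y')]
rule_premises = [('King', 'x'), ('Greedy', 'x')]
conclusion    = ('Evil', 'x')

theta = {'x': 'Ravana', 'y': 'Ravana'}

# GMP applies because SUBST(theta, pi') == SUBST(theta, pi) for all i
assert all(subst(theta, a) == subst(theta, b)
           for a, b in zip(premises, rule_premises))

print(subst(theta, conclusion))   # ('Evil', 'Ravana')
```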
It is easy to show that Generalized Modus Ponens is a sound inference rule. First, we observe that, for any sentence p whose variables are assumed to be universally quantified and for any substitution θ,
p ⊨ SUBST (θ, p)
This holds for the same reason that the Universal Instantiation rule holds. It holds in particular for a θ which satisfies the conditions of the Generalized Modus Ponens rule. Thus from p1′, …, pn′ we can infer
SUBST (θ, p1′) ∧ … ∧ SUBST (θ, pn′)
and from the implication p1 ∧ … ∧ pn ⇒ q we can infer
SUBST (θ, p1) ∧ … ∧ SUBST (θ, pn) ⇒ SUBST (θ, q).
Now, θ in Generalized Modus Ponens is defined so that SUBST (θ, pi′) = SUBST (θ, pi) for all i; therefore the first of these two sentences matches the premise of the second exactly. Hence SUBST (θ, q) follows by Modus Ponens.
Generalized Modus Ponens is a lifted version of Modus Ponens: it raises Modus Ponens from propositional to first-order logic. The key advantage of lifted inference rules over propositionalization is that they make only those substitutions which are required to allow particular inferences to proceed. However, Generalized Modus Ponens is less general than full resolution, since it applies only to definite (Horn) clauses.