In this article we will discuss Green's and Kowalski's formulations used for solving robot problems.
1. Green’s Formulation:
One of the first attempts to solve robot problems was by Green (1969), who formulated them in such a way that a resolution theorem-proving system (a commutative system) could solve them. This formulation involved one set of assertions which described the initial state and another set which described the effects of the various robot actions on states. To keep track of which facts were true in which state, Green included a ‘state’ or ‘situation’ variable in each predicate.
The goal condition was then described by a formula with an existentially quantified state variable. That is, the system would attempt to prove that there existed a state in which a certain condition was true. A constructive proof method could then be used to produce the set of actions which would create the desired state. In Green's system, all assertions (and the negation of the goal condition) were converted to clause form for a resolution theorem prover, although other deduction systems could have been used as well.
An example problem will help to illustrate exactly how this method works. Unfortunately, the notation needed in these theorem-proving formulations is a bit cumbersome, and the block-stacking examples we have been using must be simplified somewhat to keep the examples manageable.
Suppose we have the initial situation depicted in Fig. 8.11. There are just four discrete positions on a table, namely D, E, F and G; and there are three blocks, namely A, B and C, resting on three of the positions as shown. Suppose we name this initial state S0. Then we denote the fact that block A is on position D in S0 by the literal ON (A, D, S0). The state name is made an explicit argument of the predicate.
The complete configuration of blocks in the initial state is then given by the following set of formulas:
ON (A, D, S0)
ON (B, E, S0)
ON (C, F, S0)
CLEAR (A, S0)
CLEAR (B, S0)
CLEAR (C, S0)
CLEAR (G, S0)
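To make this bookkeeping concrete, here is a minimal sketch in Python (the tuple encoding and names are our own illustration, not Green's notation): each ground literal becomes a tuple whose last element is the explicit state name.

S0 = "S0"
facts = {
    ("ON", "A", "D", S0), ("ON", "B", "E", S0), ("ON", "C", "F", S0),
    ("CLEAR", "A", S0), ("CLEAR", "B", S0),
    ("CLEAR", "C", S0), ("CLEAR", "G", S0),
}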
Now we need a way to express the effects which various robot actions have on states. In theorem-proving formulations, we express these effects by logical implications rather than by STRIPS-form rules.
For example, suppose the robot has an action which can 'transfer' a block x from position y to position z, where y and z might be either the names of other blocks on which block x might be resting or the names of positions on the table. Let us assume that both block x and position z (the target position) must be clear in order to execute this action. We model this action by the expression trans(x, y, z).
When an action is executed in one state, the result is a new state. We use the special functional expression do(action, state) to denote the function which maps a state into the one resulting from an action. Thus, if trans(x, y, z) is executed in state s, the result is a state given by do[trans(x, y, z), s].
The major effect of the action modeled by trans can then be formulated as the following implication:

[CLEAR(x, s) ∧ CLEAR(z, s) ∧ ON(x, y, s) ∧ DIFF(x, z)]
→ [CLEAR(x, do[trans(x, y, z), s]) ∧ CLEAR(y, do[trans(x, y, z), s]) ∧ ON(x, z, do[trans(x, y, z), s])]

(All variables in assertions have implicit universal quantification.)
This formula states that if x and z are clear, if x is on y in state s, and if x and z are different, then x and y will be clear and x will be on z in the state resulting from performing the action trans(x, y, z) in state s. (The predicate DIFF does not need a state variable because its truth value is independent of state.)
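Read in a forward direction, the implication behaves like a rule: when its antecedent literals are present for state s, the consequent literals are added for the successor state term. A hedged sketch continuing the tuple encoding above (apply_trans is an illustrative name, not part of Green's system):

def apply_trans(facts, x, y, z, s):
    # Antecedent: CLEAR(x, s), CLEAR(z, s), ON(x, y, s) and DIFF(x, z).
    if (("CLEAR", x, s) in facts and ("CLEAR", z, s) in facts
            and ("ON", x, y, s) in facts and x != z):
        s2 = ("do", ("trans", x, y, z), s)  # the resulting state term
        # Consequent: CLEAR(x), CLEAR(y) and ON(x, z) hold in the new state.
        return facts | {("CLEAR", x, s2), ("CLEAR", y, s2), ("ON", x, z, s2)}, s2
    return facts, None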
But this formula alone does not completely specify the effects of the action. We must also state that certain relations are unaffected by the action. In systems like STRIPS, the rules use the convention that relations not explicitly named in the rule are unaffected. Here, however, the effects and non-effects alike must be stated explicitly. So we must have assertions for each relation not affected by an action.
For example, we need the following assertion to express that the blocks which are not moved stay in the same position:
[ON(u, v, s) ∧ DIFF(u, x)] → ON(u, v, do[trans(x, y, z), s])
and we would need another formula to state that a block u which is clear remains clear when some block v (not equal to u) is put on a block w (not equal to u).
These assertions, describing what stays the same during an action, are sometimes called the frame assertions. In large systems, there may be many predicates used to describe a situation. Green’s formulation would require (for each action) a separate frame assertion for each predicate.
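For instance, the ON frame assertion given above can be mimicked in the same illustrative encoding by copying every ON literal about a block other than x into the successor state:

def frame_on(facts, x, s, s2):
    # ON(u, v, s) and DIFF(u, x) imply ON(u, v, s2), where s2 is the
    # state term do[trans(x, y, z), s].
    carried = {("ON", f[1], f[2], s2)
               for f in facts
               if f[0] == "ON" and f[3] == s and f[1] != x}
    return facts | carried

A separate such function would be needed for CLEAR and, in a larger system, for every other predicate; this is exactly the burden described above.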
This representation could be condensed if we used a higher-order logic, in which we could write a formula something like:

(∀P)[P(s) → P[do(action, s)]]
But higher order logics have their own complications.
After all of the assertions for actions are expressed by implications, we are ready to attempt to solve an actual robot problem.
Suppose we want to achieve the simple goal of having block A on block B.
This goal would be expressed as follows:
(∃s) ON(A, B, s)
The problem can now be solved by finding a constructive proof of the goal formula from the assertions. Any reasonable theorem-proving method might be used.
As already mentioned, Green used a resolution system in which the goal was negated and all formulas were then put into clause form. The system then would attempt to find a contradiction, and an answer extraction process would find the goal state which exists. This state would, in general, be expressed as a composition of do functions, naming the actions involved in producing the goal state.
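The extraction step itself is easy to picture: the proved state term is a nest of do functions, and unwinding it yields the plan. A sketch under the tuple encoding used earlier (extract_plan is an illustrative name):

def extract_plan(state_term):
    # Unwind do[a_n, ... do[a_1, S0] ...] into the sequence [a_1, ..., a_n].
    plan = []
    while isinstance(state_term, tuple) and state_term[0] == "do":
        plan.append(state_term[1])
        state_term = state_term[2]
    return list(reversed(plan))

With the earlier sketches, apply_trans(facts, "A", "D", "B", S0) returns a successor state term whose extraction gives the one-step plan [("trans", "A", "D", "B")].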
Instead of resolution, we could have used one of the rule-based deduction systems. The assertions describing the initial state might be used as facts, and the action and frame assertions might be used as production rules.
The example just cited is trivially simple, of course; we didn't even need to use any of the frame assertions in this case. (We certainly would have had to use them if, for example, our goal had been the compound goal [ON(A, B, s) ∧ ON(B, C, s)]. In that case, we would have had to prove that B stayed on C while putting A on B.)
However, in even slightly more complex examples, the amount of theorem-proving search required to solve a robot problem using this formulation can grow so explosively that the method becomes quite impractical. These search problems together with the difficulties caused by the frame assertions were the major impetus behind the development of the STRIPS problem-solving system.
2. Kowalski’s Formulation:
Kowalski (1974) suggested a different formulation which simplifies the statement of the frame assertions: what would ordinarily be predicates in Green's formulation are made terms.
For example, instead of using the literal ON(A, D, S0) to denote the fact that A is on D in state S0, we use the literal HOLDS[on(A, D), S0]. The term on(A, D) denotes the 'concept' of A being on D; such concepts are treated as individuals in our new calculus. Representing what would normally be relations as individuals is a way of gaining some of the benefits of a higher-order logic in a first-order formulation.
The initial state shown in Fig. 8.11 is then given by the following set of expressions:
1. POSS(S0)
2. HOLDS[on(A, D), S0]
3. HOLDS[on(B, E), S0]
4. HOLDS[on(C, F), S0]
5. HOLDS[clear(A), S0]
6. HOLDS[clear(B), S0]
7. HOLDS[clear(C), S0]
8. HOLDS[clear(G), S0]
The literal POSS(S0) means that the state S0 is a possible state, that is, one that can be reached.
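In the same illustrative Python encoding used earlier, the relations now become nested terms, and HOLDS and POSS are the only predicates:

kfacts = {
    ("POSS", "S0"),
    ("HOLDS", ("on", "A", "D"), "S0"),
    ("HOLDS", ("on", "B", "E"), "S0"),
    ("HOLDS", ("on", "C", "F"), "S0"),
    ("HOLDS", ("clear", "A"), "S0"),
    ("HOLDS", ("clear", "B"), "S0"),
    ("HOLDS", ("clear", "C"), "S0"),
    ("HOLDS", ("clear", "G"), "S0"),
}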
Now, we express part of the effects of actions (the “add-list” literals) by using a separate HOLDS literal for each relation made true by the action.
In the case of our action trans (x, y, z), we have the following expressions:
9. HOLDS[clear(x), do[trans(x, y, z), s]]
10. HOLDS[clear(y), do[trans(x, y, z), s]]
11. HOLDS[on(x, z), do[trans(x, y, z), s]]
(Again, all variables in the assertions are universally quantified).
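Read forwards, assertions 9-11 simply deposit the three 'add-list' terms into the successor state. A sketch under the same assumed encoding:

def add_effects(kfacts, x, y, z, s):
    # Assertions 9-11: the add-list literals of trans(x, y, z).
    s2 = ("do", ("trans", x, y, z), s)
    return kfacts | {("HOLDS", ("clear", x), s2),
                     ("HOLDS", ("clear", y), s2),
                     ("HOLDS", ("on", x, z), s2)}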
Another predicate, PACT, is used to say that it is possible to perform a given action in a given state, that is, that the preconditions of the action match that state description. PACT(a, s) states that it is possible to perform action a in state s.
For our action trans, we thus have:
12. [HOLDS[clear(x), s] ∧ HOLDS[clear(z), s] ∧ HOLDS[on(x, y), s] ∧ DIFF(x, z)]
→ PACT[trans(x, y, z), s]
Next we state that if a given state is possible and if the preconditions of an action are satisfied in that state, then the state resulting from that action is also possible:
13. [POSS(s) ∧ PACT(u, s)] → POSS[do(u, s)]
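Assertions 12 and 13 can likewise be pictured as two small forward rules (the function names are ours):

def pact_trans(kfacts, x, y, z, s):
    # Assertion 12: the preconditions of trans(x, y, z) are satisfied in s.
    return (("HOLDS", ("clear", x), s) in kfacts
            and ("HOLDS", ("clear", z), s) in kfacts
            and ("HOLDS", ("on", x, y), s) in kfacts
            and x != z)

def poss_step(kfacts, x, y, z, s):
    # Assertion 13: a possible state plus an action whose preconditions
    # hold yields a possible successor state.
    if ("POSS", s) in kfacts and pact_trans(kfacts, x, y, z, s):
        s2 = ("do", ("trans", x, y, z), s)
        return kfacts | {("POSS", s2)}
    return kfacts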
The major advantage of Kowalski’s formulation is that we need only one frame assertion for each action.
In our example, the single frame assertion is:
14. [HOLDS(v, s) ∧ DIFF[v, clear(z)] ∧ DIFF[v, on(x, y)]]
→ HOLDS[v, do[trans(x, y, z), s]]
This expression states quite simply that every term v which HOLDS in state s and is different from clear(z) and on(x, y) still HOLDS in the state produced by performing the action trans(x, y, z).
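In the running sketch, this one frame assertion becomes a single comprehension over the HOLDS facts, where Green's formulation needed one frame function per predicate:

def frame(kfacts, x, y, z, s, s2):
    # Assertion 14: every term v HOLDing in s, other than clear(z)
    # and on(x, y), still HOLDS in the successor state s2.
    kept = {("HOLDS", f[1], s2)
            for f in kfacts
            if f[0] == "HOLDS" and f[2] == s
            and f[1] != ("clear", z) and f[1] != ("on", x, y)}
    return kfacts | kept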
A goal for the system is given, as usual, by an expression with an existentially quantified state variable.
If we wanted to achieve B on C and A on B, our goal would be:
(∃s){POSS(s) ∧ HOLDS[on(B, C), s] ∧ HOLDS[on(A, B), s]}
The added conjunct, POSS(s), is needed to require that states be reachable.
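Checking such a goal against a set of derived facts is then a straightforward membership test; in the illustrative encoding:

def goal_holds(kfacts, s):
    # The goal: s is reachable, B is on C, and A is on B in s.
    return (("POSS", s) in kfacts
            and ("HOLDS", ("on", "B", "C"), s) in kfacts
            and ("HOLDS", ("on", "A", "B"), s) in kfacts)

Chaining poss_step, add_effects and frame for trans(B, E, C) and then trans(A, D, B), starting from the kfacts above, produces a state term satisfying this test.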
Assertions 1-14, then, express the basic knowledge needed by a problem solver for this example. If we were to use one of the rule-based deduction systems to solve problems using this knowledge, we might use assertions 1-11 as facts and assertions 12-14 as rules.
The details of operation of such a system would depend on whether the rules were used in a forward or backward manner and on the specific control strategy used by the system. For example, to make the rule-based system 'simulate' the steps which would be performed by a backward production system using STRIPS-form rules, we would force the control strategy of the deduction system first to match one of assertions 9-11 (the 'adds') against the goal. (This step would establish the action through which we were attempting to work backward.)
Next, assertions 13 and 12 would be used to set up the preconditions of that action. Subsequently, the frame assertion, number 14, would be used to regress the other goal conditions through this action. All DIFF predicates should be evaluated whenever possible. This whole sequence would then be repeated for one of the sub-goal predicates, until a set of sub-goals was produced that would unify with fact assertions 1-8.
Other control strategies could, no doubt, be specified which would allow a rule-based deduction system to 'simulate' the steps of STRIPS and other more complex robot problem-solving systems. One way to specify the appropriate control strategies would be to use the ordering conventions on facts and rules which are used by the PROLOG language.
In comparing deduction systems with a STRIPS-like system, we must not be tempted to claim that one type can solve problems which the other cannot. In fact, with suitable control mechanisms, the problem-solving traces of different types of systems can be made essentially identical.
The point is that solving robot problems efficiently with deduction systems requires specialised and explicit control strategies which are implicitly 'built into' the conventions used by systems like STRIPS. STRIPS-like robot problem-solving systems would appear, therefore, to be related to the deduction-based systems in the same way that a higher-level programming language is related to lower-level ones.