In this article we will discuss:- 1. Components of an Expert System 2. Architecture of an Expert System 3. Benefits and Limitations.
Components of an Expert System:
An expert system consists of the following main components:
1. Knowledge base.
2. Inference engine — a reasoning mechanism and heuristics for problem solving (searching techniques).
3. Explanatory component.
4. Human — Machine Interface (Dialogue component).
These components are shown in Fig. 12.1 and Fig. 12.2. The two figures are virtually the same.
Explanation:
It is not acceptable for an expert system to take decisions without being able to explain why it has taken them. Users of these expert systems need to be convinced of the validity of a conclusion before applying it to their domain. They also need to be convinced that the solution is appropriate and applicable in their circumstances.
Knowledge engineers building the expert system also need to examine the reasoning behind decisions in order to assess and evaluate the mechanisms being used. If an explanation component is not provided, it is not possible to judge whether the expert system is working as desired or intended.
The explanation can be generated by creating a goal tree which has been traversed, as illustrated by the following example of an expert system to diagnose skin diseases in dogs:
Rules:
R1:
If the dog is scratching its ears AND the ears are waxy THEN the ears should be cleaned.
R2:
If the dog is scratching its coat AND if insects can be seen in the coat AND if the insects are grey THEN the dog should be treated for lice.
R3:
If the dog is scratching its coat AND if insects can be seen in the coat AND if the insects are black THEN the dog should be treated for fleas.
R4:
If the dog is scratching its coat AND there is hair loss AND there is inflammation THEN the dog should be treated for eczema.
Suppose a dog has insects in its coat and is scratching its coat, and we are required to arrive at the therapeutic conclusion. This is the information known about the scratching dog. The combination of this known information (the D.B.) and the above rules constitutes the knowledge base (KB).
A typical consultation would begin with a request for information. In an attempt to match the conditions of the first rule the system asks "is the dog scratching its ears?", to which suppose the response is 'no'. The system would then attempt to match the conditions of rule R2, asking "is the dog scratching its coat?" ('yes'), "can you see insects in the coat?" ('yes'), and "are the insects grey?" If our response is 'yes' to this question also, the system will inform us that the dog needs delousing.
At this point, if we asked for an explanation, the following style of response, which follows from rule R2, would be given:
If the dog is scratching
And if insects can be seen
And if the insects are grey
Then the dog should be treated for lice.
This traces the reasoning used through the consultation, so that any errors can be identified and justification can be given to the client, if required. The explanation chain shown above is a restatement of the rules used.
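The consultation and its "how" explanation can be sketched in code. This is a minimal illustration built only from rules R1-R4 above; the function names (`consult`, `answer`) and the phrasing of the facts are our own, not from any particular expert system shell.

```python
# The four rules R1-R4, each as (name, conditions, conclusion).
RULES = [
    ("R1", ["the dog is scratching its ears", "the ears are waxy"],
     "the ears should be cleaned"),
    ("R2", ["the dog is scratching its coat", "insects can be seen in the coat",
            "the insects are grey"], "the dog should be treated for lice"),
    ("R3", ["the dog is scratching its coat", "insects can be seen in the coat",
            "the insects are black"], "the dog should be treated for fleas"),
    ("R4", ["the dog is scratching its coat", "there is hair loss",
            "there is inflammation"], "the dog should be treated for eczema"),
]

def consult(answer):
    """Try each rule in turn, asking only for facts not already known,
    and keep a trace of the rule that fired for the 'how' explanation."""
    known = {}                                  # the working database (D.B.)
    for name, conditions, action in RULES:
        fired = True
        for cond in conditions:
            if cond not in known:               # ask each question only once
                known[cond] = answer(cond)
            if not known[cond]:
                fired = False
                break
        if fired:
            # The explanation is a restatement of the rule used.
            trace = [f"If {c}" for c in conditions] + [f"Then {action}"]
            return action, trace
    return None, []

# The consultation described in the text:
responses = {"the dog is scratching its ears": False,
             "the dog is scratching its coat": True,
             "insects can be seen in the coat": True,
             "the insects are grey": True}
action, how = consult(lambda q: responses[q])
```

Here `action` is "the dog should be treated for lice", and `how` restates rule R2 line by line, exactly in the style of the explanation chain shown above.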
In addition to questions such as "how did you reach that conclusion?", the user may require explanatory feedback during a consultation, particularly to clarify what information the system requires. A common request, when the system asks for a piece of information, is "why do you want to know that?" In this case the usual response is to provide a trace up to the rule(s) currently being considered and a restatement of that rule(s).
As an illustration, imagine that in our horror at discovering crawling insects on our dog we had not noted their colour. We might of course ask why the system needs information about the colour of the insects.
The response would be of the form:
You said the dog is scratching
And that there are insects.
If the insects are grey
Then the dog should be treated for lice.
This form of explanation facility is far from ideal, both in terms of the way that it provides the explanation and the information to which it has access. In particular it tends to regurgitate the reasoning in terms of rules and goals, which may be appropriate to the knowledge engineer but is less suitable for the user. Ideally, an explanation facility should be able to direct the explanation towards the skill level or understanding of the user.
In addition, it should be able to differentiate between the domain knowledge which it uses and the control knowledge used to control the search; explanations for users are best described in terms of the domain, those for knowledge engineers in terms of the control mechanism. (The reasoning used here is forward reasoning, but backward reasoning can also be deployed.) The terms 'knowledge engineer' and 'user' will be explained at the close of this section.
Self-training is another goal of expert systems. When an expert system derives a new fact through its inference procedure, this new fact can be added to its KB as well as given to the user. The self-training facility accepts the facts derived by the inference mechanism and compares them with the rules stored in the KB. If the derived facts are not already present in the DB they are added there, giving rise to a dynamic DB (or KB). This is how the knowledge of the expert system is upgraded.
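The self-training facility just described can be sketched as a simple forward-chaining loop in which every derived fact joins the dynamic database. This is an illustrative sketch only; the rule and fact names are hypothetical.

```python
def infer(rules, db):
    """Forward-chain until no rule adds a new fact.
    Any derived fact not already in the database is added to it,
    so the database (and hence the KB) grows dynamically."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in db and all(c in db for c in conditions):
                db.add(conclusion)      # derived fact joins the dynamic DB
                changed = True
    return db

# Hypothetical rules and an initial database of known facts:
rules = [({"scratching coat", "insects in coat", "insects grey"}, "has lice"),
         ({"has lice"}, "needs delousing")]
db = {"scratching coat", "insects in coat", "insects grey"}
infer(rules, db)
```

After the call, `db` also contains "has lice" and "needs delousing": the second rule fires only because the first rule's conclusion was added to the database, which is exactly the upgrading of knowledge described above.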
For these reasons researchers have looked for alternative mechanisms for providing explanation. One approach is to maintain a representation of the problem-solving process used in reaching the solution, as well as the domain knowledge. This provides a context for the explanation: the user knows not only which rules have been fired but what hypothesis was being considered.
The XPLAIN system links the process of explanation with that of designing the expert system. The system defines a domain model (including facts about the domain of interest) and domain principles, which are heuristics and operators (meta-rules or control rules). This represents the strategy used in the system. An automatic programmer then uses these to generate the system.
This approach ensures that explanation is considered at the early specification stage, and allows the automatic programmer to use one piece of knowledge in several ways (in the problem-solving strategy, in providing explanation, etc.). Approaches such as this recognise the need for meta-knowledge in providing explanation facilities. To do this successfully, expert systems must be designed to provide explanation.
Dialogue Component:
The dialogue component is closely linked to the explanation component. On one side of the dialogue the user may question the system at any point, using the control strategies; on the other side the system must be able to question the user to establish the existence of evidence. The dialogue component has two functions: it determines which question to ask next (using meta-rules and the reasoning mechanism to establish what information is required to fire particular rules) and it keeps a record of the previous questions.
This ensures that unnecessary questions are not asked. For example, it is not helpful to ask a person for the model of his or her car when the user has already said that he or she does not possess a car. It can be further illustrated with a real example from the B. Tech. scheme of various universities. Unless a student clears all papers of the fourth semester, he is not allowed to appear in the eighth semester. So the question of which papers in the current semester he is to clear does not arise if he has not passed the fourth semester.
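The bookkeeping just described, recording previous answers and skipping questions whose precondition has already failed, can be sketched as follows. The class and question names are illustrative assumptions, not from any existing system.

```python
class Dialogue:
    """Sketch of a dialogue component that never asks a question twice
    and skips follow-ups whose precondition was answered 'no'."""

    def __init__(self, ask):
        self.ask = ask          # callback that puts a question to the user
        self.answers = {}       # record of the previous questions

    def query(self, question, requires=None):
        # A follow-up is pointless if its precondition already failed
        # (e.g. asking for the car model of a user with no car).
        if requires is not None and self.answers.get(requires) is False:
            return None
        if question not in self.answers:    # never ask the same thing twice
            self.answers[question] = self.ask(question)
        return self.answers[question]

# Usage: the user has no car, so the model question is never put to them.
d = Dialogue(lambda q: {"owns a car": False}.get(q, True))
d.query("owns a car")                             # answered 'no'
d.query("model of car", requires="owns a car")    # skipped, returns None
```

The same pattern covers the B. Tech. example: the question about current-semester papers would carry `requires="cleared fourth semester"`.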
The dialogue could be one of three styles:
1. System controlled, where the system drives the dialogue through questioning the user.
2. Mixed control, where both user and system can direct the consultation.
3. User controlled, where the user drives the consultation by providing information to the system.
Most expert systems use the first of these; the rest use the second. This is because the system needs to be able to elicit information from the user when it is needed to advance the consultation; in a user-controlled dialogue the system might not get all the information it requires. Ideally a mixed dialogue should be provided, allowing the system to request further information and the user to ask for "why?" and "how?" explanations at any point.
Architecture of an Expert System:
The architecture of an expert system can be better understood by comparing it with conventional programming.
The main components of expert system software are the knowledge base, the inference engine (including the explanation facility) and the user-interface mechanism, whereas the main components of conventional software are data (a database), program code, an interpreter/compiler (though not obvious to the user) and a sparse user interface.
The same difference is exhibited in the table.
By comparing expert systems with software programming, we see that an expert system can be considered an advanced form of programming.
As shown in the table, the terminology of expert systems can be mapped on a one-to-one basis to that of software programs. The knowledge base matches the program code of conventional software, but a knowledge base is not merely a database: like a database it can be queried and updated, yet unlike a database it is executable.
For an expert system to give intelligent advice about a particular domain it must have access to as much domain knowledge as possible. The component of an expert system which contains the system's knowledge in codified form is called its knowledge base. This element of the system is so critical to the way most expert systems are constructed that they are also popularly known as knowledge-based systems.
A knowledge base contains both declarative knowledge (facts about objects, events and situations) and procedural knowledge (information about courses of action). Depending on the form of knowledge representation chosen, the two types of knowledge may be separated or integrated. Although many knowledge representation techniques have been used in expert systems, the most prevalent form is the rule-based production system approach.
Expertise is the extensive, task-specific knowledge acquired from training, reading and experience. It enables experts to make better and faster decisions than non-experts in solving complex problems. Expertise takes a long time (usually several years) to acquire. The goal of an expert system is to transfer expertise from an expert to a computer and then to the user.
This process involves four activities:
1. Knowledge acquisition,
2. Knowledge representation,
3. Knowledge inferencing and
4. Knowledge transfer to the user.
Like an interpreter, which evaluates the statements of a source program and executes them, the inference engine takes the statements in the knowledge base and executes them, using its search-control and reasoning mechanisms.
Artificial Intelligence/Expert System languages such as LISP, PROLOG and Smalltalk can be used to build an empty package consisting of a knowledge base, inference engine and user interface. This package is called an expert system tool or shell (which is not an extension of an operating system such as UNIX). It is a tool to facilitate the rapid development of an expert system. Briefly, expert system shells are high-level programming languages with unconventional conveniences such as explanation or tracing facilities.
In rule-based systems the procedural knowledge, in the form of heuristic 'if-then' production rules, is completely integrated with the declarative knowledge. However, not all rules pertain to the system's domain; some production rules, called meta-rules, pertain to other production rules (or even to themselves). A meta-rule (a rule about rules) helps guide the execution of an expert system by determining under what conditions certain rules should be considered instead of other rules.
For example, the following meta-rule is from MYCIN:
IF (i) there are rules which do not mention the current goal in their premise, and
(ii) there are rules which mention the current goal in their premise,
THEN it is definite that the former should be done before the latter.
It may be noted that this meta-rule does not contain knowledge related to MYCIN's domain, but it does contain knowledge which helps the system determine the order in which the rules should be executed. Further, the knowledge is encoded in a symbolic form from which conclusions are drawn through logical or plausible inference rather than by calculation.
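The effect of such a meta-rule can be sketched as a function that orders the rule agenda. The rule names and premises below are hypothetical, chosen only to illustrate the ordering; this is not MYCIN's actual rule set.

```python
def order_rules(rules, goal):
    """Apply the meta-rule: rules which do not mention the current goal
    in their premise are considered before rules which do ('the former
    should be done before the latter')."""
    without_goal = [r for r in rules if goal not in r["premise"]]
    with_goal = [r for r in rules if goal in r["premise"]]
    return without_goal + with_goal

# Two hypothetical rules; only R10 mentions the current goal in its premise.
rules = [{"name": "R10", "premise": ["fever", "identity of organism"]},
         {"name": "R11", "premise": ["fever"]}]
agenda = order_rules(rules, "identity of organism")
```

Here `agenda` places R11 before R10, even though R10 appeared first, because R11's premise does not mention the current goal.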
Simply having access to a great deal of knowledge does not make us experts; we must know how and when to apply the appropriate knowledge. Similarly, just having knowledge does not make an expert system intelligent. The system must have another component which directs the application of the knowledge base. This component of the system is known as the control structure, rule interpreter or inference engine.
The inference engine uses the information provided to it by the knowledge base and the user to infer new facts. It also decides which heuristic search techniques are used in determining how the rules in the knowledge base are to be applied to the problem. In fact, the inference engine 'runs' an expert system, determining which rules are to be invoked, accessing the appropriate rules, executing them and determining when an acceptable solution has been found.
The reasoning process can be simple or complicated, depending on the structure of the knowledge base. If the knowledge base consists of simple rules and facts, forward chaining suffices. However, for a knowledge base which consists of structured frames and rules and unstructured logic (facts, data and variables), both sophisticated forward and backward chaining with well-thought-out search strategies may be required. Other methods such as problem reduction, pattern matching and unification are also used to implement the reasoning process.
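To contrast with the data-driven forward chaining used in the dog-diagnosis example, backward chaining can be sketched as a goal-driven recursion: start from the goal and recurse on the premises of any rule that concludes it, bottoming out at the facts in the database. The rules and facts below are illustrative assumptions.

```python
def prove(goal, rules, facts):
    """Return True if the goal follows from the facts via the rules,
    working backwards from the goal (goal-driven reasoning)."""
    if goal in facts:                       # goal is already an established fact
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(prove(p, rules, facts) for p in premises):
            return True                     # every premise was proved in turn
    return False

# Hypothetical rules and facts:
rules = [(["scratching coat", "insects grey"], "has lice"),
         (["has lice"], "needs delousing")]
facts = {"scratching coat", "insects grey"}
prove("needs delousing", rules, facts)
```

The call returns True: to prove "needs delousing" the system sub-goals on "has lice", whose premises are both in the database. (This naive sketch assumes the rules contain no cycles.)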
Mostly, the knowledge is not intertwined with the control structure. This has a value-added advantage: the same inference engine which works well in one expert system may work just as well with a different knowledge base to make another expert system, thus reducing expert system development time.
For example, the inference engine of MYCIN is available separately as EMYCIN (Essential MYCIN). EMYCIN has been used with different knowledge bases to create many new expert systems, eliminating the need to develop a new inference engine. This will be taken up after having studied expert system shells.
User Interface:
Even the most sophisticated expert system is worthless if the intended user cannot communicate with it. The component of an expert system which communicates with the user is known as the user interface. The communication performed by a user interface is bi-directional, as shown in Fig. 12.4.
At the simplest level we, as user, must be able to describe our problem to the expert system and the system must be able to respond with its recommendations. In practice a user interface for the system is expected to explain its ‘reasoning’ or the system may request additional information about the problem from us.
During the design of user interface the important points are:
1. Design of dialogue
2. Input/output modes and their effectiveness.
Most user interfaces make heavy use of techniques developed in another artificial intelligence discipline, natural language processing. These techniques let us communicate with an expert system in ordinary English and enable the computer to respond in the same language. This type of user interface is sometimes called a natural language front-end. Graphics and multimedia interfaces are also used, though they are not very common. This quality is called transparency, as compared to the black-box approach of an algorithm.
The process of building an expert system is often called knowledge engineering. It typically involves a special form of interaction between the expert-system builder, called the knowledge engineer, and one or more human experts in some problem area. The knowledge engineer 'extracts' from the human experts their knowledge in the form of procedures, strategies and rules of thumb for problem solving, and builds this knowledge into the expert system (Fig. 12.5). The result is a computer program which solves problems in much the same manner as the human experts.
The following quote from Paul E. Johnson (1983), a scientist who has spent many years studying the behaviour of human experts, quite accurately describes what we mean by the term expert:
An expert is a person who, because of training and experience, is able to do things the rest of us cannot; experts are not only proficient but also smooth and efficient in the actions they take. Experts know a great many things and have tricks and caveats for applying what they know to problems and tasks; they are also good at ploughing through irrelevant information in order to get at basic issues, and they are good at recognising problems they face as instances of types with which they are familiar.
Underlying the behaviour of experts is the body of operative knowledge we have termed expertise. It is reasonable to suppose, therefore, that experts are the ones to consult when we wish to represent the expertise which makes the behaviour of expert systems possible.
Knowledge engineering relies heavily on the study of human experts in order to develop intelligent, skilled programs.
The basic idea of intelligent problem-solving is that a system must construct its solution selectively and efficiently from a space of alternatives. The expert needs to search this space selectively, with as little unfruitful activity as possible. His knowledge spots useful data early, suggests promising ways to exploit them, and helps avoid low-payoff efforts by pruning blind alleys as early as possible. Ultimately the expert system achieves high performance by using knowledge to make the best use of its time.
Thus, the main difference between artificial intelligence and expert systems is that the former provides an approach to problem solving while the latter is the product.
Benefits and Limitations of Expert Systems:
Some problems, such as manufacturing scheduling, cannot be dealt with adequately using mathematical algorithms, but they can be dealt with intelligently using expert systems. Expert systems allow experts to be experts: as expert systems are built from the knowledge of domain experts, the experts themselves can focus on the more difficult problems of their speciality.
This, in turn, will result in solutions to new problems, and the range of problems which they can solve will widen. Although expert systems lack the robust and general intelligence of human beings, they can provide benefits to organisations if their limitations are well understood; only certain classes of problems can be solved using expert systems.
Many expert systems require lengthy and expensive development efforts. Moreover, expert systems lack the breadth of knowledge and the understanding of fundamental principles of a human expert. Their knowledge bases are quite narrow, shallow and brittle.
In fast-moving fields such as medicine or computer science, keeping the knowledge base up to date is a critical problem. Expert systems can only represent limited forms of IF-THEN knowledge, the kind which exists primarily in textbooks. There are no adequate representations for deep causal models or temporal trends. Expert systems cannot yet replicate knowledge which is intuitive, based on analogy or on common sense.
Contrary to early promises, expert systems do best at automating lower-end clerical functions. They can provide electronic checklists for lower-level employees in service bureaucracies such as banking, insurance, sales and welfare agencies.
Expert systems are of limited applicability to managerial problems, which generally involve drawing facts and interpretations from divergent sources, evaluating the facts and comparing one interpretation with another, rather than simple analysis or classification. Expert systems typically perform very limited tasks which a professional can perform in a few minutes or hours. Hiring or training more experts may be less expensive than building an expert system.