Deliberative agent
A deliberative agent is a type of software agent used mainly in multi-agent system simulations. According to Wooldridge's definition, a deliberative agent is "one that possesses an explicitly represented, symbolic model of the world, and in which decisions (for example about what actions to perform) are made via symbolic reasoning".
Compared to reactive agents, which can reach their goals only by reacting reflexively to external stimuli, a deliberative agent's internal processes are more complex. The difference lies in the fact that a deliberative agent maintains a symbolic representation of the world it inhabits. In other words, it possesses an internal image of the external environment and is thus capable of planning its actions. The most commonly used architecture for implementing such behavior is the belief-desire-intention (BDI) software model, in which an agent's beliefs about the world (its image of the world), desires (goals) and intentions are internally represented, and practical reasoning is applied to decide which action to select.
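As a rough illustration, the BDI triple can be held in a plain data structure. The following is a minimal Python sketch assuming nothing beyond the description above; all names are invented for illustration and belong to no particular agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class BDIState:
    """Illustrative container for the BDI triple; not any framework's API."""
    beliefs: set = field(default_factory=set)      # the agent's image of the world
    desires: set = field(default_factory=set)      # states of affairs it would like to bring about
    intentions: list = field(default_factory=list) # desires it has committed to pursue

# The agent believes it is in room1 behind a closed door and wants to reach room2.
state = BDIState(beliefs={"at(room1)", "door(closed)"}, desires={"at(room2)"})

# Practical reasoning commits to a desire, turning it into an intention;
# a plan (a list of actions) would then be derived to satisfy it.
state.intentions.append("at(room2)")
```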
There has been considerable research focused on integrating reactive and deliberative strategies, resulting in a compound called the hybrid agent, which combines extensive manipulation of nontrivial symbolic structures with reflexive, reactive responses to external events.
How does a deliberative agent work?
As already mentioned, a deliberative agent possesses (a) an internal image of the outer world and (b) a goal to achieve, and is thus able to produce a list of actions (a plan) to reach that goal. In unfavorable conditions, when the plan is no longer applicable, the agent is usually able to recompute it. The process of computing (or recomputing) a plan is as follows (a schematic sketch follows the list):
- A sensory input is received by the belief revision function, and the agent's beliefs are altered.
- The option generation function evaluates the altered beliefs and current intentions and creates the options available to the agent; the agent's desires are thus constituted.
- The filter function then weighs current beliefs, desires and intentions and produces new intentions.
- The action selection function then receives the intentions from the filter function and decides which action to perform.
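The four steps above can be written down as a single control loop. The sketch below is schematic: the four functions are assumed to be supplied by the agent designer, and their signatures are illustrative assumptions rather than a fixed interface.

```python
def deliberation_cycle(state, percept, belief_revision, generate_options,
                       filter_intentions, select_action):
    """One pass of the cycle described above; the four functions and their
    signatures are assumptions made for illustration."""
    # 1. Sensory input alters the agent's beliefs.
    state.beliefs = belief_revision(state.beliefs, percept)
    # 2. Option generation evaluates the altered beliefs and current
    #    intentions, constituting the agent's desires.
    state.desires = generate_options(state.beliefs, state.intentions)
    # 3. The filter weighs beliefs, desires and intentions and
    #    produces new intentions.
    state.intentions = filter_intentions(state.beliefs, state.desires,
                                         state.intentions)
    # 4. Action selection receives the filtered intentions and decides
    #    which action to perform.
    return select_action(state.intentions)
```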
The deliberative agent requires a symbolic representation with compositional semantics (e.g. a data tree) in all major functions, for its deliberation is not limited to present facts: it constructs hypotheses about possible future states and may also hold information about the past (i.e. memory). These hypothetical states involve goals, plans, partial solutions, hypothetical states of the agent's beliefs, and so on. Evidently, the deliberative process can become considerably complex and computationally expensive.
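To make "compositional semantics" concrete: the same vocabulary that describes present beliefs can be nested to describe states the world has never actually been in. The nested-tuple encoding below is one illustrative choice, not a prescribed format:

```python
# Present beliefs and a hypothetical future state, both built from the
# same atoms by nesting; the encoding is invented for illustration.
present = ("and", ("at", "agent", "room1"), ("door", "closed"))
hypothesis = ("after", ("open", "door"),
              ("and", ("at", "agent", "room1"), ("door", "open")))

def symbols(expr):
    """Walk the expression tree and collect its atomic symbols."""
    if isinstance(expr, tuple):
        return {atom for sub in expr for atom in symbols(sub)}
    return {expr}

# The hypothesis reuses the vocabulary of present beliefs, so the same
# reasoning machinery can be applied to both.
print(symbols(present) & symbols(hypothesis))
```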
History of the concept
Since the early 1970s, the AI planning community has been involved in developing an artificial planning agent (a predecessor of the deliberative agent) that would be able to choose a proper plan leading to a specified goal. These early attempts resulted in the simple planning system STRIPS, an automated planner developed by Richard Fikes and Nils Nilsson in 1971 (the same name was later used for the formal language of the planner's inputs). It soon became obvious that the STRIPS concept needed further improvement, for it was unable to solve problems of even moderate complexity effectively. Despite considerable effort to raise its efficiency (for example by implementing hierarchical and non-linear planning), the system remained rather weak when working with any time-constrained system.
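For a sense of what a STRIPS-style planner manipulates: an operator is classically given by preconditions, an add list and a delete list over sets of ground facts. A minimal sketch follows, with a toy "move" domain invented purely for illustration:

```python
# A STRIPS-style operator; the toy domain is not from the original system.
move = {
    "name": "move(room1, room2)",
    "preconditions": {"at(room1)", "connected(room1, room2)"},
    "add": {"at(room2)"},
    "delete": {"at(room1)"},
}

def apply_operator(state, op):
    """Return the successor state, or None if the operator is inapplicable."""
    if not op["preconditions"] <= state:  # every precondition must hold
        return None
    return (state - op["delete"]) | op["add"]

state = {"at(room1)", "connected(room1, room2)"}
print(apply_operator(state, move))  # {'at(room2)', 'connected(room1, room2)'}
```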
More successful attempts to design planning agents were made in the late 1980s. For example, IPEM (Integrated Planning, Execution and Monitoring system) had a sophisticated non-linear planner embedded. Further, Wood's AUTODRIVE simulated the behavior of deliberative agents in traffic, and Cohen's PHOENIX system was designed to simulate forest fire management.
In 1976, Simon and Newell formulated the Physical Symbol System hypothesis, which claims that both human and artificial intelligence rest on the same principle: symbol representation and manipulation. It follows from the hypothesis that there is no difference in kind between human and machine intelligence, only in quantity and structure: machines are much less complex. Such a provocative proposition inevitably became the object of serious criticism and raised wide discussion, but the problem itself remains essentially unresolved to this day.
Further development of classical symbolic AI proved not to depend on a final verification of the Physical Symbol System hypothesis at all. In 1988, Bratman, Israel and Pollack introduced the Intelligent Resource-bounded Machine Architecture (IRMA), the first system implementing the belief-desire-intention (BDI) software model. IRMA exemplifies the standard idea of a deliberative agent as it is known today: a software agent embedding a symbolic representation and implementing BDI.
Efficiency of deliberative agents compared to reactive ones
The above-mentioned troubles with symbolic AI led to serious doubts about the viability of the concept, and hence to the development of a reactive architecture based on wholly different principles. Developers of the new architecture rejected symbolic representation and manipulation as the basis of artificial intelligence; reactive agents achieve their goals simply by reacting to a changing environment, which makes them computationally modest.

Even though deliberative agents consume far more system resources than their reactive colleagues, their results are significantly better in only a few special situations; in many cases it is possible to replace one deliberative agent with a few reactive ones without losing a substantial deal of the simulation's adequacy. Classical deliberative agents seem most useful where correct action is required, owing to their ability to produce optimal, domain-independent solutions. A deliberative agent often fails in a changing environment, for it is unable to re-plan its actions quickly enough.
See also
- Multi-agent system
- Artificial intelligence
- Software agent
- Intelligent agent