Multiagent System Architecture




A COLE is a complex system in which human learning activity continuously generates new patterns of behavior and knowledge. In a single intelligent system, it is difficult to assemble all the knowledge (expertise) needed to perform the complex tasks normally done by humans. Distributed architectures therefore seem the most appropriate for handling intelligence in complex systems.

The agents of a COLE must be grounded in this dynamic environment and learn along with it in order to help its users. This requirement suggests a distributed open architecture that allows the system to adapt itself to different contexts by adding new services. Multiagent systems have a distributed nature that allows local reasoning and the dynamic integration of new agents; they are thus able to evolve with complex systems of the kind described above.

Multiagent systems can be classified according to their architectures (overall organization), the degree of autonomy of each agent, the communication protocols they use, or their complexity. A major distinction separates reactive from autonomous agents. Reactive agents are simple and maintain no representation of their environment; they interact through stimulus-response behavior (Ferber & Drogoul, 1992), and intelligent behavior can emerge from a large population of such agents (Brooks, 1991). An autonomous agent, on the other hand, is complex:

  • It is a specialized system and can function by itself. It has a (partial) model of its environment and acts in accordance with this model. Complex agents may have intentions to guide their behavior (Scalabrin et al., 1996).

  • It is conceived in order to satisfy objectives automatically by interacting with the environment in which it has been placed (Beer, 1992).

  • It is an agent with an existence that is independent of that of other agents (Demazeau & Muller, 1990).

  • It can act without the direct intervention of human beings or other agents and has some degree of control over its actions. An autonomous agent has an internal state, can make decisions, has preferences and its own objectives, is able to make decisions about its objectives (it can solve internal conflicts), and may adopt other agents’ objectives by using criteria based on its own objectives (Castelfranchi, 1990).
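This contrast can be made concrete with a small sketch. The toy Python below (class names and rules are invented for illustration, not drawn from any of the cited systems) shows a reactive agent mapping stimuli directly to responses, while an autonomous agent consults an internal model and its own goal before acting:

```python
# Toy contrast between a reactive and an autonomous agent.
# Names and rules are illustrative, not taken from the cited literature.

class ReactiveAgent:
    """Pure stimulus-response: no internal state, no model of the world."""
    RULES = {"obstacle": "turn", "free": "advance"}

    def act(self, stimulus):
        return self.RULES.get(stimulus, "wait")


class AutonomousAgent:
    """Keeps a (partial) world model and its own goal; decisions use both."""

    def __init__(self, goal):
        self.goal = goal            # the agent's own objective
        self.world_model = {}       # (partial) model of the environment

    def perceive(self, key, value):
        self.world_model[key] = value

    def act(self):
        # Decide from the internal model, not from the raw stimulus alone.
        if self.world_model.get("obstacle_ahead"):
            return "replan"
        return f"move_toward:{self.goal}"


reactive = ReactiveAgent()
autonomous = AutonomousAgent(goal="library")
autonomous.perceive("obstacle_ahead", False)
print(reactive.act("obstacle"))   # turn
print(autonomous.act())           # move_toward:library
```

The reactive agent's whole behavior is its rule table; the autonomous agent can produce different actions for the same stimulus depending on its model and goal.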

Wooldridge and Jennings (1995) identified a set of properties common to the different classes of agents. For Wooldridge and Jennings, an “agent” is a physical computational system or (more frequently) a logical one that has the following properties:

  • Autonomy: Agents should be able to perform the majority of their problem- solving tasks without the direct intervention of humans or other agents, and they should have a degree of control over their own actions and their own internal state.

  • Social ability: Agents should be able to interact, when they deem appropriate, with other software agents and humans in order to complete their own problem solving and to help others with their activities, where appropriate.

  • Responsiveness: Agents should perceive their environments (which may be the physical world, a user, a collection of agents, the Internet, etc.) and respond in a timely fashion to changes that occur in it.

  • Proactiveness: Agents should not simply act in response to their environments; they should be able to exhibit opportunistic, goal-directed behavior and take the initiative when appropriate.
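These four properties can be read as a minimal agent interface. The skeleton below maps each property to a method; all names are hypothetical (Wooldridge and Jennings define no such API), and this is only one way to give the properties concrete hooks:

```python
# Minimal agent skeleton mapping each Wooldridge & Jennings property
# to a method. All names and behaviors here are illustrative.

class Agent:
    def __init__(self, name, goals):
        self.name = name
        self.state = {}            # autonomy: private internal state
        self.goals = list(goals)   # proactiveness: the agent's own goals
        self.inbox = []

    # Social ability: exchange messages with other agents.
    def tell(self, other, message):
        other.inbox.append((self.name, message))

    # Responsiveness: react in a timely way to a perceived change.
    def perceive(self, event):
        self.state["last_event"] = event
        if event == "deadline_near":
            return "reprioritize"
        return "noted"

    # Proactiveness: take the initiative from goals, not only from events.
    def next_action(self):
        return ("pursue", self.goals[0]) if self.goals else ("idle", None)


a, b = Agent("tutor", ["explain_topic"]), Agent("learner", [])
a.tell(b, "hello")
print(b.inbox)          # [('tutor', 'hello')]
print(a.next_action())  # ('pursue', 'explain_topic')
```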

The various systems proposed today differ in their overall architectures, communication possibilities, and complexity of the basic agents.

  • Blackboard systems allow several specialists (often called “knowledge sources”) to interact through shared data (posted on the blackboard). Normally, communication occurs only through the shared data and leads to a form of strong coupling and possibilities of bottlenecks (Gasser et al., 1987; Hayes-Roth, 1988).

  • In federated multiagent systems (Genesereth & Ketchpel, 1994), complex agents called “facilitators” organize the work among simpler agents that notify the facilitator of the tasks they are able to handle, and, when an agent sends a request to the facilitator, the latter finds a competent agent to execute the task. Some examples of such architectures are the ABSI (Singh, 1994), the SHADE matchmaker (Kuokka & Harada, 1995) used in the SHADE project (McGuire et al., 1993), and the Knowledgeable Community (Nishida & Takeda, 1993). Facilitator architectures rationalize communication resources. However, because a facilitator operates as a bridge among agents, its failure may prevent communication among them.

  • “Democratic” multiagent systems like the ARCHON project (Cockburn & Jennings, 1995) or the OSACA approach (Scalabrin, 1996) gather agents that all have the same status (i.e., all are first-class agents).
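As an illustration of the federated style, the toy facilitator below keeps a registry of which agent has advertised which task and routes each request to a competent agent. The single point of failure noted above is visible as the one shared registry. All class and task names are invented for illustration; this is not the ABSI or SHADE API.

```python
# Toy facilitator for a federated multiagent system. Agents advertise
# the tasks they can handle; the facilitator routes incoming requests
# to a competent agent. Illustrative only; not the ABSI or SHADE API.

class Facilitator:
    def __init__(self):
        self.registry = {}               # task name -> list of agents

    def advertise(self, agent, tasks):
        for task in tasks:
            self.registry.setdefault(task, []).append(agent)

    def request(self, task, payload):
        providers = self.registry.get(task)
        if not providers:
            raise LookupError(f"no agent can handle {task!r}")
        return providers[0].handle(task, payload)   # naive routing policy


class WorkerAgent:
    def __init__(self, name):
        self.name = name

    def handle(self, task, payload):
        return f"{self.name} did {task}({payload})"


f = Facilitator()
f.advertise(WorkerAgent("grader"), ["grade_quiz"])
print(f.request("grade_quiz", "quiz-7"))   # grader did grade_quiz(quiz-7)
```

In a blackboard system the shared registry would instead be shared data that every knowledge source reads and writes, and in a "democratic" system there would be no central routing component at all.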

Agents perform collective actions (Bond & Gasser, 1988; Wooldridge & Jennings, 1994). Communication is an important issue and is usually asynchronous and performed by means of various protocols. For example, the Knowledge Query and Manipulation Language (KQML) is a language using performatives that may express agents’ beliefs, needs, and preferred modalities of communication (Finin et al., 1993). The Cooperation Language (CooL) allows agent communication via a set of message types, called “cooperation primitives” (Kolb, 1995). The principle of cooperation in CooL is the Contract-Net Protocol (Smith, 1980).
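The Contract-Net Protocol can be sketched as a single announce-bid-award cycle. The code below is a strong simplification of Smith (1980): it compresses announcement, bidding, and award into one function call, and the field names and cost-based award rule are illustrative assumptions.

```python
# Simplified Contract-Net round: a manager announces a task, contractors
# bid on it, and the manager awards the task to the best (lowest-cost)
# bidder. A simplification of Smith (1980); names are illustrative.

class Contractor:
    def __init__(self, name, cost):
        self.name, self.cost = name, cost

    def bid(self, task):
        return {"bidder": self, "task": task, "cost": self.cost}

    def execute(self, task):
        return f"{self.name} completed {task}"


def contract_net(task, contractors):
    # 1. Announce: the task is broadcast to all eligible contractors.
    bids = [c.bid(task) for c in contractors]
    # 2. Award: the manager selects the best bid (here: lowest cost).
    best = min(bids, key=lambda b: b["cost"])
    # 3. Execute: the winning contractor performs the task.
    return best["bidder"].execute(task)


crew = [Contractor("a1", 5), Contractor("a2", 2), Contractor("a3", 9)]
print(contract_net("index-documents", crew))   # a2 completed index-documents
```

In a full implementation the three steps would be asynchronous messages (e.g., KQML performatives) rather than direct calls, and contractors could decline to bid.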

An agent's internal model of the other agents is also an important feature: knowing the other agents allows an agent to determine which of them can answer its needs. The models in the literature capture different aspects of other agents: their skills (Smith & Davis, 1981); their goals and plans (Cammarata et al., 1983); their responsibilities (to reduce task-allocation problems), communication (language, protocol), and beliefs (Cohen & Perrault, 1988); their available resources and needs (Durfee et al., 1987); and their actions (Georgeff, 1983). The internal model allows agents to identify themselves either to other agents (Gasser et al., 1987) or to a facilitator (Kuokka & Harada, 1995). This information can also be written into the agents during the development of the multiagent system (Cockburn & Jennings, 1995).
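Such an internal model is often implemented as an "acquaintance table": for each known agent, a record of what it can do and how to talk to it. The minimal sketch below uses an invented structure (it does not reproduce any of the cited models) that records skills and a preferred protocol per acquaintance:

```python
# Minimal acquaintance model: an agent records, for every agent it
# knows, that agent's skills and preferred protocol, and queries the
# table to find who can answer a need. Structure invented for
# illustration; not one of the cited models.

class AcquaintanceModel:
    def __init__(self):
        self.known = {}   # agent name -> {"skills": set, "protocol": str}

    def learn(self, name, skills, protocol="KQML"):
        self.known[name] = {"skills": set(skills), "protocol": protocol}

    def who_can(self, skill):
        return [name for name, info in self.known.items()
                if skill in info["skills"]]


m = AcquaintanceModel()
m.learn("planner", {"make_plan", "revise_plan"})
m.learn("librarian", {"find_document"}, protocol="CooL")
print(m.who_can("find_document"))   # ['librarian']
```

Richer models from the literature would extend each record with goals, plans, beliefs, or resource availability, as described above.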






Designing Distributed Learning Environments with Intelligent Software Agents
ISBN: 1591405009
Year: 2003
Pages: 121
