A recent editorial in New Architect (Asaravala, 2002) discussed the shift from viewing the Internet as a communications network to regarding it as an application development platform. We are moving beyond simply accessing documents on the Web to accessing applications and services. Sun Microsystems' famous slogan, "the network is the computer," is coming closer to reality as more and more applications run over the network, departing from the traditional computing paradigm in which applications run on individual computers. In his recent book, Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web, Berners-Lee (2000) envisions an intelligent Web where applications become autonomous intelligent agents. He calls this futuristic network of cooperating computers the "Semantic Web." The first step in creating the Semantic Web is to construct or convert data into a form that "machines can naturally understand" (Berners-Lee, 2000, p. 177). This is the same concept discussed earlier in this chapter: ontology is the foundation for multiagent systems. What Berners-Lee has proposed goes beyond the organizational intelligence infrastructure; the Semantic Web, with its cooperating intelligent computers (agents), will become the intelligence infrastructure of society.
Knowledge management (KM) can be broadly defined as the collection of methods and tools for capturing, codifying, storing, organizing, and disseminating knowledge and expertise of the organization. Successful application of knowledge management requires the understanding and constructive use of organizational learning. All traditional systems, including knowledge management systems, need to be integrated with multiagent systems to create an intelligence infrastructure. Thus, we can consider knowledge management as part of the intelligence infrastructure. From the KM-centric point of view, intelligent agents can enhance nearly all functionalities of KM. Agents help in knowledge search and discovery, acquisition and formalization, assembly and delivery, and personalization.
One of the key requirements of an intelligence infrastructure is open-standards-based communication among intelligent agents. A major challenge for agent communication is that, just like humans, computer agents may have different knowledge, abilities, and belief systems. Thus, a common ontology is needed so that agents can understand one another. Furthermore, an agent communication language (ACL) must be standardized so that agents from different parties can interoperate.
The Knowledge Query and Manipulation Language (KQML) was developed by Finin et al. (1994) under the DARPA-sponsored Knowledge Sharing Effort. KQML has an informal semantics, which has resulted in varied implementations. The newer Foundation for Intelligent Physical Agents (FIPA) standard has a formal semantics. While the KQML and FIPA standards are widely accepted in multiagent system research and development, their applicability to an intelligence infrastructure may be limited because they lack facilities for dealing effectively with existing information and knowledge systems. Intelligent agents based on emerging standards such as XML and XML Web services may be more appropriate for an intelligence infrastructure.
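To make the structure of such agent messages concrete, the following is a minimal sketch of how a KQML performative might be assembled as an s-expression. The `:sender`, `:receiver`, `:content`, `:language`, `:ontology`, and `:reply-with` parameters follow KQML's documented syntax; the agent names, the content expression, and the `build_kqml` helper itself are invented for illustration.

```python
def build_kqml(performative, **fields):
    """Render a KQML performative as an s-expression string.

    Underscores in keyword names become hyphens, so reply_with
    serializes as the KQML parameter :reply-with.
    """
    parts = []
    for key, value in fields.items():
        parts.append(":" + key.replace("_", "-") + " " + str(value))
    return "(" + performative + " " + " ".join(parts) + ")"

# A hypothetical query from one agent to another, expressed in KIF
# against a shared "commerce" ontology (names are illustrative only).
msg = build_kqml(
    "ask-one",
    sender="buyer-agent",
    receiver="seller-agent",
    content="(price widget-42)",
    language="KIF",
    ontology="commerce",
    reply_with="q1",
)
```

Note that the `:ontology` parameter is what operationalizes the shared-ontology requirement discussed above: both agents must interpret the content expression against the same vocabulary.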
Ease of use is one of the key factors in information technology diffusion (Davis, 1989). Special emphasis must therefore be given to interface agent design in developing an intelligence infrastructure. In recent years, intelligent agents that communicate with users in natural languages, such as English, have been deployed on a number of commercial websites to provide information, entertainment, and help. For example, Ford Motor Company has used virtual representatives to provide online technical and support assistance to its network of dealers in the U.S. and Canada (Proffitt, 2001). Similarly, Oracle uses a virtual technician to provide tech support, Procter & Gamble used a virtual agent to "humanize" its Mr. Clean brand, and GlaxoSmithKline uses a virtual representative to provide customer service for its Nicorette and Nicoderm products. Companies such as NativeMinds (www.nativeminds.com), Agentland.com (www.agentland.com), Artificial Life (www.artificial-life.com), Extempo (www.extempo.com), and Kiwilogic (www.kiwilogic.com) commercially develop and market customized virtual salespeople that can simulate real-life interactions with humans. Many interface agents have begun to use anthropomorphic user interfaces, with human face images whose facial expressions are synchronized with user activities, such as eye movements that follow the user's mouse cursor. Voice user interfaces (VUIs) are also gaining popularity. Oddcast Media Technologies (http://www.oddcast.com) has talking intelligent agents deployed by well-known companies such as Xerox, BMG, MTV, and Toyota Motors. Vision Point Media (http://www.visionpointmedia.com) provides rich media e-mail with animated characters and voice messages.
The phenomenal growth of the World Wide Web has been attributed to its simplicity and open architecture. XML Web services, with similar simplicity and open standards, are predicted to be the next revolution in network applications. While HTML makes information on the Web accessible to anyone with an Internet connection, XML Web services make services such as transaction processing, business intelligence, and language translation accessible through the Web. IBM has defined Web services as "self-contained, self-describing, modular applications that can be published, located, and invoked across the Web" (http://www-106.ibm.com/developerworks/webservices/). The basic foundation of Web services is XML plus HTTP, supplemented by more specific standards such as SOAP (Simple Object Access Protocol), WSDL (Web Service Description Language), and UDDI (Universal Description, Discovery and Integration).
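The "XML plus HTTP" foundation can be illustrated with a minimal sketch of a SOAP 1.1 envelope, the XML document that would be sent in an HTTP POST body to invoke a Web service. The SOAP namespace below is the standard SOAP 1.1 one; the translation service, its namespace URL, and its operation and parameter names are hypothetical.

```python
import xml.etree.ElementTree as ET

# Standard SOAP 1.1 envelope namespace
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(operation, params, service_ns):
    """Build a minimal SOAP 1.1 envelope wrapping one operation call."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(env, "{%s}Body" % SOAP_NS)
    call = ET.SubElement(body, "{%s}%s" % (service_ns, operation))
    for name, value in params.items():
        ET.SubElement(call, "{%s}%s" % (service_ns, name)).text = str(value)
    return ET.tostring(env, encoding="unicode")

# Hypothetical language-translation service call, of the kind the
# chapter mentions as a candidate Web service.
request = soap_envelope(
    "Translate",
    {"text": "hello", "target": "fr"},
    "http://example.com/translator",
)
```

In a real deployment, the operation and parameter names would come from the service's WSDL description, and the service itself might be located through a UDDI registry; this sketch shows only the envelope that travels over HTTP.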
The open architecture of Web services provides cross-platform support for developing multiagent systems in an intelligence infrastructure. Intelligent agents will be able to traverse seamlessly across corporate network boundaries to consume the potentially vast number of Web services available on the World Wide Web. In the less than two years since their introduction, Web services have gained wide industry support. For example, Google, Inc. has opened its database (of more than two billion Web pages) through Web services, allowing many of its search engine functions to be invoked programmatically by applications from other organizations. There is great potential for intelligent agents to offer and/or consume Web services over the Internet and over Internet standards-based corporate networks.
There are many legal and ethical issues surrounding intelligent agent applications. Although a discussion of the social, emotional, or spiritual aspects of intelligent agents can be provocatively entertaining, we will mention only legal and ethical issues from a technical point of view. Those issues include, but are not limited to, the following:
Authority: Intelligent agents work on behalf of their masters. When a user delegates certain responsibilities to an agent, he/she must specify the boundaries of the authority given to the agent.
Trust: A trust relationship must be established between users and intelligent agents, and between intelligent agents. When a user relinquishes certain responsibilities to an agent, the agent must have verifiable accountability. The agent needs to know what responsibilities entrusted to it can be re-delegated to other intelligent agents.
Security: Both the user and intelligent agents need to be authenticated before any interaction between them. Agents must reveal their identities and their entrusted responsibilities only to authorized users.
Privacy: Intelligent agents should not disclose a user's private information, such as affiliation and e-mail addresses, unless authorized to do so. The user needs to be aware of what kind of information the intelligent agent(s) will exchange with other users or intelligent agents in order to accomplish a given task.
Audit and control: Intelligent agents should behave within a set of guidelines. For example, they must obey their masters, except when carrying out their orders may lead to harmful results (see, for example, Asimov's three laws of robotics). There must be control and audit mechanisms built into the intelligence infrastructure so that the actions of intelligent agents can be traced and corrected if necessary.
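The authority and audit-and-control requirements above can be sketched in code: an agent holds an explicit set of delegated actions (its authority boundary), and every attempted action, permitted or refused, is written to an audit log so it can later be traced. All class, field, and action names here are invented for illustration; a production design would also cover the trust, security, and privacy requirements (authentication, identity disclosure, re-delegation rules).

```python
from dataclasses import dataclass, field

@dataclass
class DelegatedAgent:
    """Toy model of an agent acting within user-specified authority."""
    owner: str                    # the user on whose behalf the agent acts
    authorized_actions: set       # explicit boundary of delegated authority
    audit_log: list = field(default_factory=list)

    def perform(self, action, target):
        """Attempt an action; record the attempt either way."""
        allowed = action in self.authorized_actions
        # Every attempt is logged so behavior can be traced and corrected.
        self.audit_log.append((self.owner, action, target,
                               "done" if allowed else "refused"))
        return allowed

# Hypothetical usage: the user delegates only querying and notifying.
agent = DelegatedAgent(owner="alice", authorized_actions={"query", "notify"})
agent.perform("query", "inventory-db")   # within authority
agent.perform("purchase", "vendor-x")    # outside authority: refused, but logged
```

The key design point is that refusals are logged alongside successes; an audit mechanism that records only permitted actions cannot reveal an agent probing beyond its delegated authority.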