Facilitator Agent in Mindmap Building Tool




This section describes the design and implementation of the facilitator agent in the Mindmap Building Tool.

Mindmap Building Tool

The Mindmap program is a collaborative tool in which distributed users model their conceptual understanding of a topic. The workspace consists of an individual and a shared Mindmap, and users can switch between the two maps by pressing the “teleview” buttons. Based upon their individual contributions in the personal Mindmaps, the group members must negotiate and agree upon one common representation (Buzan, 2000). The main purpose of the program is to give users a meeting place where they can brainstorm and build both individual and joint Mindmaps.

The agent has two roles in this program. The first is to monitor user actions, which includes saving data into log files and updating the internal representation of the environment. The second role is to function as a coordinator: the agent does not contribute to the tasks themselves but facilitates the process. To accomplish this, the agent analyzes the data contained in the internal representation, which consists of user actions, and compares it with earlier agent interactions. This may result in the agent giving warning messages, providing meta-information about the collaboration process, taking initiatives to start discussions, and encouraging passive members to participate more.

Facilitator Agent Design

To reduce the number of inappropriate messages presented to users, we designed a two-layer structure (content layer and presentation layer) in the agent architecture, which was implemented in three modules: the advice generation, advice selection, and presentation selection modules (Figure 8). The advice generation module is responsible for generating the content of advice. The advice selection module selects which advice to present to students, while the presentation selection module is concerned with how the chosen content should be presented. All modules utilize the agent context in the database (sequential information about user actions, agent interactions, and corresponding user reactions throughout the session) and the rules in the knowledge base, which capture the expertise of instructors in facilitating collaboration.

Figure 8: Architecture of facilitator agent in Mindmap building tool

All user actions considered important are automatically stored as the agent context in the database. An aggregation of user actions at some time t makes up an internal contextual state in the agent context. The agent evaluates the current context state and how it differs from the last three states (t - 1, t - 2, t - 3). Based on the gathered information, the facilitator compares the refined state with the content rules in the database. Each rule that fires is evaluated independently, which means that the agent can generate several possible outputs. All outputs are seen as opportunities to interact with the users. The agent then transfers focus to the advice selection module.
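As an illustration, the rule-firing step could be sketched as follows. The state keys, the rule condition, and all names here are hypothetical; the chapter does not specify a concrete rule format.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A context state aggregates user actions at one point in time,
# e.g. {"nodes_added": 0, "messages_sent": 3}.
State = Dict[str, int]

@dataclass
class ContentRule:
    advice_id: str
    initial_importance: float
    # The condition sees the current state and up to three previous states.
    condition: Callable[[State, List[State]], bool]

def fire_rules(rules: List[ContentRule], current: State,
               history: List[State]) -> List[ContentRule]:
    """Evaluate every rule independently against the current context state
    and the last three states; each rule that fires yields a candidate
    output (an opportunity to interact with the users)."""
    recent = history[-3:]
    return [r for r in rules if r.condition(current, recent)]

# Example rule: a user has added no Mindmap nodes in the recent states.
passive_user = ContentRule(
    advice_id="encourage_participation",
    initial_importance=0.6,
    condition=lambda cur, prev: all(
        s.get("nodes_added", 0) == 0 for s in [cur] + prev),
)

fired = fire_rules([passive_user], {"nodes_added": 0},
                   [{"nodes_added": 0}, {"nodes_added": 0}])
```

Because rules are evaluated independently, several candidates can fire at once; the advice selection module then decides among them.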

In the advice selection module, the facilitator agent specifies the importance of each piece of advice. The rating of a rule is calculated as a function of that rule’s initial importance rating, modified by its frequency of appearance in the agent context and by user reactions. To illustrate, the contextual history can tell the agent that a certain piece of advice has already been given to a specific user twice before. The agent uses this information to degrade the potential output to a less important level. Thus, the same request for output can have a varying level of importance over time. When the rating phase is over, the advice selection module compares the candidates and displays the advice with the highest score.
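The rating function could, for instance, degrade importance geometrically with the number of earlier presentations. The chapter does not give a concrete formula, so the decay factor and values below are assumptions for illustration only.

```python
def rate_advice(initial_importance: float,
                times_shown_before: int,
                decay: float = 0.5) -> float:
    """Return the current importance of an advice: the rule's initial
    rating, degraded for each earlier presentation of the same advice
    to the same user (decay factor is an assumed parameter)."""
    return initial_importance * (decay ** times_shown_before)

# An advice already shown twice drops from 0.8 to 0.8 * 0.5**2 = 0.2,
# so a fresh competing advice rated 0.5 would now win.
score = rate_advice(0.8, times_shown_before=2)
```

In a fuller implementation, user reactions (e.g. dismissing a dialogue box immediately) would add a further modifier to this score.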

When the facilitator has decided which advice to present, it has to decide how this advice should be presented. The rated score of an advice determines to some degree which presentation form to use. Possible presentation techniques are either synchronous, such as dialogue boxes or fixed agent output areas, or asynchronous (e-mail or SMS). For urgent advice, the facilitator uses dialogue boxes, while less important advice is presented in the fixed agent output area. The presentation selection module is also designed to choose appropriate colors for presenting different advice, where strong (and sometimes blinking) background colors can indicate a high degree of importance. In addition, the agent can supply critical advice with a warning alarm. Sounds can direct the attention of a learner directly to the agent, which is especially useful if the agent notices that a user is working in another program while important events are happening in the Mindmap. On the other hand, the facilitator agent should be careful when presenting advice with sounds, because they can be intrusive and create unintended breakdown situations.
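A minimal sketch of how a rated score might map to a presentation form; the numeric thresholds are invented for the example and are not taken from the chapter.

```python
def select_presentation(score: float) -> str:
    """Map an advice's rated score to a presentation form
    (thresholds are assumed values for illustration)."""
    if score >= 0.8:
        return "dialogue_box"       # urgent: interrupts the user
    if score >= 0.4:
        return "agent_output_area"  # visible but non-intrusive
    return "asynchronous"           # e-mail or SMS after the session
```

Color and sound would be chosen analogously, as additional attributes of the selected form rather than as separate channels.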

For example, if the advice presented in the dialogue box is evaluated for the third time in a row, it gets a lower importance rating based on the agent context, and thus probably no longer qualifies to be presented in a dialogue box. If the advice competes with other potential outputs, those outputs might now be rated with a higher score, thus beating the previously displayed advice. It is also important to ensure that low-rated advice is not withheld indefinitely. To achieve this, all potential advice that is not displayed has its level of importance increased each time it is ignored. The danger of this tactic is that when such advice is finally displayed, it would always appear in a dialogue box, thus disturbing the collaboration. To prevent this, each rule that generates a certain advice must also contain a threshold for what level of importance that advice must reach before it can take a certain form. To summarize, three factors decide an advice’s level of importance:

  1. An advice’s initial level of importance

  2. The advice’s current level of importance, calculated by the rule rating function

  3. An independent estimate for what degree of importance an advice must reach to take a certain form
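The three factors above can be combined in a small sketch: ignored advice is boosted so that it is not starved, but a per-advice threshold still decides whether it may ever appear in a dialogue box. The boost value, threshold, and all names are assumed parameters, not values from the chapter.

```python
from dataclasses import dataclass

@dataclass
class Advice:
    advice_id: str
    importance: float          # current level (factor 2)
    dialogue_threshold: float  # level required for a dialogue box (factor 3)

def boost_ignored(advice: Advice, boost: float = 0.1) -> None:
    """Ignored advice gains importance so it is not withheld forever."""
    advice.importance += boost

def choose_form(advice: Advice) -> str:
    """Even repeatedly boosted advice cannot take an intrusive form
    unless it reaches the threshold its generating rule defined."""
    if advice.importance >= advice.dialogue_threshold:
        return "dialogue_box"
    return "agent_output_area"

# A minor hint starts low (factor 1) and is ignored three times;
# its boosted importance (0.6) still stays below its threshold (0.9),
# so it is shown non-intrusively rather than in a dialogue box.
hint = Advice("suggest_discussion", importance=0.3, dialogue_threshold=0.9)
for _ in range(3):
    boost_ignored(hint)
form = choose_form(hint)
```

The threshold thus acts as a ceiling on intrusiveness that the starvation-avoidance boost cannot override.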

We believe that such an approach is necessary when designing an agent architecture that regulates both the frequency of agent interaction and how this interaction is presented to collaborative learners/users.

Evaluation

The Mindmap Building Tool has been tested iteratively during development by a test group consisting of two Ph.D. students and one expert user. Testing the implemented agent had not been the focus of this group, however, and we wanted more hands-on data about how the agent prototype would work in a distributed collaborative setting. Specifically, we wanted to see how students react to the system and the agent without any prior knowledge of the tools. Thus, we conducted a formative usability test using techniques such as observation, concurrent verbal protocols, and interviews focusing on attitude measures (Booth, 1995).

For the formative usability test, three Master’s students and one Ph.D. student collaborated to solve a common problem. First, the participants were given a short briefing (about 20 minutes) on the ideas behind the system. Then they went into different computer labs to simulate a distributed setting. The assignment was to brainstorm individually about how to design an intelligent system and then meet in the shared Mindmap workspace to build a joint, agreed-upon solution to the initial problem. The test was arranged at the end of the course INFO281 (Artificial Intelligence), so all participants were knowledgeable to some extent about the topic. The session lasted about four hours.

The feedback from the participants was, in general, positive. At the same time, we received criticism about the functionality and presentation of the facilitator agent. Further analysis of the collected data is being carried out.






Designing Distributed Learning Environments with Intelligent Software Agents
ISBN: 1591405009
Year: 2003
Pages: 121
