EVALUATION IN AN INSTRUCTIONAL DESIGN SCENARIO


An important contribution to the e-learning arena concerns the way educational contents are designed, located, and delivered on the Internet. Learning objects lead the set of theories devoted to the design and development of learning contents, also known as instructional design (Merrill, 1994). A concrete case of a system for the shared creation of knowledge is one dedicated to the development of learning objects. During this process, a number of instructional designers may wish to contribute; they could, for example, modify the structure of a course or add a learning resource to the course contents. Interaction between authors should be coordinated so that the educational material can be extended or modified, and authors follow a protocol that accommodates their different interaction styles.

Under a constructivist instructional design approach (Koper, 1997), not only teachers but also the receivers of the training (i.e., students) must take part in the creation of educational content. Consumers and designers of learning objects therefore have a participative relationship, in which subordinate agents can also take part in the instructional design process, although the final decision lies with the higher-level agents.

Evaluation Scenario

The evaluation scenario consists of a knowledge mart in which three agents produce knowledge representing a docent coordinator (C1) and two instructors (I1 and I2). The goal of the agents in the mart is the development of an IMS/SCORM (IMS, 2001) learning object: a course named "XML Programming" that fulfills a set of educational objectives. Although IMS standards leave room for describing each part of the learning object (e.g., organizations, resources, metadata), we will restrict the discussion to the ToC structure, devoting the interaction process to it.

When authors submit proposals, each proposal includes the differences between the current and the proposed ToC and refers to the same interaction. The interaction protocol is executed by every receiving author until the proposal is eventually accepted or replaced with a further elaborated proposal. This process continues until some proposal wins all evaluations, an agreement is reached, or some degree of consensus is achieved (depending on the kind of interaction, i.e., competitive, negotiating, or cooperative). Although the authors' behavior is asynchronous, the agents' interaction protocol helps to synchronize their operations.
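As an illustration, such a proposal could be carried as a small structured message; the field names below are hypothetical and only sketch one possible encoding of a ToC difference.

    # Hypothetical encoding of a proposal message; all field names are assumptions.
    proposal = {
        "author": "I1",                        # submitting agent
        "interaction": "xml-programming-toc",  # all proposals refer to the same interaction
        "timestamp": 1,                        # instant of generation, used as a tie-breaker
        "toc_diff": [                          # differences with respect to the current ToC
            ("add_chapter", "XML script programming"),
        ],
    }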

Objectives

These are the educational objectives that define the preference relation used to evaluate proposals about the ToC of the course:

  1. Ability to program XHTML (i.e., XML-generated HTML) web applications

  2. Ability to program server-side web applications

  3. Ability to program XML data exchange applications

The degree of fulfillment of the educational objectives is modeled as a three-component vector x = (x1, x2, x3), with xk ∈ I = [0, 1] for k = 1, 2, 3. Let f : I³ → I be a numerical measure of how well a proposal meets the objectives.
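As a sketch only, f could be taken as a weighted average of the three fulfillment components, which keeps the result within I; the function name and the equal weights below are assumptions, since the text does not specify the form of f.

    def relevance(x, weights=(1.0, 1.0, 1.0)):
        """Hypothetical relevance measure f: I^3 -> I.

        x is the fulfillment vector (x1, x2, x3), each component in [0, 1].
        A weighted average keeps the result in [0, 1]; the equal weights
        are an assumption, since the chapter does not specify f.
        """
        assert len(x) == 3 and all(0.0 <= xk <= 1.0 for xk in x)
        return sum(w * xk for w, xk in zip(weights, x)) / sum(weights)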

Evaluation Criteria

The relevance of a proposal is graded by the fulfillment of the educational objectives described above. If all objectives are equally satisfied, the rank of the agent decides (a coordinator ranks higher than an instructor). If the ranks are also equal, the instant at which the proposal was issued decides. To determine this instant of generation, every proposal includes a time-stamp.

Each proposal p is described by a three-component vector (p1, p2, p3), where:

  • p1 = f(x) measures the degree of fulfillment of the educational objectives.

  • p2 is the numerical rank held by the submitter agent.

  • p3 is a time-stamp.

Notation for Proposals

To simplify the notation, proposals are represented by xij, where xi identifies the author and j is a sequence number ordered by the proposal's instant of generation. Using this notation, the following proposals will be elaborated by the agents in the mart:

  • i11: Create a unique chapter for "XML script programming".

  • i12: Divide "XML script programming" into two new chapters: "Client-side XML script programming" and "Server-side XML script programming".

  • i21: Add a chapter about "Document Type Definitions (DTD)".

  • i22: Add a chapter about "DTD and XML schemas".

  • c11: Add a chapter about "Using XML as data".

Preference Relationship

A preference relationship > is defined between any two proposals p = (p1, p2, p3) and q = (q1, q2, q3).
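One plausible form of this relation, assuming a lexicographic comparison that follows the evaluation criteria above (fulfillment first, then rank, then time-stamp, with an earlier time-stamp assumed preferable), is:

    p > q  ⇔  (p1 > q1) ∨ (p1 = q1 ∧ p2 > q2) ∨ (p1 = q1 ∧ p2 = q2 ∧ p3 < q3)        (1)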

The preference relation given in (1) defines a partial order in which i11 < i12 < i21 < i22 < c11, in accordance with the evaluation criteria.
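The comparison can be sketched in a few lines; the Proposal structure and function names below are assumptions rather than the chapter's actual implementation, and the fulfillment values are invented solely to reproduce the stated ordering.

    from typing import NamedTuple

    class Proposal(NamedTuple):
        fulfillment: float  # p1 = f(x), degree of fulfillment of the objectives
        rank: int           # p2, rank of the submitter (coordinator > instructor)
        timestamp: int      # p3, instant of generation

    def preferred(p: Proposal, q: Proposal) -> bool:
        """True if p > q under the assumed lexicographic relation (1)."""
        if p.fulfillment != q.fulfillment:
            return p.fulfillment > q.fulfillment
        if p.rank != q.rank:
            return p.rank > q.rank
        return p.timestamp < q.timestamp  # earlier proposals assumed preferred

    # Invented fulfillment values chosen only to reproduce i11 < i12 < i21 < i22 < c11.
    i11 = Proposal(0.2, rank=1, timestamp=1)
    i21 = Proposal(0.4, rank=1, timestamp=2)
    i12 = Proposal(0.3, rank=1, timestamp=3)
    i22 = Proposal(0.5, rank=1, timestamp=4)
    c11 = Proposal(0.6, rank=2, timestamp=5)
    assert preferred(i12, i11) and preferred(i21, i12)
    assert preferred(i22, i21) and preferred(c11, i22)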

Sequence of Events

The sequence of events generated by the agents in the mart is made up of three acts, which are depicted in Figure 2 and traced as follows:

  1. I1 starts by sending a proposal, i11. When i11 is about to be consolidated, agent I2 issues a better evaluated proposal, i21. C1 does nothing and silently accepts every proposal that reaches it.

  2. I1 and I2 elaborate two respective proposals, i12 and i22, at approximately the same time during the distribution phase. The proposal from I2 has a better evaluation than I1's, and both are better than those of the first act.

  3. C1 builds and sends the best-evaluated proposal of the scenario, which will eventually win the evaluation.

Figure 2: Sequence of Events for the Learning Object Evaluation Scenario

The series of messages exchanged during act (2) is depicted in more detail in Figure 3 and is described as follows:

  1. Both I1 and I2 receive each other's proposal and begin the distribution phase, thereby starting timeout t. Proposals i12 and i22 also arrive at C1, which is not participating in the process and silently receives them.

  2. I1 compares i22 to i12 and finds that its own proposal has the worse evaluation. It is reasonable for an evaluation of proposal i22 to obtain a higher value than i12 with respect to the second objective described above; concerning the first and third objectives, any relevance function should yield similar values for both proposals, so they would not be decisive. I1 then starts timeout t1, giving i22 a chance to be consolidated. On the other hand, I2 also compares both proposals and reminds I1 of the result by re-sending i22, then extends timeout t in order to give other agents' proposals a chance to arrive.

  3. When timeout t expires, I2 sends a consolidation message for i22 that arrives at every agent in the mart. Upon reception, I1 finishes the protocol, since it was expecting the consolidation of i22. C1 simply accepts the notification.

  4. Finally, at the expiration of t1, I2 is notified about the end of the consolidation phase for i22, and its execution of the protocol finishes successfully. Every agent in the mart will therefore eventually know about the consolidation of the proposal.

Figure 3: Execution Example of the Interaction Protocol
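
A minimal, hypothetical sketch of how a receiving author agent might react during this exchange is given below; the class and method names are assumptions and the timer and messaging machinery is only stubbed out, but the branches mirror steps 1-4 above.

    class AuthorAgent:
        """Simplified author agent for the distribution/consolidation phases (a sketch)."""

        def __init__(self, name, own_proposal, preferred):
            self.name = name
            self.own = own_proposal      # the proposal this agent is defending (may be None)
            self.preferred = preferred   # the preference relation from (1)
            self.awaiting = None         # proposal whose consolidation we are waiting for

        def on_proposal(self, incoming):
            # Distribution phase: compare the incoming proposal with our own.
            if self.own is None or self.preferred(incoming, self.own):
                # The incoming proposal is better evaluated: give it a chance to be
                # consolidated (the text models this with timeout t1).
                self.awaiting = incoming
                self.start_timeout("t1")
            else:
                # Our proposal still wins: remind the sender by re-sending it and
                # extend timeout t, as I2 does in step 2.
                self.resend(self.own)
                self.extend_timeout("t")

        def on_consolidation(self, proposal):
            # Consolidation phase: the protocol finishes for this proposal.
            if proposal in (self.awaiting, self.own):
                self.finish()

        # Stubs standing in for real timer and messaging machinery.
        def start_timeout(self, label): print(f"{self.name}: start timeout {label}")
        def extend_timeout(self, label): print(f"{self.name}: extend timeout {label}")
        def resend(self, proposal): print(f"{self.name}: re-send {proposal}")
        def finish(self): print(f"{self.name}: protocol finished")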

These tests were carried out to examine how far the multi-agent architecture facilitates the coordination of a group of actors producing learning objects. In this educational scenario, measurements of performance and throughput were taken on observable actions and behaviors, covering aspects such as ease of conflict resolution, effectiveness in coordinating the production process, fair participation of members from other groups, overall quality of the final results, and the speed and quality of the generated knowledge.

In this context, the agent-mediated solution has been found to facilitate the following aspects of the distributed creation of learning objects during the instructional design process:

  • Reconcile instructional designers' different paces of creation.

  • Take advantage of designers' different skills, both in the overall domain and in the tools they manage.

  • Reduce the number of conflicts provoked by interdependencies between different parts of the learning objects.

  • In a general sense, avoid duplication of effort.



