Formative Evaluation

Formative evaluation begins during the analysis stage, continues through the selection and design of interventions and, if a pilot stage is included in the intervention plan, may extend into early implementation. Formative evaluation is set apart from summative or confirmative evaluation because it is "a quality control method to improve, not prove ... effectiveness." [25] Formative evaluation is conducted to improve the design of performance intervention packages. The term performance intervention package is defined as "any combination of products and procedures designed to improve the performance of individuals and/or organizations." [26] This section will discuss when and how to use formative evaluation to validate that the performance intervention package is:

  • Designed to do what the designers/developers promise it will do.

  • Grounded in the mission and values of the organization.

  • Aligned with the objectives of the performance improvement effort.

Definition and Scope

Formative evaluation is largely defined by its purpose. By any other name, formative evaluation would be called continuous improvement or quality control. Originally coined to describe a "systematic process of revision and tryout" [27] to improve curriculum and instruction, formative evaluation has become a major technique for ensuring quality and consistency of performance improvement processes. Formative evaluation is diagnostic and is "used to shape or mold an ongoing process ... to provide information for improvement ..." [28] The word improve is key to understanding why formative evaluation is such an important tool in the PT practitioner's toolkit. "The immediate output of formative evaluation is an improved [performance intervention] package that provides consistent results." [29]

Purpose

Traditionally, formative evaluation takes place between the design and implementation of instruction, which would position it between the intervention selection and design phase and the implementation phase of the HPT Model. However, PT practitioners are beginning to take a less traditional look at when to conduct formative evaluation. Here are some of their views:

  • Begin at the beginning, during the analysis stage, and continue throughout all the phases of the HPT Model. [30]

  • Think about formative evaluation as a "continuous process incorporated into different stages of development of the intervention." [31]

  • Integrate formative evaluation with all four levels of Kirkpatrick's Evaluation Model (see Figure 7-5). Formative evaluation is usually associated with level one (immediate reaction) and level two (immediate knowledge and skill acquisition) of Kirkpatrick's model. Integrating formative evaluation with level three (on-the-job transfer) and level four (organizational results) "is consistent with current approaches to performance technology, and provides an opportunity for the designer to become knowledgeable about the workplace, and to use that knowledge to facilitate the transfer of learning from the classroom to the performance context." [32]

  • Consider formative evaluation as "an ongoing procedure for updating and upgrading the (performance improvement) package after it has been implemented in the workplace." [33] For those readers who are involved with computer information systems, the process is similar to maintaining and upgrading a computer system throughout its life cycle. (Long-term formative evaluation is discussed in the section on confirmative evaluation.)
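The integration with Kirkpatrick's levels described above can be kept straight with a simple lookup table. This is an illustrative sketch only; the level names come from the text, but the sample formative question paired with each level is an assumption, not part of the model:

```python
# Kirkpatrick's four evaluation levels, as named in the text. The sample
# formative question attached to each level is illustrative, not canonical.
KIRKPATRICK_LEVELS = {
    1: ("Immediate reaction", "Do performers find the draft materials engaging?"),
    2: ("Knowledge and skill acquisition", "Do tryout scores improve after each revision?"),
    3: ("On-the-job transfer", "Is the draft job aid actually used back in the workplace?"),
    4: ("Organizational results", "Is the pilot moving the targeted business measure?"),
}

for level, (name, question) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): {question}")
```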

Conducting a Formative Evaluation

There are four typical methods for conducting formative evaluation: expert review, one-to-one, small group, and field test. [34] Although these methods are traditional to the field of instructional systems design, they may also be adapted to the PT environment. Keep in mind that the methods described below are used to review the entire performance intervention package (products and procedures) throughout the entire PT process.

1. Expert Review

A content or performance expert provides information that aids in the selection or design of the intervention and/or reviews draft components of the intervention before implementation. The PT practitioner (or evaluator) then "reviews the review," clarifies any remaining issues, and revises the intervention.

2. One-to-One Evaluation

A potential performer or user reviews draft components of the selected or designed intervention before implementation. The PT practitioner takes part in the review and revises the intervention as needed.

3. Small-group Evaluation

Potential performers or users review draft components of the selected or designed intervention before implementation. The PT practitioner may or may not participate directly in the review, but is responsible for establishing what the group will focus on during the review, clarifying issues that arise during the review, and making the necessary revisions.

4. Field Test Evaluation

The selected or designed intervention is tried out with target performers/users before full-scale implementation. This method is frequently followed by a debriefing session involving the PT practitioner, who then makes any necessary revisions.

Alternative Methods

Despite the usefulness and proven validity of traditional methods for conducting formative evaluation, [35] Tessmer lists two major factors that call for alternatives:

  1. Special circumstances such as time or resource pressure, geographic distances, complexity of performance, or political goals may require altering the basic methods.

  2. Computer and electronic communication technologies have created new tools for gathering and evaluating information that expand basic methods for conducting formative evaluation.

Table 7-4 outlines the traditional and the alternative methods suggested by Tessmer.

Table 7-4: TRADITIONAL AND ALTERNATIVE FORMATIVE EVALUATION METHODS

Traditional Method            Alternative Method

1. Expert Review              1-A Self-evaluation
                              1-B Panel Reviews

2. One-to-One Evaluation      2-A Two-on-One Evaluation
                              2-B Think-aloud Protocols
                              2-C Computer Interviewing

3. Small-group Evaluation     3-A Evaluation Meetings
                              3-B Computer Journals and Networks

4. Field-test Evaluation      4-A Computer Journals and Networks
                              4-B Rapid Prototyping

The alternative methods help the PT practitioner to customize traditional formative evaluation processes to fit the context in which the package was designed and will be implemented. No matter which alternative is selected and implemented, the outcome is that the PT practitioner (in the role of evaluator/designer/developer) guides the focus and criteria for the evaluation process and revises the performance improvement package based on input from expert performers. The following discussion is based in part on Tessmer and Thiagarajan. [36]

1-A Self-evaluation (Initial Debugging)

The designer, developer, or several members of the design team evaluate the intervention before presenting it to experts or performers for evaluation. This process is frequently called an internal review and is conducted before presenting material to the client for review (the review-revise-approve cycle) or for external review. For self-evaluation to work effectively, the designer or developer should complete the following tasks:

  • Develop a set of evaluation criteria. The criteria may be the same as or different from the criteria set for the expert, performer, or client review, but it should include all the items the client or external reviewers will focus on, plus any design or development issues that the design team needs to resolve.

  • Set the intervention material aside for several days to gain distance from the intervention's content and intent.

  • Literally become the performer and try out the intervention.

  • Record both positive and negative feedback.

1-B Panel Reviews

The PT practitioner directs and structures the evaluation process, preparing a set of questions to guide two or more groups of experts through their review of the performance improvement package. Ideally the experts review the package before meeting with the PT practitioner so that they can focus on areas of concern during the meeting. The PT practitioner facilitates the meeting and records the outcomes.

2-A Two-on-One Evaluation

Two performers review the performance intervention package with the PT practitioner. The performers discuss their reactions as they move through the processes and products that compose the package.

2-B Think-aloud Protocols

This method involves only one performer at a time. The performer walks through the package and verbalizes all of his or her thoughts and reactions. The PT practitioner or evaluator prompts the performer to continue thinking aloud whenever the performer becomes silent.

2-C Computer Interviewing

Computer interviewing is the visual counterpart of telephone interviewing and a very effective use of e-mail, bulletin board, and chat room technology. The PT practitioner or evaluator can send, retrieve, analyze, and respond to the e-mail, or use a software program that automatically sends questions, collects and analyzes responses to open-ended or closed-ended questions, and even generates and distributes a customized report.
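The automated collection and analysis described above can be approximated in a few lines. This is a minimal sketch, assuming simple closed-ended answers already retrieved from e-mail; the question and responses are hypothetical:

```python
from collections import Counter

def summarize_closed_responses(responses):
    """Tally answers to one closed-ended question and report percentages."""
    counts = Counter(responses)
    total = len(responses)
    return {answer: round(100 * n / total, 1) for answer, n in counts.items()}

# Hypothetical responses to "Is the draft job aid easy to follow?"
responses = ["yes", "yes", "no", "yes", "unsure"]
summary = summarize_closed_responses(responses)
print(summary)  # → {'yes': 60.0, 'no': 20.0, 'unsure': 20.0}
```

A real computer-interviewing tool would wrap this core tally with the sending, retrieval, and report-distribution steps.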

Using bulletin board technology, the PT practitioner or evaluator can post performance improvement package products, procedures, or issues that arise during design or development. Experts or performers then go into the bulletin board area, read the postings, and react to the postings by leaving messages. Bulletin boards allow for ongoing dialogue among and between the PT practitioner or evaluator, the experts, and the performers. This technology is especially helpful during rapid prototyping (see 4-B), when analysis, design, and development of the performance intervention package are conducted simultaneously.

Chat rooms allow the PT practitioner or evaluator to conduct real-time interviews with one or more experts or performers and to print the discussion for further analysis. Chat room facilitation requires practice. When chat rooms contain more than two people, the facilitator must set up protocols to keep the interview focused and allow respondents to complete their responses before another respondent cuts in. The difficulty level rises exponentially with the number of people in the room.

3-A Evaluation Meetings

Evaluation meetings bring performers together to review and discuss the performance improvement package. Tessmer suggests that the PT practitioner or evaluator may attend the meeting or conduct a post-meeting debriefing session with a representative from the group. [37] Thiagarajan views this method as a "hands-on procedure (which) provides feedback on how well the package works in the absence of the designer." [38] Evaluation meetings may be repeated several times to validate revisions to the original package. For example, trainers may hold an evaluation meeting after a workshop to gather feedback from the participants to update the next session.

3-B; 4-A Computer Journals and Networks

In a situation where the performance intervention uses network software, the PT practitioner or evaluator may gather information from online journals. The expert or novice performer uses the software, keeps a journal to record reactions to the software, and makes suggestions for improvement. The PT practitioner or evaluator may then follow up by using some of the computer interviewing methods discussed above.

4-B Rapid Prototyping

Rapid prototyping is an alternative development evaluation process. During the development of the performance improvement package, the designer or developer works on one component at a time, and may simultaneously analyze, design, develop, and implement instead of working in a linear fashion. In the PT environment, the sequence of activities may look as follows:

  1. Analyze, design, and develop one component of the performance improvement package.

  2. Develop the support products required to implement the component.

  3. Field test the component immediately with experts or performers.

  4. Revise as needed.

  5. Repeat the process until the entire performance improvement package is completed.

The formative evaluation process during rapid prototyping is similar to a pilot. However, instead of first reviewing a plan or blueprint of the performance improvement package, the users or experts review an actual working component of the package. Reviewer input is used to revise the prototype and to develop the final version.
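The five-step sequence amounts to a develop-test-revise loop run once per component. Here is a minimal sketch, with hypothetical stand-ins for the field test and revision steps:

```python
def rapid_prototype(components, field_test, revise, max_rounds=5):
    """Field-test and revise each component until its field test raises no
    issues (or max_rounds is reached), then move on to the next component."""
    finished = []
    for draft in components:
        for _ in range(max_rounds):
            feedback = field_test(draft)     # step 3: try it out with performers
            if not feedback:                 # no issues reported: component done
                break
            draft = revise(draft, feedback)  # step 4: revise as needed
        finished.append(draft)
    return finished                          # step 5: repeat until package complete

# Hypothetical stand-ins for a real field test and revision cycle:
def field_test(draft):
    return [] if draft.endswith("(revised)") else ["unclear instructions"]

def revise(draft, feedback):
    return draft + " (revised)"

print(rapid_prototype(["job aid", "workshop module"], field_test, revise))
# → ['job aid (revised)', 'workshop module (revised)']
```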

Advantages and Disadvantages of the Alternative Methods

Table 7-5 was adapted from Tessmer to provide an overview of the advantages and disadvantages of the alternative methods of formative evaluation. Tessmer discusses each advantage and disadvantage at length in the article. [39]

Table 7-5: ADVANTAGES AND DISADVANTAGES OF ALTERNATIVE METHODS OF FORMATIVE EVALUATION

1-A Self-evaluation
    Advantages: easy to conduct; insider's viewpoint
    Disadvantages: not rigorously conducted; sometimes don't "see the forest for the trees"

1-B Panel Reviews
    Advantages: expert dialogue; negotiated agreement
    Disadvantages: may move off task; less independence

2-A Two-on-One Evaluation
    Advantages: performer dialogue; performer agreement; possible time savings
    Disadvantages: no pace/time data; no individual opinions; dialogue distracting

2-B Think-aloud Protocols
    Advantages: data on mental errors; process data
    Disadvantages: intrusive; awkward to use

2-C Computer Interviewing
    Advantages: access to remote subjects; continuous evaluation
    Disadvantages: time-consuming; training required; equipment required

3-A Evaluation Meetings
    Advantages: amount of group info; quick tryout and revision
    Disadvantages: only easy changes made

3-B; 4-A Computer Journals and Networks
    Advantages: continuous evaluation; environmental variations; cost and time effective
    Disadvantages: equipment and software; computer experience levels of users; no evaluator present

4-B Rapid Prototyping
    Advantages: assess new strategies; assess new technologies
    Disadvantages: time and cost to develop; undisciplined design

Case Study: Using Formative Evaluation Throughout the Life Cycle of a Performance Improvement Package

Situation

The Detroit Medical Center (DMC) completed a systemwide rollout in 1993 that presented the philosophy and methods of continuous improvement based on the teachings of W. Edwards Deming. Every manager and employee attended vendor-designed classes describing this new approach. One class discussed how to create a positive environment that would encourage employee participation in improvement efforts. After analyzing participant questions during this class, the corporate training and development staff recognized that there were some general misunderstandings about the roles and responsibilities of the individual manager in creating a positive environment within his or her work group.

The organizational resources expended on the introduction of a continuous improvement philosophy were tremendous by any measure. It was imperative that any follow-up effort to this implementation be well-focused and targeted. DMC had to be certain the follow-up would add to management's knowledge and be helpful in leading them in a constructive direction. Success would hinge on a clean design guided by a thorough formative evaluation.

The corporate training and development staff selected an outside vendor to help them design and implement an intervention that would help to increase awareness of the need to create a positive environment and the ability to do so. The intervention also needed to be cost efficient and ready for immediate delivery as a performance support tool for the systemwide rollout.

Intervention

Working closely together, the corporate training and development staff and the vendor constructed a highly interactive in-house workshop for managers. The goals of the workshop were to more specifically describe how a manager could contribute to a positive environment and to encourage managers to develop and follow a plan called "My Blueprint for Action."

The workshop, "Strategies for Rewarding Performance," built a philosophic bridge between two competing points of view:

  1. Managers should create a positive environment using unconditional, noncontingent events such as picnics and pizza parties to foster camaraderie among staff (the Deming/Alfie Kohn approach).

  2. Managers should use contingent rewards and punishments to address workplace performance and nonperformance (the Skinner approach).

The workshop designers created performance objectives based on their understanding of the disconnects people had with the rollout training. An important addition was a process model for applying rewards and punishments.

The initial design was field-tested using a group of managers who had attended the earlier rollout training. The managers participated in the new Strategies workshop, followed immediately by a one-hour debriefing session. Comments were captured by the course designers and later linked to workshop content and presentation strategies.

Results

Although the structure and objectives remained intact, many important revisions were subsequently made to the workshop based on the input of this test group. The workshop was offered during the following two years to more than 150 experienced and new managers.

Participants reported that, after attending the workshop, they had greater clarity about what the organization expected of them in terms of creating a positive environment. Some participants later became involved in process improvement activities such as leading improvement teams and facilitating new-employee orientation sessions. A nucleus of "believers" was now better prepared to act within their own work group and carry the message to others in the organization.

An annual survey (The Management Excellence and Work Environment Survey) tracked whether or not the DMC was creating a positive environment. This survey showed that over a three-year period the DMC consistently maintained a positive relationship with its employees. Individual managers who scored "less than desirable" within their work group were offered a confidential coaching session with a member of the training and development staff.

Beginning in 1997, the DMC also held focus groups with employees whose managers were known to have particular challenges in maintaining positive, productive environments. These groups pinpointed specific issues of employee concern and provided a mechanism for getting issues addressed. At times, it was clear that other factors beyond an individual manager's performance were negatively impacting the work environment.

Lessons Learned

Formative evaluation is an important tool in the PT toolkit. The formative evaluation conducted during the preparation phase of the new workshop helped to focus DMC's efforts. However, such a large-scope organizational issue as work environment is a moving target that requires various intervention strategies. The workshop was recently dropped as a stand-alone offering; however, some of its important teachings have been incorporated into other courses.

It would also be worthwhile to attempt predicting the expected shelf life of an intervention at the outset so that the next generation of interventions could be planned. DMC successfully clarified the preexisting confusion about a manager's role in creating a positive environment. Now DMC must continue to maintain or improve the managers' performance as well as the organizational environment itself.

This case study was written by James Naughton, MA, Detroit Medical Center. Used with permission.

 
Job Aid 7-2: PLANNING A FORMATIVE EVALUATION OF A PERFORMANCE IMPROVEMENT PACKAGE

Directions: The columns are labeled with the first three phases of ISPI's HPT Model. (How to plan evaluation for the evaluation phase is discussed in the Meta Evaluation section.) The rows are labeled with the issues that you need to address when planning a successful formative evaluation. Start with the first phase (Analysis) and fill in each cell.

Columns (HPT Model phases):

  • Analysis of Performance, Gap, and Cause

  • Selection/Design of Interventions

  • Implementation and Change Management

Rows (planning issues; complete each cell for every phase):

  • What do we want to accomplish by evaluating this phase?

  • When do we evaluate this phase?

  • What resources do we need to evaluate this phase?

  • What basic/alternative methods will we use to evaluate this phase?

  • What data will we collect to evaluate this phase? How? Who will analyze it?

  • What type of reports do we need at the end of the evaluation? Who is our audience? What do they need to know?

  • What will it cost to evaluate this phase?

© ISPI 2000. Permission granted for unlimited duplication for noncommercial use.
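Teams that keep Job Aid 7-2 electronically can represent the grid as a simple nested structure. The phase and row labels below come from the job aid; the filled-in cell is a hypothetical example:

```python
# One dictionary per HPT phase, one (initially blank) cell per planning issue.
PHASES = [
    "Analysis of Performance, Gap, and Cause",
    "Selection/Design of Interventions",
    "Implementation and Change Management",
]
QUESTIONS = [
    "What do we want to accomplish by evaluating this phase?",
    "When do we evaluate this phase?",
    "What resources do we need to evaluate this phase?",
    "What basic/alternative methods will we use to evaluate this phase?",
    "What data will we collect? How? Who will analyze it?",
    "What reports do we need? Who is our audience? What do they need to know?",
    "What will it cost to evaluate this phase?",
]

plan = {phase: {question: "" for question in QUESTIONS} for phase in PHASES}

# Fill in one cell (hypothetical), then count the cells still left blank:
plan[PHASES[0]][QUESTIONS[0]] = "Confirm the gap analysis covers all performer groups."
unfilled = sum(1 for cells in plan.values() for v in cells.values() if not v)
print(unfilled)  # → 20
```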

 

[25] Thiagarajan, 1991, p. 22

[26] Thiagarajan, 1991, p. 22

[27] Tessmer, 1994, p. 16

[28] Geis and Smith, 1992, p. 134

[29] Thiagarajan, 1991, p. 24

[30] Moseley and Dessinger, 1998, p. 245

[31] Thiagarajan, 1991, p. 26

[32] Dick and King, 1994, p. 8

[33] Thiagarajan, 1991, p. 31

[34] Tessmer, 1994

[35] Tessmer, 1994, p. 5

[36] Tessmer, 1994; Thiagarajan, 1991

[37] Tessmer, 1994

[38] Thiagarajan, 1991, p. 30

[39] Tessmer, 1994, p. 6




Fundamentals of Performance Technology: A Guide to Improving People, Process, and Performance
ISBN: 1890289086
Year: 2004