Summative Evaluation

Summative evaluation is the next logical step in the evaluation process and helps the organization determine whether or not to put the "soup" on its performance improvement "menu." "When the cook tastes the soup, that's formative; when the guests taste the soup, that's summative." [40] During formative evaluation, the "cooks" who prepare the performance intervention package (analysts, designers, developers) both conduct and benefit from the evaluation. On the other hand, an external evaluator frequently stirs the pot during summative evaluation, and the "guests" are organizational decisionmakers and stakeholders.

Summative evaluation is the most objective way to document the strengths and weaknesses of a performance intervention package. It also "provides a public statement for use with clients, other possible consumers, funding agencies, government groups and others." [41] Yet summative evaluation is frequently omitted because it requires time, money, skilled resources, and the commitment and support of senior management.

Definition and Scope

A functional definition states that summative evaluation "involves gathering information on adequacy and using this information to make decisions about utilization." [42] Basically, summative evaluation looks at the results of a performance intervention package and gathers information that will be useful to the senior decisionmakers in the organization.

Purpose

Summative evaluation seeks to answer two major questions:

  1. Did the performance intervention package solve, eliminate, or reduce the original performance problem or gap?

  2. Does the performance improvement package meet the needs of the organization? [43]

During the summative evaluation phase, the PT practitioner or the evaluator gathers information on the following:

  • Reactions: What is the reaction of the performers? Their peers? Their managers? The customers? The suppliers? The decisionmakers?

  • Learning and capability: What was the level of learning and/or capability before the intervention? After the intervention?

  • Accomplishments: Are the performers exhibiting a higher level of performance in their jobs?

  • Results: What is the impact on the performance gap? On the bottom line? [44]

Conducting a Summative Evaluation

Summative evaluation is usually conducted during implementation and change management. In fact, Smith and Brandenburg refer to summative evaluation as "rear end analysis," as opposed to front-end analysis (performance analysis). [45]

Collecting and analyzing data is more formalized during summative evaluation than it is during formative evaluation. Summative evaluation may use some of the same tools as formative evaluation: interviews, observation, group processes, and surveys. However, unlike formative evaluation, summative evaluation relies largely on testing and measurement strategies and statistical analysis to evaluate the results of a performance intervention package and to provide an objective basis for decisionmaking. Frequently an external, expert evaluator conducts the summative evaluation.

There are eight basic steps to follow when planning a summative evaluation. [46] The steps are:

  1. Identify the decisionmaker and the stakeholders and conduct interviews to specify what decision needs to be made.

  2. Translate the decision into research (evaluation) questions and ask the decisionmaker to review the questions.

  3. Outline the design of the evaluation using three categories: strategies, standards, and participants or population.

  4. Conduct a reality check; analyze constraints, resources, and opportunities to determine what is practical and possible given the existing situation.

  5. Specify instruments, procedures, and sampling strategies that will collect the required data.

  6. Conduct another reality check; outline the data analysis plan and make sure the data collection instruments really provide the answer to the evaluation questions and that the data are easy to tabulate.

  7. Specify administration requirements for staffing, scheduling, budgeting, and reporting.

  8. Document and communicate the evaluation process and the results by preparing and distributing a design document or blueprint, status or interim reports, and a final report.

The job aid at the end of this section provides a guideline for completing these eight steps. For additional job aids and guidelines read Herman, Morris, and Fitz-Gibbons. [47]

start sidebar
Case Study: Summative Evaluation Plan for a Performance Improvement Program

Situation

A major automotive company launched a performance improvement package for specialty vehicle dealership sales staff across the U.S. The package included a two-day road show that traveled to regional locations across the country. The road show team was composed of a product information facilitator, a sales skills facilitator, two business theater facilitators (BTFs) to conduct role plays, the training designer, and a technician. The technician was there to set up, run, and troubleshoot an electronic presentation system (EPS) and an audience response system (ARS). The EPS made it possible to project video, slides, and MS PowerPoint presentations.

The following discussion contains an overview of the evaluation plan for the road show. The complete plan was sent to the decisionmaker and the stakeholders at division headquarters.

Evaluation Plan

The goal of the road show was to provide dealership salespeople and key personnel with the techniques and tools they needed to make the division's newly articulated vision a reality. The purpose of the evaluation plan was to help the division measure how well it met that goal.

The plan followed the four traditional levels for evaluating the results of training. [48] The following excerpt from the dealership evaluation plan lists the focus or goal of each level of evaluation (evaluation questions) and explains when (timing) and how (strategy) the team planned to evaluate participant reactions and accomplishments and the impact of the program on the dealerships and the division.

The evaluator provided the decisionmaker and stakeholders with weekly evaluation reports while the road show was touring the U.S. The reports included the ARS results of the Level 1 and 2 evaluations and copies of the open-ended Level 1 evaluations filled out by the participants.

The evaluation team also provided the decisionmaker and stakeholders with a biweekly report of Level 3 evaluation activities. A biweekly Level 3 report was also sent to the regional managers with the names and dealerships of participants from their region who had activated their action plans and received certificates of completion.

Evaluation Report

The ARS survey measured participant reaction to the following components of the training program: facilitators, training aids, learning aids, and participant materials. The participants also indicated to what degree they felt they would be able to apply what they had learned when they returned to their dealerships.

All the questions on the ARS survey were answered on a scale of 1-6 as follows:

  1 = very strongly disagree

  2 = strongly disagree

  3 = disagree

  4 = agree

  5 = strongly agree

  6 = very strongly agree

The average response to the training components was 5.0. A review of the Level 1 evaluation data from 230 participants at 11 sites indicated that 98 percent, or approximately 225 of the 230 participants, agreed, strongly agreed, or very strongly agreed that:

  • The visual components helped them learn and increased their interest level.

  • The participant materials helped them to learn.

  • The examples and samples helped them to learn.

  • The product and sales facilitators were well-prepared, organized, easy to understand, knowledgeable, and effective at interacting with the participants.

  • The BTFs were prepared, easy to understand, and effective at interacting with the participants.

  • The participants will be able to apply what they have learned when they return to their dealerships.
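
The 5.0 average and the 98 percent agreement rate above are simple tabulations of the raw ARS output. The sketch below is a minimal illustration of that arithmetic in Python, assuming the responses are available as lists of 1-6 ratings per survey item; the item names and ratings shown are hypothetical placeholders, not actual road show data.

start example

# Minimal sketch: tabulating ARS Likert responses on the 1-6 scale above.
# Item names and ratings are hypothetical placeholders, not road show data.

def summarize_item(ratings):
    """Return the average rating and the percent who chose 4 (agree) or higher."""
    average = sum(ratings) / len(ratings)
    percent_agree = 100 * sum(1 for r in ratings if r >= 4) / len(ratings)
    return average, percent_agree

responses = {
    "Visual components helped me learn": [5, 6, 4, 5, 6, 5],
    "Participant materials helped me learn": [4, 5, 5, 6, 5, 4],
}

for item, ratings in responses.items():
    avg, agree = summarize_item(ratings)
    print(f"{item}: average {avg:.1f}, {agree:.0f}% agree or higher")

end example

In the actual program the ARS software produced these summaries automatically; the sketch only shows the arithmetic behind the reported figures.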

Feedback from the approximately 100 participants who have completed their action plans and received their plaques also helps confirm that the training resources in Phase I were effective and that the training transferred to the job.

Exact data on the knowledge or skill accomplishments of the participants are not available at this time; however, there was a positive increase in the knowledge and skill levels.

Level 3 and Level 4 evaluations are incomplete, but the program did continue through several phases and later evolved into a corporate university format.

Lessons Learned

  1. The ARS surveys and quizzes were a highlight of the program and a good incentive to participate in the evaluation components of the workshop. The participants enjoyed using their "phasars" (keypads) and rated the ARS system highly on their Level 1 evaluation. The ARS system also made it easy to gather individual and group Level 1 and 2 evaluation data and to report the results quickly, clearly, and accurately. In addition, the system made it possible to give immediate feedback to the participants by projecting histograms of the reactions and knowledge test responses made by the entire group.

  2. Prizes (special road show mugs, pen holders, caps, etc.) provided incentive for learning and achieving. In addition to offering prizes to teams and individuals who excelled during the training sessions, each participant who successfully implemented a personal action plan received a Selling the Vision certificate.

  3. Using professional actors (BTFs) for the role playing put the participants at ease and encouraged them to take a more active role in the performance testing activities. For example, when Dan Seller (BTF) was not listening to the customer during the listening skills module role play, the participants eagerly coached him and even offered to demonstrate active listening skills.

  4. Sometimes good news is bad news. Program stakeholders discontinued the use of Level 1 evaluation because of the consistently positive feedback. In addition, the strongly positive reaction to the initial program caused stakeholders to withdraw support from the implementation of a full Level 1-4 evaluation plan and to shift the organization's resources into design, development, and implementation of new training programs that would build on the skills developed during the road show.

This case study was written by Joan Conway Dessinger, Ed.D., The Lake Group. Used with permission.

end sidebar
 

Dealership Evaluation Plan Overview

Level 1: Reaction

  • Evaluation Question

  • Did the participants like the training?

  • Timing and Strategy

  • At the end of each training day the facilitator will use the Audience Response System (ARS) to gather participant reaction to the workshop content, presentation, instructional aids, and their self-reported ability to use the new learnings or skills. The facilitator will also collect written comments from the participants. The ARS system will collect, save, and summarize the reactions.

  • Because the Road Show is a work in progress, the designer will use the information to revise the program as needed.

Level 2: Learning

  • Evaluation Question

  • Did the participants learn the information/acquire the skill presented in the training?

  • Timing and Strategy

  • At the beginning of Day One the facilitator will use the ARS system to pretest the participants' product knowledge.

  • At the end of each module, the facilitator will use performance tests with planned observation (tied to role-play scenarios) and include application of product knowledge when appropriate.

  • At the end of Day Two the facilitator will use the ARS system to post-test the participants' product knowledge.
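
As a rough illustration of how the Day One pretest and Day Two post-test scores might be compared, the sketch below computes each participant's gain and the group average; the participant IDs and scores are hypothetical, and in the actual program the ARS software reported its own summaries.

start example

# Minimal sketch: comparing ARS pretest and post-test product knowledge scores.
# Participant IDs and scores are hypothetical placeholders.

pretest = {"P01": 55, "P02": 60, "P03": 48}    # percent correct, Day One pretest
posttest = {"P01": 82, "P02": 88, "P03": 75}   # percent correct, Day Two post-test

gains = {pid: posttest[pid] - pretest[pid] for pid in pretest}
average_gain = sum(gains.values()) / len(gains)

for pid, gain in gains.items():
    print(f"{pid}: {gain:+d} points")
print(f"Average knowledge gain: {average_gain:.1f} points")

end example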

Level 3: Behavior

  • Evaluation Question

  • Can the participants apply what they have learned to create business as "unusual"?

  • Timing and Strategy

  • At the end of each module, participants will complete an action plan describing how they will use the new knowledge or skill.

  • The evaluator will follow up in the dealerships within 30 to 60 days by interviewing participants, sales managers, coworkers, customers, and dealer principals.

Level 4: Impact

  • Evaluation Question

  • Did the road show have a positive impact on the dealerships and division?

  • Timing and Strategy

  • Six months after the training the evaluator will interview participants, sales managers, coworkers, customers, dealer principals, and division stakeholders.

  • The evaluator will also perform a before-and-after comparison of sales figures and customer service documentation.
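
The before-and-after comparison of sales figures could be summarized along the lines of the sketch below; the dealership names and unit counts are invented for illustration, and a real Level 4 analysis would also account for seasonal and market factors.

start example

# Minimal sketch: before-and-after comparison of specialty vehicle sales by dealership.
# Dealership names and unit counts are hypothetical placeholders.

before = {"Dealer A": 30, "Dealer B": 22, "Dealer C": 41}   # units sold, six months before
after = {"Dealer A": 37, "Dealer B": 29, "Dealer C": 44}    # units sold, six months after

for dealer in before:
    change = after[dealer] - before[dealer]
    pct = 100 * change / before[dealer]
    print(f"{dealer}: {before[dealer]} -> {after[dealer]} units ({pct:+.1f}%)")

total_pct = 100 * (sum(after.values()) - sum(before.values())) / sum(before.values())
print(f"Overall change in units sold: {total_pct:+.1f}%")

end example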

Job Aid 7-3: GUIDELINES FOR PLANNING A SUMMATIVE EVALUATION
start example

Steps To Take

Guidelines

  1. Specify the decision

Interview decisionmaker and stakeholders to determine:

  1. Who is the real decisionmaker?

  2. Who are the real stakeholders?

  3. What is the real decision that needs to be made?

  4. What data does the decisionmaker require to make the decision?

  5. What criteria will the decisionmaker use to make the decision?

  6. Are there any constraints on the evaluation (see Step 4)?

  2. Translate the decision into evaluation questions

Each evaluation question should contain the following:

  1. What is being measured?

  2. Who is being measured?

  3. How (standard or criteria) will the data be measured?

Example: Do managers (who) change (standard) their approach to self-development as a result of a new process (what)? (Data collection will provide the level of self-development before implementation of a new process.)

  3. Outline the evaluation design

The outline should include the following information:

  1. What strategies will be used to evaluate changes in performance after the performance intervention is implemented?

  2. What standards will be used to determine the value of the change in performance?

  3. Who will participate in the evaluation?

  4. Analyze constraints, resources, and opportunities

Determine the resources required to conduct the evaluation, taking into account the five major constraints:

  1. Time

  2. Staff

  3. Access to data sources

  4. Political considerations

  5. Budget

Try to turn constraints into opportunities... Example: If the old program continues to run while the new program is being implemented, take advantage of the chance to compare the outcomes.

  5. Specify instruments, procedures, and sampling

Given the evaluation questions, participants, and organizational climate...

  1. What data collection procedures are most effective and efficient?

    • Analysis of existing records

    • Interviews

    • Surveys or questionnaires

    • Observation

    • Group processes

    • Tests

    • Other...

  2. Which instrument(s) will generate the required information?

    • Standardized or new tests

    • Standardized or new surveys or questionnaires

    • Observation checklists

    • Interview checklists

    • Group process questions

    • Other...

  3. What sampling decisions need to be made?

    • Selection: random, stratified random (random by group), or whole group

    • Sample size (the bigger the impact, the smaller the sample size required to demonstrate the impact)

  6. Outline data analysis

Check the logic of the data collection procedures by asking the following questions:

  1. What statistics will be compiled?

  2. How will the statistics be used?

  3. How will the data be summarized?

  7. Specify administrative requirements

Administrative requirements include the following:

  1. Who will manage the evaluation?

  2. How will the schedule be developed? Approved? Maintained?

  3. How will the budget be developed? Approved? Maintained?

  4. How will data collection be monitored?

  5. How will communication (status and final reports) be handled?

  8. Document and communicate the process and the results

The following documents help record and communicate the progress and results of the evaluation:

  1. Evaluation Plan or Design Document (blueprint for Steps 1-7)

  2. Status Reports (if required by stakeholders or decision maker)

  3. Final Report (contains overview of evaluation process, text and graphic report of results, conclusions, and recommendations for action)

Distribute the reports to the decision maker, stakeholders, and participants as determined in the Evaluation Plan.

This job aid is based on the work of Smith and Brandenburg. [49]

© ISPI 2000. Permission granted for unlimited duplication for noncommercial use.

end example
 

[40] Seels and Richey, 1994, p. 58

[41] Geis, 1986, p. 11

[42] Seels and Richey, 1994, p. 57

[43] Smith and Brandenburg, 1991, p. 35

[44] Rosenberg, 1996b, p. 9

[45] Smith and Brandenburg, 1991, p. 35

[46] Smith and Brandenburg, 1991

[47] Herman, Morris, and Fitz-Gibbons, 1987

[48] Kirkpatrick, 1994

[49] Smith and Brandenburg, 1991, pp. 36-42



