Map Work-Flow Processes
With the work processes and roles defined, we could build the curriculum design and development value stream maps, which chart in detail the method our team will follow to create or purchase content (see Figure 4-6). The work-flow process should include roles and responsibilities, action items, time to complete each task, and metrics by which the success of each step will be measured. It is an overarching map of the entire course-development process, from the kick-off meeting to delivery of the product.
Figure 4-6: Curriculum Identification Work-Flow Process.
Focus on Manufacturing and Engineering
We also decided at this point that while our initiative addressed the needs of the entire organization, our team had to focus particularly on providing viable courses and learning opportunities for two specific groups. Early on, the learning and development team defined success as a function of priorities. Because 80 percent of our population consisted of manufacturing and engineering, we consciously decided to direct the bulk of our efforts to these two populations. This didn't mean that the other 20 percent wouldn't get trained; they certainly did. It meant that whenever there were conflicting needs, engineering and manufacturing always received priority. All of our initial training efforts would be directed at their training needs to guarantee that the efforts our team made were tied to the most critical functions of the organization.
Create Training Resource Rooms
Even with the appropriate technology in place, there was a cultural issue that threatened to stand in the way of a successful transformation. Through our research, we were aware of an overriding cynicism on the part of managers toward non-job-specific tasks. Managers expected employees to be working on billable projects when they were at their desks. Because for more than fifty years all training at Rockwell Collins had been done in a classroom, any attempt to implement learning at the desktop was sure to run into this cultural barrier in a big way. Employees who were online, even for training, would be thought of as slacking off. This told us that many employees would never be given the time or support to take e-learning at their desktops.
To combat this attitude, we budgeted to build nineteen training resource rooms in key Rockwell Collins locations. They would act as safe havens for employees to get training close to their work sites but far enough away from managers to be free from scrutiny.
Several training rooms already existed, but no one ever used them. They had computers but no training software, few sound cards, and no courses for employees to take. Our team would expand the technology, update the software, and add a library of off-the-shelf self-paced training courses that would be housed on the company intranet for use by employees.
Test Training in a Controlled Lab
Before rolling out any hardware or software, our team tested it in a controlled testing facility. We planned to build a computer room with several computers that would act as a model for the Rockwell Collins environment. It would replicate our technology, browsers, and computer configurations so that our team could troubleshoot any course or software before end users had access to it.
We also planned to test our training in a controlled lab to verify its effectiveness, free from the confounding effects of outside influences. While it's challenging to isolate training's impact, clear, unpolluted success statistics are a powerful tool for winning the hearts and minds of leadership. We've found that approaching measurement from the front end makes it possible to isolate a single successful program as an example of your overall efforts. One set of measurements can be all you need.
Early in Butler's career, he discovered the power of a single batch of unadulterated training statistics. While developing training for the sales team at a mortgage loan company that was struggling with the efficacy of its training program, he suggested conducting a year-long comparison study. The company divided its 400 salespeople in half, making sure each group included all levels of performers. Half of the team went through the old training program and half went through Butler's new program. The company then tracked each group's productivity. Butler found substantial increases in the second team's performance, increases that continued over the five years the teams were measured. He also found that the middle performers showed the most significant improvements after completing the training. Based on those statistics, senior management agreed to overhaul the entire training program for the company.
At most companies, the value of training has never been substantiated to this degree because measurement is approached after the fact, when outside influences make it impossible to verify training's impact. Approaching measurement in the beginning, however, makes it possible to compile the level of compelling results that Butler found at the mortgage company.
Using that model, we planned measurement from the front end for a single set of courses, creating a scenario that directly compared the old way with the new and produced hard results.
We planned to produce a new computer-based version of a classroom course, put test groups through both formats, then test and compare each group's ability to perform the related tasks.
The question was which courses to measure. Initially it seemed prudent to measure the high-end custom courses: they require the most investment, so they should produce the most value. However, proof that your high-end custom courses work shows only that your best material is valid. Instead, we measured our lowest common denominators, the courses that follow the loosest standards and take the least amount of money and time to produce. If you prove that your cheapest, least complex courses have value, it becomes reasonable to assume that every course you invest in that follows your established set of guidelines and development processes will be even more effective.