Hack 28. Ask the Right Questions

If you are a classroom teacher, a job interviewer, or in any situation where you want to measure someone's understanding, you have a variety of ways to ask a question. Here are some tools from the science of measurement that allow you to ask the right question in the right way.

For more than a hundred years, classrooms have been an environment of questions and answers. Outside of school, tests are more and more common in the workplace and in hiring decisions. Even in my free time, I can't pick up a Cosmo without having to respond to a relationship quiz about whether I am "friendly" or "frosty" when it comes to meeting people at parties. (I'm frosty. Want to make something of it?)

Many professions have to ask good questions or write good tests:

  • Teachers ask students questions while lecturing or one-on-one in private conferences to assess student understanding.

  • Trainers write questions to evaluate the effectiveness of workshops.

  • Personnel officers develop standard questions to measure applicants' skills.

Anyone who ever has to assess how much someone else knows is faced with the dilemma of deciding what sort of question to ask to really get to the heart of the matter. This hack provides solutions to the two most common problems when writing tests or designing questions meant to measure knowledge or understanding:

  • How do I construct a good question?

  • What should I ask about?

Constructing a Good Question

For measuring knowledge quickly and efficiently, it is hard to beat the multiple-choice item as a question format.

Multiple-choice questions are a type of item that presents respondents with a question or instruction (called the stem), and then asks them to select the correct answer or response from a list of answer options. These types of items are sometimes referred to as selection items because people select the answer.

To give us the right terms to use as we talk about how to write a good multiple-choice item, a quick primer is in order.

Here is an example of a multiple-choice item, with each part labeled:

  Who wrote The Great Gatsby?   [stem]
  A. Faulkner                   [distractor]
  B. Fitzgerald                 [correct answer (the "keyed" answer)]
  C. Hemingway                  [distractor]
  D. Steinbeck                  [distractor]

As you see, each part of the question has a name. The correct answer is called the correct answer (how's that for scientific jargon?), and wrong answers are called distractors.

Some real-world research, though not much, has been done on the characteristics of multiple-choice items and how to write good ones. To write good multiple-choice items, follow these critical item-writing guidelines drawn from that research:

Include 3 to 5 answer options

Items should have enough answer options that pure guessing is difficult, but not so many that the distractors are not plausible or the item takes too long to complete.
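To see why the number of options matters, here is a minimal sketch of the arithmetic behind pure guessing. The 20-item test length is an assumption chosen just for illustration:

```python
def expected_guess_score(n_items, n_options):
    """Expected number of correct answers from blind guessing:
    each item is answered correctly with probability 1/n_options."""
    return n_items / n_options

# Compare expected chance scores on a hypothetical 20-item test.
for n_options in (2, 3, 4, 5):
    score = expected_guess_score(20, n_options)
    print(f"{n_options} options: {score:.1f} of 20 correct by chance")
```

Moving from two options to four halves the expected chance score; beyond five options, the gain shrinks while plausible distractors get harder to write.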

Do not include "All of the Above" as an answer option

Some people will guess this answer option frequently, as part of a test-taking strategy. Others will avoid it as part of a test-taking strategy. Either way, it does not operate fairly as a distractor. Additionally, to evaluate the possibility that "All of the Above" is correct requires analytical abilities that vary across respondents. Measuring this particular analytic ability is likely not the targeted goal of the test.

Do not include "None of the Above" as an answer option

This guideline exists for the same reasons as the previous guideline. Additionally, for some reason, teachers do tend to create items where "None of the Above" is most likely to be the correct answer, and some students know this.

Make all answer options plausible

If an answer option is clearly not correct because it does not seem related to the other answer options, it is from a content area not covered by the test, or the teacher is obviously including it for humorous reasons, it does not operate as a distractor. Students are not considering the distractor, so a four-answer-option question is really a three-answer-option question and guessing becomes easier.

Order answer options logically or randomly

Some teachers develop a tendency to write items where a certain answer option (e.g., B or C) is correct. Students might pick up on this with a given teacher. Additionally, some courses on doing well on standardized multiple-choice tests suggest this technique as part of a test-taking strategy. Teachers can control for any tendencies of their own by placing the answer options in an order based on some rule (e.g., shortest to longest, alphabetical, chronological).

Another solution to this ordering problem is for teachers to scroll through the first draft of the test on their word processors and attempt to randomize the order of answer options. Computerized randomization is the solution, of course, for commercial standardized test developers as well.
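A randomization step like this is easy to script. The sketch below (using the Gatsby item from earlier; the data structure is an assumption, not a standard format) shuffles the answer options while keeping track of which one is keyed:

```python
import random

# One item, with the keyed answer recorded by content rather than
# by position, so shuffling cannot lose track of it.
item = {
    "stem": "Who wrote The Great Gatsby?",
    "options": ["Faulkner", "Fitzgerald", "Hemingway", "Steinbeck"],
    "keyed": "Fitzgerald",
}

random.shuffle(item["options"])  # randomize option order in place

print(item["stem"])
for letter, option in zip("ABCD", item["options"]):
    marker = "  <- keyed" if option == item["keyed"] else ""
    print(f"{letter}. {option}{marker}")
```

Storing the keyed answer by content instead of by letter is the important design choice: the answer key can be regenerated after any reordering.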

Make the stem longer than answer options

An item is processed more quickly if the bulk of the reading is in the stem, followed by brief answer options.

Because longer stems followed by shorter answer options allow for easier processing by test takers, a good multiple-choice item should look like this:

[Figure: an item with a long, detailed stem followed by four brief answer options]
Do not use negative wording

Some students read more carefully or process words more accurately than others, and the word "not" can easily be missed. Even if the word is emphasized so no one can miss it, educational content tends not to be learned as a collection of non-facts or false statements, but is likely stored as a collection of positively worded truths.

Make answer options grammatically consistent with stem

For example, if the grammar used in the stem makes it clear that the right answer is a female or is plural, make sure that all answer options are female or plural.

Use complete sentences for stems

If a stem is a complete question ending with a question mark, or a complete instruction ending with a period, students can begin to identify the answer before examining the answer options. Students must work harder if the stem ends with a blank or a colon, or is simply an incomplete sentence. More processing increases the chances of errors.

Asking a Question at the Right Level

Identifying the right level of question to ask is the second major problem that must be overcome when creating tests. Some questions are easy; they only assess one's ability to recall information and indicate a fairly low level of knowledge. Other questions are more difficult and require a response that combines existing knowledge or applies it to a new problem or situation. Because different levels of questions measure different levels of understanding, the right question must be asked at the right level for anything useful to be gained from the enterprise.

A smart fellow and educational researcher, Benjamin Bloom, writing in the 1950s, suggested a way of thinking about questions and the level of understanding required to respond correctly. His classification system has become known as Bloom's Taxonomy, a classification system of educational objectives based on the level of understanding necessary for achievement or mastery. Bloom and colleagues have suggested six different cognitive stages in learning. They are, in order from lowest to highest:

1. Knowledge

Ability to recall words, facts, and concepts

2. Comprehension

Ability to understand and communicate about a topic

3. Application

Ability to use generalized knowledge to solve an unfamiliar problem

4. Analysis

Ability to break an idea into parts and understand their relationship

5. Synthesis

Ability to create a new pattern or idea out of existing knowledge

6. Evaluation

Ability to make informed judgments about the value of new ideas

Choosing the right cognitive level

Let's use teachers as an example of how to think about what level of questions you want. Teachers choose the appropriate cognitive level for classroom objectives, and a quality assessment is designed to measure how well those objectives have been met. Most items written by teachers, and those on prewritten tests packaged with textbooks and teaching kits, are at the knowledge level. Most researchers consider this unfortunate, because classroom objectives should be (and usually are) at higher cognitive levels than simply memorizing information.

When new material is being introduced, however (at any age, from preschool through advanced professional training), an assessment probably should include at least a check that basic new facts have been learned. When teachers decide to measure beyond the knowledge level, the appropriate level for items depends on the developmental level of students. The cognitive level of students, particularly their ability to think and understand abstractly, and their ability to solve problems using multiple steps, should determine the best level for classroom objectives, and, therefore, the best level for test items. Researchers believe that teachers should test over what they teach, in the same way that they teach it.

So, any time you find yourself wanting to assess the knowledge hidden inside someone's head, think about what level of understanding you want to assess. Is basic memorized knowledge enough? If so, then the knowledge level is the appropriate level for a question. Do you want to know whether your job applicant can use her knowledge to solve problems she has never experienced before? Ask a question at the application level, and she will have to demonstrate that ability.

Designing questions at different cognitive levels

Follow the guidelines in Table 3-5 for creating items or tasks at each level of Bloom's Taxonomy.

Table 3-5. Questions at different cognitive levels

Knowledge
  Characteristics: Requires only rote memory and such skills as recall, recognition, and repeating back.
  Example: Who wrote The Great Gatsby? (A. Faulkner, B. Fitzgerald, C. Hemingway, D. Steinbeck)

Comprehension
  Characteristics: Requires skills such as paraphrasing, summarizing, and explaining.
  Example: What is a prehensile tail?

Application
  Characteristics: Requires skills such as performing operations and solving problems; includes words such as use, compute, and produce.
  Example: If a farmer owns 40 acres of land and buys 16 acres more, how many acres of land does she own?

Analysis
  Characteristics: Requires skills such as outlining, listening, logic, and observation; uses words such as identify and break down.
  Example: Draw a map of your neighborhood and identify each home.

Synthesis
  Characteristics: Requires skills such as organization and design; includes words such as compare and contrast.
  Example: Based on your understanding of the characters, describe what might happen in a sequel to Flowers for Algernon.

Evaluation
  Characteristics: Requires skills such as criticism and forming opinions; includes words such as support and explain.
  Example: Which musical film performer was probably the best athlete? Defend your answer.

When to use Bloom's Taxonomy

There is an implied hierarchy to Bloom's categories, with knowledge representing the simplest level of cognition and evaluation representing the highest and most complex level. Anyone writing questions to assess knowledge can write items for any given level. Teachers can identify the level of chosen classroom objectives and create assessments to match those levels. With objectively scored item formats, it is fairly simple to tap lower levels of Bloom's Taxonomy and more difficult, but not impossible, to measure at higher levels.

You should not worry too much about the fine distinctions between the six levels as defined by Bloom. For example, comprehension and application are commonly treated as synonymous, as it is the ability to apply what is learned that indicates comprehension. Most testing theorists and classroom teachers today pay the most attention to the distinction between the knowledge level and all the rest of the levels. Most teachers, except at introductory stages of brand new areas, prefer to teach and measure to objectives that are above the knowledge level.

See Also

  • Here's something a little more scholarly that I wrote with some colleagues: Frey, B.B., Petersen, S.E., Edwards, L.M., Pedrotti, J.T., and Peyton, V. (2005). "Item-writing rules: Collective wisdom." Teaching and Teacher Education, 21, 357-364.

  • For a good review of item-writing rules, check out Haladyna, T.M., Downing, S.M., and Rodriguez, M.C. (2002). "A review of multiple-choice item-writing guidelines for classroom assessment." Applied Measurement in Education, 15(3), 309-334.

  • The influential ideas in Bloom's Taxonomy were introduced in Bloom, B.S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook 1. Cognitive domain. New York: McKay.

  • Bloom, B.S., Hastings, J.T., and Madaus, G.F. (1971). Handbook on formative and summative evaluation of student learning. New York: McGraw-Hill.

  • Phye, G.D. (1997). Handbook of classroom assessment: Learning, adjustment, and achievement. San Diego, CA: Academic Press.
