Multiple-choice exams are often the least desirable of exam formats because it can be genuinely difficult to uncover exactly what students understand and what they still struggle with. At the same time, those of us teaching large-enrollment science classes often feel forced to use easy-to-grade multiple-choice exams simply because of the sheer size of our courses. Even a single short-answer essay question that takes only 3 minutes to grade translates into 20 hours of grading when you have four hundred students. For better or worse, easy-to-grade multiple-choice exams are a stable part of most of our lives. And if you do a lot of classroom polling and voting (perhaps using clickers), having a library of easy-to-use multiple-choice items for class is critical. What, if anything, can a busy professor do to increase the cognitive level of multiple-choice items?
One strategy is to reformat visual graphics to have (a), (b), (c), (d), and (e) answers. Imagine saving a copy of a labeled diagram from your textbook into a PowerPoint slide (or whatever presentation software you prefer that has drawing capabilities) and pasting white text boxes over the labels. If you replace the labels with lettered text boxes and provide a list of terms, the diagram becomes a multiple-choice item.
[Star chart diagram with its labels hidden behind lettered text boxes]

For the diagram above, the appropriate labels are:

- (a) Castor, (b) Pollux, (c) Capella

A second diagram with four hidden labels works the same way:

- (a) Betelgeuse, (b) Bellatrix, (c) Sirius, (d) Procyon
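If you would rather script the overlay than click through slides, the same white-box trick can be done programmatically. Below is a minimal Python sketch using matplotlib; the file name star_chart.png and the pixel coordinates are hypothetical placeholders you would read off your own diagram.

```python
# Minimal sketch: paste lettered white boxes over the labels of an existing
# diagram image. "star_chart.png" and the pixel coordinates are hypothetical;
# substitute your own diagram and label positions.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread("star_chart.png")  # diagram exported from your textbook

fig, ax = plt.subplots()
ax.imshow(img)
ax.axis("off")

# Approximate (x, y) pixel positions of the labels to hide
label_positions = [(120, 80), (260, 150), (400, 95)]

for letter, (x, y) in zip("abc", label_positions):
    # A white rounded box with a letter covers each original label
    ax.text(x, y, f"({letter})", ha="center", va="center", fontsize=12,
            bbox=dict(boxstyle="round", facecolor="white", edgecolor="black"))

plt.savefig("star_chart_item.png", dpi=200, bbox_inches="tight")
```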
Another strategy is to take advantage of the higher cognitive levels required by a sorting task, such as completing a traditional Venn diagram. But instead of asking students to write words into an overlapping-circle diagram, one can ask students to classify concepts as belonging to category A, category B, or both.
Select (a) for category A: Weather; (b) for category B: Climate; or (c) if it is part of both.

| Item | Answer |
| --- | --- |
| 1. Air Temperature on Tuesday | (a) (b) (c) |
| 2. Humid Subtropical | (a) (b) (c) |
| 3. Partly Cloudy | (a) (b) (c) |
| 4. Arid Desert | (a) (b) (c) |
| 5. Windy, chance of rain | (a) (b) (c) |
| 6. Polar Arctic | (a) (b) (c) |
| 7. 72° | (a) (b) (c) |
| 8. 1013 mb (29.9 inHg) and falling | (a) (b) (c) |
| 9. Foggy, with low visibility | (a) (b) (c) |
| 10. Predominantly westerly winds | (a) (b) (c) |
| 11. Icy | (a) (b) (c) |
| 12. Glacier melting causes sea levels to rise | (a) (b) (c) |
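For instructors who keep item banks in plain text, a classification item like the one above can be stamped out from simple (term, category) pairs. The following is a minimal Python sketch under that assumption; the answer labels in the data are illustrative, so check them against your own key.

```python
# Minimal sketch: generate a two-category classification item plus an answer
# key from (term, category) pairs. The category labels here are illustrative;
# verify them against your own answer key before use.
items = [
    ("Air Temperature on Tuesday", "a"),    # weather
    ("Humid Subtropical", "b"),             # climate
    ("Partly Cloudy", "a"),                 # weather
    ("Arid Desert", "b"),                   # climate
    ("Predominantly westerly winds", "c"),  # arguably both
]

print("Select (a) for Weather, (b) for Climate, or (c) if it is part of both.\n")
for number, (term, _) in enumerate(items, start=1):
    print(f"{number}. {term:32} (a) (b) (c)")

# Compact answer key for the grader
key = ", ".join(f"{n}:{answer}" for n, (_, answer) in enumerate(items, start=1))
print("\nAnswer key:", key)
```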
Among many others out there, a third strategy is to use highly structured mini-debates. A mini-debate is presented as two possible answers to a question, and students must select the answer they most agree with. This demands a high level of cognitive engagement because students are asked to commit to a judgment between two clearly stated, hypothetical positions. We say highly structured here because largely unconstrained tasks, such as splitting the class into two halves to debate whether Pluto is a planet or whether Andromeda lies within or beyond our galaxy, rarely work as well as initially envisioned. In contrast, a mini-debate provides students with the precise language they can use for discussion and, fortunately, takes far less class time.
Experience suggests that modifying existing multiple-choice questions from old test banks often provides excellent starter material for rapidly developing new student mini-debates. The strategy is to first pull out the correct response and the most common incorrect response from an existing multiple-choice test item, then reword these two choices into a student debate using more casual, natural student language.
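As a sketch of that workflow, suppose each test-bank item is stored with its correct answer and its most popular distractor flagged. A short Python function can then drop the two choices into a two-voice debate template; the item below is a hypothetical example, and the final casual rewording is still yours to do.

```python
# Minimal sketch: convert an existing multiple-choice item into a mini-debate
# by pairing the correct response with the most common incorrect response.
# In practice you would still reword each position into casual student
# language, and randomize which student holds the correct view.
def to_mini_debate(stem, choices, correct, most_common_wrong):
    """Format an MC item as a two-position mini-debate."""
    return (
        f"Consider the question: {stem}\n"
        f"  Student 1 says: {choices[most_common_wrong]}\n"
        f"  Student 2 says: {choices[correct]}\n"
        "With which student do you most agree?  (a) Student 1  (b) Student 2"
    )

# Hypothetical test-bank item (the classic seasons misconception)
item = {
    "stem": "Why is it warmer in summer than in winter?",
    "choices": [
        "Earth is closer to the Sun during summer.",
        "Sunlight strikes the ground more directly during summer.",
    ],
    "correct": 1,
    "most_common_wrong": 0,
}

print(to_mini_debate(**item))
```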
Finally, one of the biggest challenges in creating multiple-choice tests is that they demand so much reading on the part of the student. This is particularly problematic for students whose native language is not English. Sometimes we inadvertently create an exam that more accurately tests a student's reading ability than their actual scientific understanding. To get around this, one strategy is to use essentially the same question over and over, changing only one aspect each time and clearly marking which aspect changes from question to question.
Simply increasing the number of questions can overtax students if the questions are too dissimilar. Professors who use numerous voting questions typically change only one OBVIOUS part of each question, giving students more practice without requiring much student-to-student discussion.
Examples of a Rapid Voting Question Sequence

| At sunset, a FIRST-quarter moon is visible in the... | At sunset, a FULL moon is visible in the... | At sunset, a THIRD-quarter moon is visible in the... |
| --- | --- | --- |
| 1(A): West | 1(A): West | 1(A): West |
| 2(B): South | 2(B): South | 2(B): South |
| 3(C): East | 3(C): East | 3(C): East |
| 4(D): not visible | 4(D): not visible | 4(D): not visible |
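A sequence like the one above is easy to stamp out automatically once the question stem is fixed and only one term varies. A minimal Python sketch:

```python
# Minimal sketch: generate a rapid voting sequence by changing only one
# OBVIOUS part (the moon phase) of an otherwise fixed question stem.
phases = ["FIRST-quarter", "FULL", "THIRD-quarter"]
choices = ["1(A): West", "2(B): South", "3(C): East", "4(D): not visible"]

for phase in phases:
    print(f"At sunset, a {phase} moon is visible in the ...")
    for choice in choices:
        print("   " + choice)
    print()
```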
A similar approach is to use calculator-free mathematical reasoning tasks. Questions like these ask students to quickly judge or rank magnitudes, emphasizing the quantitative nature of astronomy.
Examples of Rapid Quantitative-Reasoning Questions

| Which is largest? | Which is farthest? | Which planet is farthest? |
| --- | --- | --- |
| 1(A): The Sun | 1(A): The Sun | 1(A): The Sun |
| 2(B): The Moon | 2(B): The Moon | 2(B): The Moon |
| 3(C): Jupiter | 3(C): Jupiter | 3(C): Jupiter |
| 4(D): Pluto | 4(D): Pluto | 4(D): Pluto |
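If the objects and their approximate sizes and distances live in a small table, the answer keys for ranking questions like these can be generated rather than hand-checked. A minimal Python sketch, using round-number values (radii in km; typical distances from Earth in km):

```python
# Minimal sketch: build "which is largest / farthest?" items with answer keys
# from approximate values (radii in km; typical Earth distances in km,
# round figures that are close enough for ranking).
objects = {
    "The Sun":  {"radius": 696_000, "distance": 150_000_000},
    "The Moon": {"radius": 1_737,   "distance": 384_000},
    "Jupiter":  {"radius": 69_900,  "distance": 780_000_000},
    "Pluto":    {"radius": 1_188,   "distance": 5_900_000_000},
}

def ask(question, attribute):
    names = list(objects)
    answer = max(names, key=lambda name: objects[name][attribute])
    print(question)
    for i, name in enumerate(names):
        print(f"   {i + 1}({'ABCD'[i]}): {name}")
    print(f"   [key: {answer}]\n")

ask("Which is largest?", "radius")
ask("Which is farthest?", "distance")
```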
The think-pair-share and classroom polling/voting teaching-strategy movement has helped professors create innovative ways for students to efficiently “vote” during class. It makes practical sense to use these same strategies to pose more intellectually challenging questions on our multiple-choice exams. Moreover, if our in-class learning tasks take much the same format as our exams, then we have a better chance of measuring student learning gains fairly and accurately.
Tim Slater, University of Wyoming, Tim@CAPERteam.com