Blooming Assessments to Evaluate Course Reform Effectiveness

Title of Abstract: Blooming Assessments to Evaluate Course Reform Effectiveness

Name of Author: Janet Casagrand
Author Company or Institution: University of Colorado, Boulder
PULSE Fellow: No
Applicable Courses: All Biological Sciences Courses
Course Levels: Across the Curriculum
Approaches: Assessment, Material Development
Keywords: Bloom's taxonomy, assessment, course reform, student learning, undergraduate

Name, Title, and Institution of Author(s): Katharine Semsar, University of Colorado, Boulder

Goals and intended outcomes of the project or effort, in the context of the Vision and Change report and recommendations: I (JC) have been teaching a large (~100 student), upper division neurophysiology course since 2003. My goal is for students to gain a higher-level understanding of the material, not simply memorize facts. When I began teaching the course, I could not challenge students’ understanding much on exams, despite spending considerable lecture time on some concepts. I began introducing evidence-based course reforms (e.g., in-class clicker questions, concept-based homework, and a homework help room) with the goal of improving learning and understanding without altering the course content. My impression was that these changes did improve learning, but I wanted to quantify whether this was indeed true and to justify the additional time and effort. As is typical for most course reform, changes took place incrementally over several years, and no measures of student learning other than course exams were in place before changes began. Furthermore, the course exams changed significantly. Thus, a difficulty I encountered was how to assess learning in the absence of a concept inventory or matched exams.

Describe the methods and strategies that you are using: Bloom’s taxonomy for the cognitive domain is a widely accepted tool for delineating types of thinking into six levels of understanding, with the first two levels generally considered to represent lower-order understanding and the remaining four representing higher-order levels that involve critical thinking. Bloom’s taxonomy offers a means for informing course design and the development of assessments at appropriate cognitive levels; categorizing the cognitive processing levels targeted by learning objectives, instructional methods, or assessments; assessing curricular alignment; and demonstrating question equivalence. We sought to use it to indirectly measure the effect of course reform on student learning by determining the cognitive level at which students can perform.
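The abstract does not name the six levels; as a minimal sketch, assuming the original (1956) taxonomy's standard level names, the lower-order/higher-order split described above can be expressed as:

```python
# Six cognitive levels of the original Bloom's taxonomy, in ascending order.
# (Level names are the standard 1956 ones, not taken from this abstract.)
BLOOM_LEVELS = ["Knowledge", "Comprehension", "Application",
                "Analysis", "Synthesis", "Evaluation"]

def is_higher_order(level: str) -> bool:
    """Levels 3-6 are generally treated as higher-order (critical) thinking;
    levels 1-2 as lower-order understanding."""
    return BLOOM_LEVELS.index(level) >= 2
```

For example, `is_higher_order("Application")` is true, while `is_higher_order("Comprehension")` is false.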

Describe the evaluation methods that you used (or intended to use) to determine whether the project or effort achieved the desired goals and outcomes: While training raters with Bloom’s taxonomy, we developed and tested a new tool, the Bloom’s Dichotomous Key (BDK), to measure the cognitive level of exam questions and other course materials. The key consists of a series of questions about what students need to do to answer an exam question. We also investigated sources of inter- and intra-rater variation in the evaluation of cognitive levels of course materials, and designed the BDK to reduce those variations by having raters answer a series of questions using consistent and salient criteria. This improves categorization consistency among multiple raters, and the BDK is easier and faster to use than more traditional rubrics; it may therefore be of special interest as a training tool for those who are new to Bloom’s taxonomy. We then demonstrated that the BDK is useful in measuring changes in student learning over a four-year evidence-based course reform effort. In particular, we were able to show that students in the neurophysiology course were performing equally well on more challenging exams that focused on higher-level cognitive skills and aligned well with other course materials. Post-reform exams had significantly more questions at the higher Bloom’s levels (p<0.001, df=5, n=83, 84).
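The reported statistics (df=5 with two samples of 83 and 84 questions across six Bloom's levels) are consistent with a chi-square test on a 2×6 contingency table of question counts. The sketch below shows that kind of test; the counts are hypothetical placeholders, not the study's data:

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a
    2-D contingency table (rows = groups, columns = categories)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Hypothetical question counts per Bloom's level (1..6); the real
# pre-/post-reform distributions are in the forthcoming publication.
pre  = [30, 25, 15,  8,  3,  2]   # n = 83 questions
post = [10, 14, 20, 20, 12,  8]   # n = 84 questions
chi2, df = chi_square([pre, post])
# df = (2-1)*(6-1) = 5; p < 0.001 corresponds to chi2 > 20.52 at df = 5
```

A library routine such as `scipy.stats.chi2_contingency` would give the same statistic; the hand-rolled version just makes the expected-count arithmetic explicit.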

Impacts of project or effort on students, fellow faculty, department or institution. If no time to have an impact, anticipated impacts: The Bloom’s Dichotomous Key is a new tool that allows raters to more quickly and accurately determine the cognitive level of exam questions. Using it, we were able to quantify changes in student learning even though no pre-reform learning assessments were available and exam questions had become more difficult (while student performance remained similar); other faculty may find it useful for similar purposes. The key also improves categorization consistency among multiple raters and is easier and faster to use than more traditional rubrics, so it may be of special interest as a training tool for those who are new to Bloom’s taxonomy.

Describe any unexpected challenges you encountered and your methods for dealing with them: We initially recruited three raters familiar with the course content to evaluate the Bloom’s level of course exams. We began by providing raters with an overview of Bloom’s taxonomy, associated terms, and sample questions in a rubric similar to the Blooming Biology Tool (BBT) (Crowe et al., 2008). Using this type of rubric, we had raters practice categorizing 26 sample neurophysiology questions. We observed that the mean average deviation score (the degree to which a rater deviates from the average rating) was 0.7 with a standard deviation of 0.3. Furthermore, although at least two raters agreed on a categorization for 88% of the questions, the percentage of questions for which all three raters agreed was low, only 19%. Discussions with raters suggested discrepancies in the reasoning processes they used to assign ratings to the questions. To better discern how raters were using the rubric to make decisions, we performed think-aloud interviews in which each rater verbalized their thought processes as they used the rubric to categorize the sample exam questions. Based on these observations, and to address some of these issues, we identified the common elements of what students need to do in order to answer a question (e.g., interpret data). We then used this information to generate the Bloom’s Dichotomous Key (BDK). The BDK is organized like a flow chart, so raters answer a series of ‘yes-or-no’ questions that guide them through common elements of questions and ultimately to a Bloom’s level for the question being categorized. Compared with the more conventional rubric, raters were more consistent when using the BDK: the mean average deviation score dropped to 0.48, and the percentage of questions for which all three raters agreed doubled.
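The flow-chart structure described above is a binary decision tree. The sketch below illustrates that structure with made-up branch questions (the actual BDK wording is in the forthcoming publication, not reproduced here), along with one plausible reading of the mean-average-deviation metric:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Node:
    """One yes/no question in a dichotomous key; each branch is either
    another Node or a leaf string naming a Bloom's category."""
    question: str
    yes: Union["Node", str]
    no: Union["Node", str]

# Hypothetical fragment of a key -- illustrative only, not the real BDK.
key = Node(
    "Can the question be answered from memorized facts alone?",
    yes="Knowledge",
    no=Node(
        "Must students interpret data or apply a concept to a new scenario?",
        yes=Node(
            "Does answering require combining concepts or judging alternatives?",
            yes="Synthesis/Evaluation",
            no="Application/Analysis",
        ),
        no="Comprehension",
    ),
)

def classify(node, answer):
    """Walk the key; `answer` maps each question string to True (yes) or
    False (no), and the walk ends at a leaf naming a Bloom's category."""
    while isinstance(node, Node):
        node = node.yes if answer(node.question) else node.no
    return node

def mean_average_deviation(ratings):
    """One interpretation of the abstract's metric: for each rater, average
    the absolute deviation of their numeric rating (1-6) from the
    per-question mean across raters, then average over raters.
    `ratings[rater][question]` -> numeric Bloom's level."""
    n_q = len(ratings[0])
    q_means = [sum(r[j] for r in ratings) / len(ratings) for j in range(n_q)]
    per_rater = [sum(abs(r[j] - q_means[j]) for j in range(n_q)) / n_q
                 for r in ratings]
    return sum(per_rater) / len(per_rater)
```

Forcing every rater down the same sequence of salient yes/no decisions, rather than letting each rater weigh a multi-column rubric differently, is what drives the consistency gain reported above.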

Describe your completed dissemination activities and your plans for continuing dissemination: The Bloom’s Dichotomous Key has been presented and distributed at a Human Anatomy and Physiology Society workshop, and is currently being submitted for publication.

Acknowledgements: We thank Francoise Benay-Bentley, Dr. Teresa Foley, Jeffrey Gould, Dr. Dale Mood, and Dr. Carl Wieman for their assistance with this project. Financial support was provided by the President’s Teaching and Learning Collaborative of the University of Colorado, and the University of Colorado Science Education Initiative. The analysis was conducted under the Institutional Review Board protocol 0108.9 (exempt status).