Brookhart, S. (2010) How to Assess Higher-Order Thinking Skills in Your Classroom. Alexandria, VA: ASCD.
What is higher-order thinking?
–>Includes transfer, critical thinking, and problem solving
Transfer–requires that students not only remember what they have learned, but also be able to make sense of and use it. (p. 3, quoting Anderson & Krathwohl, 2001)
Transfer is contrasted with recall. It implies making novel (to the learner) use of knowledge and skills.
Critical Thinking–“is reasonable, reflective thinking that is focused on deciding what to believe or do.” (p. 4, quoting Norris & Ennis, 1989)
Requires the student to make a good decision/judgment, or to produce a well-thought-out reaction/criticism.
Problem Solving–the student has a problem but does not automatically know how to solve it (i.e., does not recognize the path or solution s/he needs); must employ higher-order thinking skills to figure out the solution (p. 4, quoting Nitko & Brookhart, 2007)
It’s not a problem if you know the solution right off; not having a memorized or readily apparent answer is implicit in the term. Problems can be closed-ended or open-ended. We want to equip students to recognize problems in the world around them and to be able to work toward solutions. Interesting to note the contrast between problems set for students and problems that students set for themselves.
Chapter 1: General Principles of Assessment
–Specify exactly what you want to measure–not just what the student needs to know, but what the student needs to be able to DO with the knowledge. Describe your goals TO THE STUDENTS simply and clearly enough that THEY can understand what your goal for them is. Give examples of similar types of problems and model how to solve them.
–Design an activity/test that requires the student to do the thing you’re measuring. Make sure there’s a clear match between what you’re asking students to do in the test/assessment and what you wanted them to be able to do. The author provides an example of how to make an assessment blueprint based on Bloom’s cognitive taxonomy; it is also helpful for figuring out how much weight to give to different parts of the assessment, and for designing rubrics.
–Figure out how you will know how well students did (or did not) do the thing you’re measuring.
Other key things:
–Use material as a stimulus–give them something to think about/react to. A question asked in a vacuum is more likely to test what students REMEMBER (rather than how well they can think/problem solve). Ask students to support their choices or arguments.
–Use material that’s new to the students; this supports thinking rather than merely recall. Preview the task for students, but not the content–that is, do similar work with them, but don’t cover the exact material that’s going to be on the test.
–Think about & control for level of difficulty and level of thinking SEPARATELY. You can have easy questions that require higher level thinking, and vice versa.
–To figure out what sort of thinking students will have to do to respond to a question, ask yourself “how would I have to think in order to do this?” and “what would I have to think about?” This helps identify both the thinking skills and the content knowledge needed.
–Set criteria for student achievement, and measure progress against those criteria. Assessment can take the form of comments (formative) or scoring (summative). Eliciting info from students about their reasoning process can tell you a lot about their thinking skills. If you’re doing summative assessment, be careful–you can end up requiring higher-order thinking, but basing your grade on content knowledge rather than the higher-order thinking. The section on rubrics gives some useful examples of criteria used in different disciplines.
Cognitive taxonomies–classify learning objectives by level of complexity
Bloom (1956 & 2001)
1. Knowledge–recognize/recall facts & information
2. Comprehension–understand (be able to retell in your own words)
3. Application–use info to solve new but similar problems that usually have only 1 correct answer
4. Analysis–break down information and re-synthesize it; can have more than one right answer
5. Synthesis–combine elements into a new whole or structure
6. Evaluation–judge how useful/appropriate something is for various purposes
(the updated version reverses 5 & 6: Evaluate comes before Create)
1. Remember–recognize/recall facts & concepts
2. Understand–basic comprehension (includes interpreting, exemplifying, classifying, summarizing, inferring, comparing, explaining)
3. Apply–use a procedure to find a solution
4. Analyze–break down information, look at how the parts are related to each other & the whole (includes differentiating, organizing, attributing)
Analysis assessment focused on questions or main ideas–the main idea can’t just BE there; students need to make inferences for analysis to be involved in the thinking process. Ask them how they came up with their answer, too.
To assess analysis of argument, ask students to identify the author’s arguments in favor, in opposition, the logical structure, assumptions underlying the argument, if anything is irrelevant…
Important note: good to keep in mind–> be sure to assess THINKING, not writing; the author provides a good example of this pitfall with two essays that show the same level of analytical skill, but one is much better written than the other. Target feedback to the real issue–is it the writing or the thinking that needs work?
5. Evaluate–how useful is a material/method for what I want to do, and in light of the criteria given?
This means REASONED, critical evaluation; needs a thesis and supporting evidence/logic. Give students a text and ask them to evaluate its appropriateness according to a rubric or set of criteria.
6. Create–bringing together pieces of different things to create something new (includes generating, planning, and producing)
Marzano, Pickering, et al. (1993): Declarative knowledge, procedural knowledge, complex thinking, information processing, effective communication, cooperation, habits of mind
Marzano and Kendall (2007): Knowledge (information, mental procedures, psychomotor procedures) and types of thinking (retrieval, comprehension, analysis, knowledge utilization, metacognition, self-system thinking)
Feedback: Students need to understand what we’re telling them about how they thought (used what they know)–how they did, what they could do differently… Feedback on intermediate stages of work is important, too. The question isn’t “did I do it right?” but “did I reason/analyze/select evidence/explain myself soundly and clearly? do my procedures and thought processes hold up?” Spend less time and energy on summative assessment and more on formative/intermediate!
Self-assessment: Self-assessment is a key tool–students can ask themselves the same questions that the teacher will ask about their work, and assess their own learning. Bonus: self-assessment requires higher order thinking. It’s a skill that needs to be developed, and students need to be able to recognize the desired characteristics in their own work. Some will get it; others need to be taught more explicitly.
Logic and reasoning skills are a sine qua non of higher-order thinking.
Deduction: reasoning from principles to concrete examples (instances). Requires the ability to identify premises and assumptions (explicit or implicit).
Induction: reasoning from an instance to a principle. Includes reasoning from data & reasoning by analogy.
This chapter contains sample rubrics for evaluating the reasoning/logic in written work.
Discusses “good judgment”–what it is, why students need to develop it, how to assess it. This includes evaluating the credibility of sources, identifying implicit assumptions, and identifying rhetorical and persuasive strategies. A sample rubric is provided.
Problem-solving. Problems can be conceptualized as goals to be reached; they don’t reach the level of ‘problem’ if the student/person can automatically deal with something without conscious thought. Bransford & Stein (1984) suggest 5 subskills involved in problem-solving:
1. Identify the problem
2. Define and represent the problem (includes an important point: the ability to pinpoint relevant and to disregard irrelevant information)
3. Explore possible strategies
4. Act on the strategies
5. Look back and evaluate the effects
Problems can be structured vs. unstructured (on a continuum), or goal-free (i.e., there isn’t just one required answer).
Creativity/Creative Thinking. We can conceptualize this as being able to put things together in a way that will give someone else an “aha!” moment–we have created new insight. Some theorists conceive of creativity and the critical thinking that comes after it (do I like this? did I do it well?) as separate activities; others conceive of them as one and the same–i.e., judging the usefulness/‘goodness’ of your creation is part of the creative act.
Norris & Ennis (1989) conceive of thinking in general as: reasonable or unreasonable, productive or nonproductive, reflective or nonreflective, and evaluative or nonevaluative.
Afterword: pp. 146-7 has a diagram with some specific strategies for assessing different kinds of thinking.
Reflections on Assessment of Higher-Order Thinking (mine):
In my interpreting classes, I have an explicit goal of increasing students’ critical thinking skills. Of course, it’s also important for them to be able to transfer their learning from the classroom to the field. That transfer goal tends to remain implicit–I don’t really talk about it with them–but it is clearly fundamental to preparing students for a practice profession. Problem-solving is also critical; what is situational management but a big, ever-changing problem? There’s also the problem of balancing cognitive, internal, and external demands. This ties into Dean & Pollard’s exploration of demand-control schemas in interpreting. In order to develop expertise in dialogue interpreting, one needs critical thinking and problem-solving skills, as well as transfer of learned information, to quickly and effectively encounter a problem (demand), identify possible solutions, choose a solution, and implement it–often while continuing the cognitive processes of listening & analysis/reformulation/production.
I assess students’ reflection and critical thinking in their journals, but am conscious of needing a framework for these assessments. I also need to make more effort to teach these skills systematically–this is difficult, as there’s already enough to teach–but adding in critical thinking is necessary, if not easy. I’m very aware that students with more academic and/or real-life experience tend to do better in class. Students without any interpreting experience and/or academic experience don’t achieve as much, and this needs to be addressed.
In reading the chapter on problem-solving, I immediately thought of Dean & Pollard’s work on demand-control schemas in interpreting. Interpreting can be thought of as one long problem-solving session; interpreting a triadic encounter introduces even more problems into the mix. One is solving linguistic problems as well as internal and external problems all at the same time. How can we help interpreting students to learn to engage in effective, on-the-spot problem solving? How do we help students to transition from the stage where everything is a problem to a higher level of skill acquisition/budding expertise in which fewer and fewer things rise to the level of ‘problem/demand’?