
Insights from Academic Research

Teacher Voices | April 9, 2019

MƒA knows that many teachers are interested in learning about current and foundational education research in their content area – and who better to provide that insight than a practicing Master Teacher? Over the past school year, Doug Shuman, two-time MƒA Master Teacher and mathematics teacher at Brooklyn Technical High School, has summarized key mathematics pedagogy research articles. We hope this has served as a valuable resource for both our teacher community and the greater education community.


Summer 2019

Article of Focus:

Kirschner, Paul A., John Sweller, and Richard E. Clark, “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching,” Educational Psychologist, Vol. 41, No. 2 (2006), pp. 75-86.

Summary:

Ultimately, how much instructional guidance is best will be determined by the goals of our education system. If the goal is to maximize short-term content mastery, the authors of this paper make a compelling argument that direct guidance is superior to minimally guided instruction: minimally guided learning consumes much of the brain’s resources and strains its architecture. But if another goal is for students to think analytically and critically, they will need to practice this skill, inefficiently assembling their knowledge while learning how to assemble their knowledge, and quite possibly sacrificing a precious point or two on the next standardized test. As educators, most of us see this as a small price to pay so that students may learn to teach themselves. Unfortunately, policy makers, and these authors, aren’t there yet.

In their 2006 article, Paul Kirschner, John Sweller, and Richard Clark took on the long-running debate between minimal instructional guidance and “strongly guided instruction” for novice learners. Despite what sometimes feels like a series of condescending swipes at inquiry learning, the authors ultimately make an important case for mediated instruction. Their argument against minimally guided learning is grounded in current theories of human cognitive architecture. Kirschner et al. relish pointing out, hyperbolically, that “minimally guided instruction appears to proceed with no reference to the characteristics of working memory, long-term memory, or the intricate relations between them.”

According to the authors, minimally guided instruction places a double burden on working memory. Novice learners first need to engage in “problem-based searching” for “problem-relevant information” before new content can be isolated and sorted through in working memory. Furthermore, they claim that during this process long-term memory cannot be accessed for learning. The result is that the “cognitive load” placed on working memory exceeds the brain’s ability to efficiently and effectively alter long-term memory.

The authors suggest two interventions to increase retention of new content in long-term memory. The first, “Worked Examples,” has students study already solved problems rather than struggle to solve unfamiliar ones. The second, “Process Worksheets,” evidently includes the graphic organizers that many of us have seen and used. Like most of you, I had an immediate and unpleasant reaction to these being the centerpiece interventions of the article.

The authors discount minimally guided instruction without addressing its benefits beyond the boring interventions they offer: namely, that students who learn in an inquiry environment also learn how to inquire, which is both useful and engaging. In the authors’ view, there is no role for inquiry because learning how to inquire gets in the way of learning new material. This conclusion stems from the authors’ definition of success: content mastery as displayed on standard assessments given shortly after instruction. In her seminal 1998 article, “Open and Closed Mathematics: Student Experiences and Understandings,” Jo Boaler presented an alternative, and more important, result: students who learned how to approach math problems with an inquiry mindset were more willing and better able to interrogate and solve unfamiliar problems after a significant period of non-exposure than students taught in a traditional setting.

In my Algebra I classroom, inquiry and direct instruction are just two tools in my pedagogical toolbox. Cognitive load theory warns us that the capacity of working memory is severely limited. Yet with investigative tasks that are appropriately scaffolded and interesting, students can construct their own knowledge. Day 1 of my quadratic functions unit is an investigation into what causes a parabola to form a ‘U’. Do students struggle? Absolutely. Do they become familiar with the grace and symmetry of the parabola without acquiring any skills that will directly improve their score on the New York State Algebra I Regents Exam? Guilty. A week later I teach them exactly how to complete the square to find the roots of a quadratic function. Do students efficiently learn an important procedure, ever-present on the Regents Exam? Absolutely. Does their previously acquired understanding of the parabola’s symmetry help them quickly grasp why the newly introduced symbol ‘±’ appears? You bet.
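To see concretely where that ‘±’ comes from, here is a quick sketch of the algebra, using a monic quadratic for simplicity (my notation, not the article’s). Completing the square on x² + bx + c = 0 gives

  (x + b/2)² = (b/2)² - c
  x + b/2 = ±√((b/2)² - c)
  x = -b/2 ± √((b/2)² - c)

when real roots exist. The parabola’s axis of symmetry is the line x = -b/2, and the ‘±’ encodes exactly the symmetry students discovered a week earlier: the two roots sit the same distance on either side of that axis.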

Focusing exclusively on the cognitive load that inquiry learning places on working memory ignores another important ingredient in successful learning: we learn more effectively when we give meaning to what we’re learning. Meaning can come from many places, not just the strict interpretation that meaning is equivalent to metacognition. I subscribe to the notion that allowing students to arrive at their own conclusions gives an emotional and psychological punch to their learning that is absent from direct instruction. As Boaler demonstrated, when students control the direction of their learning and learn how to approach unfamiliar problems, the learning is more “usable,” versus the more “inert” learning inherent in direct instruction.

To prepare our students in the weeks before the New York State Algebra I Regents Exam, most of us employ efficient techniques these authors would applaud. We review model solutions and rehearse procedures. But to their credit, the creators of the exam sometimes manage to write one or two constructed-response questions that are well-crafted, imaginative, and unfamiliar to students. It takes experience and practice to navigate this terrain. In the end, this paper is a warning that relying solely on pure inquiry-based learning is an inefficient way to learn content. But measuring educational efficiency solely by the amount of pure content learned for a single assessment is like measuring how much a person can eat in a lifetime by the amount of fish you’ve given them today.


Spring 2019

Article of Focus:

Burger, William F., and J. Michael Shaughnessy, “Characterizing the van Hiele Levels of Development in Geometry,” Journal for Research in Mathematics Education, Vol. 17, No. 1 (January 1986), pp. 31-48.

Summary:

We’ve all seen geometry students’ thinking transform over the course of the year. Often, however, students struggle with an idea or concept that seems obvious to us. It can be frustrating when they can’t see a relationship or generalize its properties, even when it feels like we’re giving away the answer. This phenomenon was extensively explored in the 1950s by the husband-and-wife pair of Dutch education researchers Dina van Hiele-Geldof and Pierre van Hiele. They developed a theory involving levels of thinking through which, they asserted, students must pass sequentially. These levels have come to be called the van Hiele Levels of geometric reasoning (an illustration follows the list):

  • Level 0 (Visualization). The student reasons about basic geometric concepts by means of visual considerations of the concept as a whole without explicit regard to properties of its components.
  • Level 1 (Analysis). The student reasons about geometric concepts by means of an informal analysis of component parts and attributes.
  • Level 2 (Abstraction). The student logically orders the properties of concepts, forms abstract definitions, and can distinguish between the necessity and sufficiency of a set of properties in determining a concept.
  • Level 3 (Deduction). The student reasons formally within the context of a mathematical system, complete with undefined terms, an underlying logical system, definitions, and theorems.
  • Level 4 (Rigor). The student can compare systems based on different axioms and can study various geometries in the absence of concrete models.
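As one illustration (my example, not the researchers’), consider how students at different levels might handle the claim “a square is a rectangle”:

  • Level 0: “No. A square looks like a square, and a rectangle looks like a rectangle.”
  • Level 1: “A square has four congruent sides and four right angles; a rectangle has four right angles.”
  • Level 2: “Yes. A square satisfies the definition of a rectangle, so every square is a rectangle, though not conversely.”
  • Level 3: The student can prove from definitions and theorems that properties of rectangles, such as congruent diagonals, must therefore hold for squares.
  • Level 4: The student can ask whether such figures exist at all in a geometry where quadrilateral angle sums exceed 360°, as on the sphere.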

The van Hieles’ research sparked interest but drew criticism over the difficulty in assessing appropriate student levels. In their 1986 paper, “Characterizing the van Hiele Levels of Development in Geometry,” William Burger and J. Michael Shaughnessy attempted to validate the van Hiele levels and develop a set of indicators that would allow teachers to assign students to levels.

They did this by creating a set of tasks and interview questions given to 45 Oregon students from Kindergarten through college. Researchers observed, interviewed and recorded students as they worked individually on tasks. The tasks were of different levels of cognitive demand, given the variety of the students’ ages. The tasks involved drawing shapes, identifying and defining shapes, sorting shapes, and engaging in both informal and formal reasoning about geometric shapes.

When the analysis was completed, the researchers were left with a catalog of indicators that can be used to assign a van Hiele level. For example, “inability to conceive of an infinite variety of types of shapes” is an indicator of Level 0 (Visualization). Burger and Shaughnessy found that the van Hiele levels are generally descriptive of student performance, but that they are less hierarchical and more dynamic than originally proposed: students may operate at different levels on different tasks, may alternate between two levels on a given task as their thinking develops, and may even regress to a lower level over time without practice.

Using these indicators does not require a commitment to yet another assessment system. It is overkill to assign a van Hiele level to every student on every task. Rather, a good read of the indicators coupled with thoughtful reflection on what difficulties your students may have with a specific topic may lead you to better focus your pedagogy. For example, on a given topic one might assume that students are able to explicitly reference definitions (Level 2, Abstraction) when, in fact, they are still assessing and comparing various properties (Level 1, Analysis). A thoughtful activity might begin with students listing various properties and then developing minimum sufficient conditions for classifying the shape, as in the sketch below.
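A concrete version of that activity (a hypothetical of my own, not from the paper): have students list everything they know about a rhombus (all sides congruent, opposite angles congruent, diagonals that perpendicularly bisect each other), then pare the list down. “All four sides congruent” alone is sufficient to guarantee a rhombus, while “perpendicular diagonals” alone is not, since a kite also has perpendicular diagonals. Deciding which properties can be discarded is precisely the move from Level 1 to Level 2.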

A striking feature of the list of indicators is how language-based the differences between the levels are. Most of the distinctions come down to differentiating between relevant and irrelevant properties, then developing abstract classifications that lead to verifying conjectures through formal deduction. Students can work through these phases by discussing concepts in a structured setting. Providing opportunities for students to verbalize and explain their thinking is a great way for them to solidify understanding and transition to the next level.

Burger and Shaughnessy make the observation that the levels may be less discrete than fluid, especially between Level 1 (Analysis) and Level 2 (Abstraction), an observation particularly relevant to high school geometry teachers. Most geometry texts and curricula merge these two levels into one, with more focus on analysis than abstraction, assuming that once a basic level of explication and classification has been achieved, students are able to move quickly to Level 3 (Deduction). More time spent on abstraction (e.g., Always, Sometimes, Never exercises) is an investment that pays for itself.
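For readers unfamiliar with the genre, a quick sample (my example, not from the paper): “The diagonals of a parallelogram are congruent. Always, sometimes, or never?” Arriving at “sometimes,” and then at the sharper claim that it happens exactly when the parallelogram is a rectangle, moves students from listing properties (Level 1) toward reasoning about which properties characterize a class of shapes (Level 2).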

My final thought on this paper is a word of warning: since you and your students are at different levels, many will not be able to follow your line of thinking, even when you explain it to them. Eager to advance, some students will resort to memorization, a sure sign that your pedagogy is misaligned with their van Hiele level. Ignoring this misalignment will only lead to mutual frustration.


Fall 2018

Article of Focus:

Jackson, Kara, Anne Garrison, Jonee Wilson, Lynsey Gibbons, and Emily Shahan, “Exploring Relationships Between Setting Up Complex Tasks and Opportunities to Learn in Concluding Whole-Class Discussions in Middle-Grades Mathematics Instruction,” Journal for Research in Mathematics Education, Vol. 44, No. 4 (July 2013), pp. 646-682.

Summary:

You spent hours developing today’s lesson, a unique problem steeped in great mathematics and brought to life by an interesting and funny context. You handed it out with a short introduction and set the kids to work, some in earnest, some not so much, and some with apparent confusion. With a few minutes left, and after seeing most groups come to some sort of conclusion, you brought the class back together to close the lesson in a crescendo of brilliant discourse. You asked your first question, one that deftly balanced procedural fluency with conceptual understanding. You waited for the hands to fly up and… crickets. Students looked blankly at you as though what you had just asked had nothing whatsoever to do with the number crunching they had just completed. Has this ever happened to you?

Debriefing after an activity is often the most difficult and least successful part of a lesson, especially one that is cognitively demanding, built in context, and requires more than procedural fluency. Following Mary Kay Stein’s groundbreaking research on implementing cognitively demanding tasks in the middle school math classroom, Kara Jackson, Anne Garrison, Jonee Wilson, Lynsey Gibbons, and Emily Shahan take a close look at the whole-class discussion phase in their article “Exploring Relationships Between Setting Up Complex Tasks and Opportunities to Learn in Concluding Whole-Class Discussions in Middle-Grades Mathematics Instruction,” published in the Journal for Research in Mathematics Education in 2013.

This often-cited article rigorously investigates the links between how a task is set up before students are set free to work and the quality of the ensuing whole-class discussion. In particular, the authors seek to identify high-leverage practices that teachers can develop that lead to increased student engagement in the task as well as increased learning from the concluding discussion. The authors adopt Stein’s four aspects of high-quality task setup, illustrated with a hypothetical example after the list:

  1. Key contextual features of the task scenario are explicitly discussed.
  2. Key mathematical ideas and relationships are explicitly discussed.
  3. Common language is developed to describe contextual features, mathematical ideas and relationships.
  4. The cognitive demand of the task is maintained over the course of the setup.
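To make these four aspects concrete, imagine (a hypothetical of mine, not one of the study’s tasks) a task comparing two cell-phone plans, one with a high flat fee and a low per-minute rate, the other the reverse. A strong setup would unpack the scenario (what a flat fee and a per-minute rate mean), surface the key mathematical relationship (cost as a linear function of minutes), build shared vocabulary (“rate,” “flat fee,” “break-even point”), and do all of this without revealing which plan wins or how to set up the comparison, thereby preserving the task’s cognitive demand.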

By meticulously coding videos of 460 lessons implemented by 165 different teachers over two school years (2009-2011), the authors sought to establish a correlational link between the quality of the different aspects of task setup and the quality of the ensuing whole-class discussion. The measures of the whole-class discussion were grouped into three categories: its academic rigor as maintained by both the teacher and the students, the degree to which students’ responses linked and built upon one another, and the degree to which students supported their contributions with conceptual evidence.

The authors found a positive relationship between the quality of the introduction of mathematical relationships in the setup and the quality of the whole-class discussion. In particular, they found the relationship grew stronger when the initial discussion was formally orchestrated rather than ad hoc. They also found that students made more connections to each other’s ideas and provided more conceptual evidence in support of their ideas when the contextual features of the scenario were clearly established and discussed during setup. Finally, they found that teachers paid more and better attention to setting up the mathematical features of the problem than the contextual ones; when they paid quality attention to both, the ensuing discussions were of higher quality. A common thread running through all of these results was broad and active student participation in the setup, partly to gauge student understanding, but more importantly so students could develop a common language to discuss the mathematical and contextual features of the task.

How can you use task setup to produce a better closing discussion?

  1. Make time for task setup. Don’t assume that reading the instructions will be sufficient.
  2. Always carefully plan how you will introduce a task. Think through both the mathematical and contextual features.
  3. Orchestrate discussion about the mathematical and contextual features during setup. Let students come to a mutual understanding about what math they are about to use and in what context.
  4. Pose some big, open-ended questions during the setup and leave them up during the task. Let students know that these are the big ideas you’ll be circling back to in the closing.
  5. Let students discuss big ideas at their tables before bringing the class together, so they can voice their ideas in a small group before sharing them with the whole class.

Doug Shuman teaches Algebra and AP Statistics at Brooklyn Technical High School and the Math Methods sequence at Hunter College. He is in his second MƒA Master Teacher Fellowship. At MƒA, he facilitates courses on modeling in Algebra I and AP Statistics and is a Fund for Teachers grant recipient.