How should schools be teaching math?
Traditional teacher-directed instruction is associated with higher math scores.
About seven years ago, some friends with high-school-age children urged me to attend a meeting to discuss our district’s math curriculum, College Preparatory Math (CPM). Until then I hadn’t given math instruction much thought. But what my friends told me was worrying: the CPM curriculum, built on the idea of “discovery” or “inquiry” learning, minimized explicit instruction by the teacher, to the degree that teachers often refused to answer students’ questions. This led to frustration among students who had previously enjoyed math. Most work was done in groups of four students, up to and including exams, with predictable reactions from the students who felt they were carrying the group’s burden. And worst, our district’s proficiency rates on state exams were very low: the majority of high schools scored in the lowest proficiency category, which represents “minimal” knowledge and mastery of the skills needed to progress to college.
That experience led me to run for school board, to push for new math (and reading) curricula and to otherwise try, with mixed success, to increase the rigor of K-12 instruction.
But, with my researcher’s hat on, I also became interested in what the data could tell us about how mathematics instruction in the classroom affects math proficiency. That interest led to my first (real) study in education.
My then-AEI colleague John Mantus and I analyzed data from the 2012 PISA (Programme for International Student Assessment). The PISA is a set of math, science and reading exams conducted across countries by the Organisation for Economic Co-operation and Development (OECD). When you hear comparisons of U.S. students to kids in other countries, it’s often PISA results that are being referenced.
We looked to the 2012 PISA because, in addition to students’ test scores and demographic backgrounds, it included supplemental questions regarding how math was taught in their classrooms. We recognized that student backgrounds such as income and family structure were going to be important determinants of kids’ math scores. But we wanted to know whether and how much math instruction mattered.
The 2012 PISA included questions regarding a number of classroom instructional practices. For all of these teaching practices, students were asked whether the practice was used in every lesson, most lessons, some lessons, or never or hardly ever. The teaching practices we examined were placed into three broad categories, based on the OECD’s own designations.
· Teacher-directed instruction
The teacher sets clear goals for our learning.
The teacher asks me or my classmates to present our thinking or reasoning at some length.
The teacher asks questions to check whether we have understood what was taught.
At the beginning of a lesson, the teacher presents a short summary of the previous lesson.
The teacher tells us what we have to learn.
· Student-oriented instruction
The teacher gives different work to classmates who have difficulties learning and/or to those who can advance faster.
The teacher assigns projects that require at least one week to complete.
The teacher has us work in small groups to come up with joint solutions to a problem or task.
The teacher asks us to help plan classroom activities or topics.
· Formative assessment
The teacher tells me about how well I am doing in my mathematics class.
The teacher gives me feedback on my strengths and weaknesses in mathematics.
The teacher tells us what is expected of us when we get a test, quiz or assignment.
The teacher tells me what I need to do to become better in mathematics.
We included formative assessment because other research has shown it to be important to math performance.
Just as an example, in the United States 49% of students reported that group work was used in “every” or “most” math classes. In Korea, which at the time was the highest-scoring OECD country, only 14% said the same.
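The tally behind figures like these is straightforward; a minimal sketch is below. The response list is invented for illustration — real PISA microdata are per-student questionnaire records, but they use the same four response categories.

```python
# Compute the share of students reporting a practice in "every" or
# "most" lessons. The responses here are invented for illustration.
from collections import Counter

responses = [
    "every lesson", "most lessons", "some lessons", "never or hardly ever",
    "most lessons", "some lessons", "every lesson", "some lessons",
]

counts = Counter(responses)
share_frequent = (counts["every lesson"] + counts["most lessons"]) / len(responses)
print(f"Share reporting the practice in every/most lessons: {share_frequent:.0%}")
```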
We analyzed the effects of specific instructional practices separately. However, we also showed that, while teacher-directed and student-oriented teaching practices could in theory be used on an ad hoc basis, they tended to be clustered together. That is, they’re reflective of a background instructional philosophy. We analyzed things in a variety of ways, but I can broadly summarize the results here (taken from Table 7 of the study).
Among teacher-directed instructional practices, Checks Understanding, Encourages Thinking and Reasoning, and Informs about Learning Goals were associated with higher PISA math scores, in that order of importance. Sets Clear Goals had no statistically significant effect, while Summarizes Previous Lessons was associated with lower math scores. This latter effect, which is counterintuitive, could be because less-skilled students required more frequent review of previous material.
Among student-oriented teaching practices, Students Plan Classroom Activities, Teacher Assigns Complex Projects, Differentiates Between Students When Giving Tasks and Has Students Work in Small Groups were all associated with lower PISA test scores, in that order of negative effect (i.e., allowing students to help plan classroom activities was most strongly associated with lower scores). The negative effect from classroom differentiation was somewhat counterintuitive, although perhaps the increased workload associated with differentiation reduced time available for more productive practices.
It should be remembered that the results above included controls for socioeconomic status, family structure, use of the test language at home and immigrant status, along with controls for all other teaching practices.
Since teacher-directed and student-oriented teaching practices tend to be used together, we also used factor analysis to create a composite measure of each broad teaching style. This may be helpful for teachers and schools thinking about the general philosophy they should bring to the classroom.
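The idea of collapsing several correlated survey items into one composite index can be sketched as follows. For simplicity this uses the first principal component, a close cousin of one-factor analysis rather than the study’s actual estimation, and the data are simulated: a latent “teaching style” drives four noisy item responses.

```python
# Sketch: reduce four correlated survey items to one composite index.
# The first principal component stands in for a one-factor model here;
# all data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_students = 200

# A latent teaching style drives four correlated item responses
latent = rng.normal(size=n_students)
items = latent[:, None] + 0.5 * rng.normal(size=(n_students, 4))

# Standardize the items, then take the first principal component as the index
z = (items - items.mean(axis=0)) / items.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
index = z @ vt[0]

# The composite index should track the latent style closely
corr = np.corrcoef(index, latent)[0, 1]
print(f"correlation of composite index with latent style: {abs(corr):.2f}")
```

Because the four items share one underlying driver, a single index captures most of their common variation, which is exactly why a composite measure is a reasonable summary of a school’s overall instructional philosophy.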
Given the results above, it should not be surprising that we found that, on average, teacher-directed instruction is associated with higher PISA math exam scores than student-oriented instruction. We found that a one standard deviation increase in teacher-directed instruction corresponds with a 0.19 standard deviation increase in math scores, while a one standard deviation increase in the use of student-oriented mathematics instruction correlates with a decline in PISA exam scores of 0.27 standard deviations.
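For readers unfamiliar with standardized effects, the sketch below shows what a coefficient like 0.19 means: put both the instruction index and the test score on a mean-zero, SD-one scale, and the regression slope is then in SDs of score per SD of instruction. The numbers are simulated and purely illustrative, not the study’s data or model.

```python
# Sketch: a standardized regression coefficient. Both variables are
# rescaled to mean 0, SD 1, so the slope is in SD-per-SD units.
# All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 500
teacher_directed = rng.normal(size=n)  # instruction index, arbitrary units
# PISA-like scores with a modest true effect of instruction
score = 480 + 19 * teacher_directed + rng.normal(scale=95, size=n)

# Standardize both variables
x = (teacher_directed - teacher_directed.mean()) / teacher_directed.std()
y = (score - score.mean()) / score.std()

beta_std = np.polyfit(x, y, 1)[0]
print(f"standardized effect: {beta_std:.2f} SD of score per SD of instruction")
```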
To make these effects more understandable, we compared them to how other, better-understood factors appear to affect math scores. For instance, the effects of teacher-directed and student-oriented instruction are comparable in size to the effects of differences in a student’s socioeconomic status, or of the student living with both parents rather than a single parent, which most analysts believe have a significant influence on school achievement. The takeaway is that differences in math instruction are associated with differences in math proficiency scores that aren’t just statistically significant (although they are) but also big enough to matter.
To frame things in a different way, we compared teaching practices and exam scores across OECD countries. I can preview the results in this way: the ten countries with the lowest scores for student-oriented mathematics instruction, followed by their ranks (out of 62) in the 2012 PISA, were: Korea (4), Japan (5), Finland (10), Luxembourg (27), Hong Kong (2), Switzerland (7), Netherlands (8), Estonia (9), Poland (11), and Belgium (13). The ten countries with the highest scores for student-oriented mathematics instruction, followed by their test score ranks, were: Peru (61), Russia (32), Malaysia (49), Colombia (58), United Arab Emirates (45), Thailand (47), Bulgaria (44), Jordan (57), Qatar (58) and Kazakhstan (46).
Our results help resolve the so-called “Asian enigma,” in which Asian students score well on international exams despite, it’s claimed, Asian schools’ reliance on traditional instruction, including teacher-directed instruction, memorization and so on. What our results show is, first, that within countries, students who receive teacher-directed instruction exhibit higher math proficiency and, second, that Asian countries appear to use more teacher-directed and less student-oriented instruction than lower-scoring countries.
Finally, we also replicated a previous OECD analysis that looked at how instructional approaches affected performance on individual PISA exam questions. The goal here was to look at the idea, found in other work, that teacher-directed instruction is more effective for novice or lower-performing students while student-oriented instruction is more appropriate for more experienced or higher-performing students.
It is true, as the OECD found, that the advantages of teacher-directed instruction are smaller for the most difficult PISA questions. (See Panel B below.) Based on this, the OECD concluded that a mix of teacher-directed and student-oriented instruction makes sense. (The OECD is perceived to have a strong preference for student-oriented instruction.)
However, the OECD did not perform the analysis shown in Panel A, which relates the effects of student-oriented instruction to students’ odds of correctly answering a given PISA exam question. Panel A shows that the effects of student-oriented instruction don’t differ very much by question difficulty.
But that’s not the result that matters. Comparing Panel B to Panel A, it is clear that, across the spectrum of question difficulty, students receiving teacher-directed instruction are more likely to correctly answer PISA questions than students receiving student-oriented instruction. Yes, the advantage of teacher-directed instruction is smaller for more difficult math questions, but that advantage always exists and it’s always meaningful in size. For instance, even for the most difficult PISA exam questions, students receiving teacher-directed instruction were roughly 20% more likely to answer the question correctly than those receiving student-oriented instruction.
These results simplify the question of how math should be taught, since teachers and schools need not differentiate instructional styles by the mathematical aptitude of the students in a given classroom. Across the spectrum of PISA question difficulty, it appears that teacher-directed math instruction dominates student-oriented instruction. As the Michaela School’s Olivia Dyer puts it, “Just tell them.”
The larger question for me is whether research such as this can redirect K-12 math education toward more teacher-directed instruction. Based on my own experience as a school board member, that may be tricky. Following an unsuccessful experience with the inquiry-oriented College Preparatory Math, our district’s teacher working group opted to move to Illustrative Math, another inquiry-oriented curriculum that, some complain, is long on group work and open-ended problems and shorter on explicit instruction and student practice. Our school board eventually selected a different curriculum, Reveal Math, which isn’t a model of explicit instruction but is, I hope, superior to the alternative. Nevertheless, my takeaway is that even years of experience using a student-centered math curriculum with little success doesn’t imply that schools will sour on the philosophy.
In this way, the nascent “science of math” movement is very similar to the “science of reading” movement, just a decade or so behind. In both cases, the research appears to favor explicit instruction followed by student practice. But in both cases, most ed schools, popular education “experts,” textbook publishers and, yes, teachers dislike explicit approaches and favor more constructivist teaching, whether it’s whole language in reading or inquiry learning in math. Perhaps a large-scale randomized controlled trial would help clear up how best to teach math. I can’t imagine that’s more expensive for society than getting the answer wrong over many years.