The new PISA study is prima facie a massive blow to educational reform ideology and a strong vindication of traditionalist advocates of direct instruction. The data (from this report, p. 228) seem to show that teacher-directed instruction is more strongly associated with success than even the school’s socio-economic profile, while enquiry-based learning is a surer way to fail than skipping class:
If, like me, you find the face-value implications of this evidence rather depressing, you may want to look for some way of explaining them away. That’s what I tried to do, and here is what I found.
First we may ask ourselves: do the test scores really measure quality of learning? Maybe direct instruction is a good way of “teaching to the test,” leading to good scores on artificial standardised tests that do not measure what we actually aim for in education. It seems to me that this is not a convincing answer in this case. Judging by the published samples of actual test items, the questions seem sound and certainly like the kind of thing one would want students to know. They don’t seem at all focussed on “cram”-type knowledge, like standardised vocabulary and rote calculations. If anything, they seem like exactly the kinds of questions advocates of enquiry-based learning would prefer. So we got beaten at our own game, as it were.
Next we can ask ourselves: how does PISA define and measure teacher-directed and enquiry-based learning anyway? Again, this seems to have been done in a very reasonable way that leaves little room for invalidating the findings on methodological grounds.
To measure how teacher-directed a class was, “PISA asked students how frequently … the following events happen in their science lessons: The teacher explains scientific ideas; A whole class discussion takes place with the teacher; The teacher discusses our questions; and The teacher demonstrates an idea.” (63)
Enquiry-based was measured instead by the following statements: “Students are given opportunities to explain their ideas”; “Students spend time in the laboratory doing practical experiments”; “Students are required to argue about science questions”; “Students are asked to draw conclusions from an experiment they have conducted”; “The teacher explains how a science idea can be applied to a number of different phenomena”; “Students are allowed to design their own experiments”; “There is a class debate about investigations”; “The teacher clearly explains the relevance of science concepts to our lives”; and “Students are asked to do an investigation to test ideas.” (69)
This seems quite in order. In fact, once the analysis is split into these sub-statements, the case for direct instruction is even stronger than the above table suggests. For it is the most direct-instruction-y part of each group that works best, and the most enquiry-y part that is least successful, as we see in the tables on pages 65 and 73.
We may also ask: what is “adaptive instruction”? This sounds like something reformers would approve of, and it is highly correlated with success. However, once we look into the details it is not so uplifting: in a nutshell, it seems “adaptive instruction” may in practice be more like the traditionalist tricks of “teaching to the test” and “dumbing it down,” for the statements PISA used for this measure were: “The teacher adapts the lesson to my class’s needs and knowledge”; “The teacher provides individual help when a student has difficulties understanding a topic or task”; and “The teacher changes the structure of the lesson on a topic that most students find difficult to understand”. (66)
Finally, we must ask ourselves whether the results highlighted in the table are somehow spurious correlations, or at least not causal. After all, we see in the table that “after-school study time” is strongly associated with negative scores. Surely this is a matter of correlation rather than causation: students who spend a lot of time studying outside of class are weaker on average, but this is not why they are weaker, one would hope. Could something like this hold for enquiry-based learning too? That would require weaker students to be exposed to enquiry-based learning to a greater extent. Is there any evidence for this? I’m not sure. The study includes tables on which countries do the most direct instruction (64) and which do the most enquiry-based learning (72). It turns out that the leaders in enquiry-based learning are not the self-proclaimed avant-garde of the rich West but rather the Dominican Republic, Peru, Jordan, Lebanon, Algeria, etc. The correlations control for other variables, so this does not appear to be the whole explanation, unless these controls operate only at the within-country level, which seems a possibility (e.g., socio-economic profile seems to be defined relative to the country, not relative to the world). So it seems possible, though I can’t tell how likely, that the disastrous results for enquiry-based learning are due to between-country differences rather than within-country differences. If so, that would undermine the face-value implications of the data, since within-country differences are much more relevant for pedagogical decision making (they better measure what happens when the two teaching methods are applied to comparable students).
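The between-versus-within worry here is essentially Simpson’s paradox. A toy simulation (with made-up numbers of my own, not PISA’s) shows how enquiry-based teaching could help within every single country and still look disastrous in pooled data, simply because lower-scoring countries use it more:

```python
import random

random.seed(0)

# Made-up illustration, not PISA data: (baseline score, typical enquiry level).
# Lower-scoring countries happen to use more enquiry-based teaching.
countries = [(550, 0.2), (500, 0.5), (450, 0.8)]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

within_rs, pooled_e, pooled_s = [], [], []
for baseline, mean_enquiry in countries:
    es, ss = [], []
    for _ in range(200):  # 200 simulated students per country
        e = min(1.0, max(0.0, random.gauss(mean_enquiry, 0.1)))
        s = baseline + 30 * e + random.gauss(0, 8)  # enquiry HELPS within a country
        es.append(e)
        ss.append(s)
    within_rs.append(correlation(es, ss))
    pooled_e += es
    pooled_s += ss

print("within-country r:", [round(r, 2) for r in within_rs])   # all positive
print("pooled r:", round(correlation(pooled_e, pooled_s), 2))  # strongly negative
```

Within each country the correlation is positive (the simulated 30-point effect is real), yet the pooled correlation comes out strongly negative because country baselines dominate. If PISA’s controls operate only at the between-country level relative to national averages, the pooled results could be distorted in just this way.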
More confusingly, these tables actually seem to show that the dichotomy of direct instruction versus enquiry-based learning is, on this data, a false one. Many countries do either lots of both or very little of each, which makes no sense if we picture it as an either-or situation. Korea, for instance, is dead last by a wide margin in the use of teacher-directed instruction, yet it is somehow also second to last on enquiry-based learning. What on earth are the Koreans doing in their classes, then, if it’s neither one nor the other? Many other countries exhibit similarly paradoxical results. This suggests that the data is poorly suited for judging one teaching style against the other. For that purpose, it would have been better to ask the students questions forcing them to pick a point on a single continuum, instead of asking about each style separately, in a non-exclusive way.
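The measurement problem can be made concrete. Suppose (with illustrative numbers of my own, not PISA’s) that students rate each practice separately on a 0–1 frequency scale: nothing then forces the two indices to trade off, so a country can land high on both or, like Korea, low on both. Only a forced either-or question would build the trade-off in by construction:

```python
def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical country-level indices (teacher-directed, enquiry),
# measured by SEPARATE, non-exclusive questions:
separate = {
    "lots of both": (0.9, 0.8),
    "mostly direct": (0.8, 0.2),
    "mostly enquiry": (0.2, 0.8),
    "little of either (Korea-like)": (0.1, 0.1),
}
direct = [d for d, e in separate.values()]
enquiry = [e for d, e in separate.values()]
print(round(correlation(direct, enquiry), 2))  # positive: no built-in trade-off

# A forced single-continuum question ("what share of class time is
# enquiry-based?") would make the two indices complements:
forced_enquiry = [0.1, 0.2, 0.5, 0.8]
forced_direct = [1 - e for e in forced_enquiry]
print(round(correlation(forced_direct, forced_enquiry), 2))  # -1.0 by construction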
So there are some grounds for casting doubt on the data, but by and large I think we have to admit that this PISA report is very damning evidence against the fashion of the day in educational reform ideology.