A famous paper often cited as “proof” that “research shows” that “lecture doesn’t work” is Deslauriers, Schelew, & Wieman, Improved Learning in a Large-Enrollment Physics Class, Science, 2011. The study compares two large sections of a physics course, one taught by traditional lecturing and one by interactive group work and the like, with the latter doing considerably better on a certain test.
As I have noted before, I find that a massive flaw of these kinds of studies is that they study only surface form, not substance. So also in this case: the study says absolutely nothing specific about what was actually taught in the traditional class. There are only some sweeping general assertions that it covered the same material. Likewise, we are told that there was a textbook, but not which book this was.
Thus, insofar as the traditional group did poorly, this could very plausibly be due to poor content by the lecturer or the textbook, or better content in the experimental group.
But educational “research” is blind to this possibility, as ever. Educational “research” is blind to content. It assumes that “traditional lecture” is a monolith. It assumes that, to understand how to improve teaching, it is irrelevant to look at the actual content of the classes: one need only look at the surface form in which that content was delivered and draw causal conclusions from this alone. Such is the madness of the present study and every other in its field.
This would perhaps be enough on its own to commit these kinds of studies to the flames, but if you need further reasons there are plenty. Let us sample a few from this particular study.
First of all, the experimental teaching was applied for only one week (a total of three hours of class time), which is obviously absurdly limited grounds for the far-ranging conclusions touted on the basis of this study. But in the crazy world of educational “research” a study of the effect of three hours of teaching is enough to get you published in Science, featured in the New York Times, etc., etc.
Then consider the actual test on which the experimental group did so much better. This was not an actual class test or final exam. Rather, it was a voluntary 20-minute test. “The students were encouraged by e-mail and in class to try their best on the test and were told that it would be good practice for the final exam, but their performance on the test did not affect their course grade.”
Obviously it is not hard to imagine that many students in the traditional class who understood the material well chose not to take the test when it was pitched in this fashion. And indeed only about 64% of the control group students actually took the test. Thus it is perfectly plausible that the scores of the students from the traditional class drastically misrepresent the learning outcomes of that group as a whole.
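The mechanism is easy to make concrete with a toy simulation. Everything here is hypothetical: the ability distribution, the dropout rule, and its strength are my assumptions, not figures from the study; the only anchor is a participation rate in the neighbourhood of the reported 64%. The point is merely that when willingness to sit a voluntary, ungraded test correlates even modestly with ability, the observed mean can sit well below the true class mean.

```python
import random

random.seed(0)

# Hypothetical illustration only: none of these numbers come from the
# study. Simulate a large control class with "true" ability scores.
true_scores = [random.gauss(70, 15) for _ in range(1000)]

def takes_test(score):
    # Assumed mechanism: stronger students, feeling no need for "practice
    # for the final exam", are somewhat less likely to sit a voluntary,
    # ungraded test. Baseline participation is set near 64%.
    p = max(0.0, min(1.0, 0.64 - 0.01 * (score - 70)))
    return random.random() < p

takers = [s for s in true_scores if takes_test(s)]

true_mean = sum(true_scores) / len(true_scores)
observed_mean = sum(takers) / len(takers)
participation = len(takers) / len(true_scores)

print(f"participation rate: {participation:.0%}")
print(f"true class mean:    {true_mean:.1f}")
print(f"observed test mean: {observed_mean:.1f}")
```

Under these assumed parameters the takers are a downward-skewed sample, so the observed mean understates the class's actual learning by several points, without any difference in teaching at all.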
Finally, this is a gem of a sentence: “To avoid student resistance, at the beginning of the first [experimental] class, several minutes were used to explain to students why the material was being taught this way and how research showed that this approach would increase their learning.” Imagine the same being done in a medical study: instead of simply testing each treatment to see which works better, the researchers explicitly propagandise to the participants in advance that they already know that the experimental treatment is proven to be better. It’s just madness. For “to avoid student resistance” one should read “to guarantee a massive placebo effect.”
Note, incidentally, the deterministic language in that quotation: “how research showed that this approach would increase their learning.” Somehow, in the minds of the researchers, this is construed as an objective fact. This is how educational “research” works: one massively biased study after another underwrites the results that the “research” community already religiously believed in from the outset, and then, citing each other, they can triumphantly claim that “research shows” that these biases and preconceived notions of theirs are objectively true as a matter of empirical fact.