Student learning: What has instruction got to do with it?

This review article from two Carnegie Mellon cognitive scientists surveys several decades of empirical research on one of the most contested questions in education: how much instructional guidance should teachers provide? The paper maps the terrain between two poles — discovery learning (minimal guidance, students construct knowledge themselves) and direct instruction (explicit guidance, worked examples, step-by-step instruction) — and examines what the evidence actually shows about when each works, for whom, and under what conditions.
The authors begin by acknowledging the genuine appeal of discovery learning. Constructivist arguments, rooted in Piaget, hold that understanding comes through active knowledge construction rather than passive reception, and there is some evidence that when students generate their own solutions, they remember them better and show stronger conceptual understanding. A handful of well-designed classroom studies — particularly in mathematics — show benefits when students invent their own procedures and discuss them with peers. The key caveat is that discovery learning tends to work only when combined with high levels of practice and sufficient acquisition time. Without that consolidation, students flounder, fail to identify the principles they were supposed to learn, and sometimes develop misconceptions that are hard to correct.
The weight of the empirical evidence, however, sits on the direct instruction side, and the strongest evidence concerns worked examples. When learners are given step-by-step problem solutions to study, they consistently outperform those who simply practice problem solving, particularly in the early phases of learning. The explanation draws on cognitive load theory: novice learners have limited working memory, and unguided problem solving consumes that capacity in inefficient search rather than in learning. Worked examples free up working memory by removing the search burden, allowing learners to focus on the underlying structure of the problem. This advantage, however, erodes as learners gain expertise — a phenomenon called the “expertise reversal effect.” For more experienced learners, worked examples become redundant or even counterproductive, and problem-solving practice produces better outcomes.
The paper explores several factors that modulate the effectiveness of instruction. Prior knowledge is decisive: the same instructional method can help novices and hinder experts. Self-explanation — prompting students to explain the steps in a worked example to themselves — consistently improves learning, particularly for transfer to novel problems. Comparing alternative solution methods enhances procedural flexibility, but only when students have sufficient prior knowledge to make meaningful analogies between the examples. Adding verbal explanations to worked examples has mixed effects: they help when well integrated into the example, but create a split-attention problem and additional cognitive load when presented as separate information sources that students must integrate themselves.
The final section reviews emerging approaches that attempt to combine the strengths of both traditions. Fading procedures — starting with fully worked examples and gradually withdrawing steps as learners gain competence — have shown strong results. The “invention followed by instruction” approach (sometimes called productive failure) has students attempt to generate solutions first, then receive direct instruction. This sequence appears to prepare learners to better understand and apply the subsequent instruction, even though the initial attempts typically fail. The authors conclude that learning in problem-solving domains is fundamentally example-based — verbal instruction and discovery activity both derive most of their power from helping students understand examples, whether those examples are provided externally or generated by the students themselves.