Beyond inquiry or direct instruction: Pressing issues for designing impactful science learning opportunities.

This paper is a rebuttal rather than a primary empirical study, which gives it a different structure and purpose from the others in the collection. It is written by a large, internationally assembled group of learning scientists as the latest entry in an extended exchange. To follow the chain: Zhang et al. (2022) claimed the evidence firmly supported direct instruction over inquiry learning; de Jong et al. (2023) rebutted that claim and called for combining inquiry-based and direct instruction; Sweller et al. (2024) doubled down on the direct-instruction position; and this paper is the final reply.
The opening move is to restate the authors’ core position clearly: the question of which single instructional method is superior is the wrong question. Both inquiry-based learning and direct instruction can be effective depending on topic, instructional goal, student characteristics, teacher capability, and the quality of instructional design. Real classrooms and well-designed curricula almost always involve a mixture of telling and discovering, and the productive research question is about how to orchestrate that mixture well, not about declaring a winner.
The paper then addresses three specific charges made by Sweller et al. The first charge was that inquiry-based learning lacks theoretical grounding, whereas direct instruction rests on the solid foundation of cognitive load theory (CLT). De Jong and colleagues reject this firmly. They trace inquiry-based learning through both cognitive and sociocultural traditions: generative learning theory, schema (re)construction, Vygotsky, Dewey, situated learning, and communities of practice. CLT, they argue, does not uniquely entail direct instruction; it simply requires that cognitive load stay within learners' capacity, a condition that can be met through well-scaffolded inquiry as well as through explicit instruction. A study of low-prior-knowledge students learning about electrical circuits via simulation is cited to show that successful inquiry learners managed their own cognitive load, choosing simpler starting points and pausing appropriately, without any direct instruction.
The second charge concerned evidence: Sweller et al. claimed de Jong et al. had cherry-picked studies favourable to inquiry and ignored randomised controlled trials supporting direct instruction. The rebuttal demonstrates that their 2023 review explicitly covered controlled studies, program-based studies, and correlational evidence (the same three categories used by Zhang et al.), and that across all three the weight of evidence favours guided inquiry over direct instruction for deep conceptual understanding. They cite several large-scale meta-analyses: Minner et al. (2010) found that 55% of relevant controlled studies showed significant advantages for inquiry, versus 2% showing advantages for less inquiry-focused instruction; Furtak et al. (2012) found a positive overall effect of inquiry methods across 37 experimental and quasi-experimental studies; and Alfieri et al. (2011), in a meta-analysis of 164 studies, found assisted inquiry more effective than explicit instruction. Two recent large randomised controlled trials of project-based learning programs in US secondary schools (Schneider et al., 2022; Krajcik et al., 2022), each involving more than 2,000 students, showed advantages of project-based conditions over traditional instruction of roughly 0.2 standard deviations on independently developed summative assessments. The authors also point out that studies cited by Zhang et al. to support direct instruction frequently compared it to unguided or minimally guided discovery, a condition that no contemporary inquiry advocate supports, making those comparisons uninformative about the actual debate.
The third charge concerns meta-analyses as a form of evidence. Sweller et al. had expressed general scepticism about meta-analyses. De Jong and colleagues acknowledge that meta-analytical findings do not translate automatically to complex classrooms, but argue that meta-analyses are still more robust than the selective citation of individual studies with no transparent inclusion criteria, which is what Zhang et al. (2022) appeared to do.
The paper ends by restating the productive agenda: rather than debating which method wins, the field should investigate under what conditions, for which learners, and for which learning goals different orchestrations of direct instruction and inquiry produce the best results. That, the authors argue, is what advances instructional design.