

Innovative Times Call for Innovative Measures

by Saro Mohammed on April 24, 2018

The randomized controlled trial (RCT) is the best tool education researchers have for understanding cause and effect, but there are times - like in blended learning settings - when this research design is undesirable, infeasible, or both. As I have previously noted, innovative instructional models like blended learning require the building of an evidence base that can inform practice. The good news is that there are approaches (research designs, methods, and techniques) that can address some of the most common limitations of RCTs. These emerging approaches should be, and increasingly are, used by researchers working with practitioners to build a meaningful evidence base. An overview of these approaches is provided here, and more details can be found in my introductory chapter in the second edition of the Handbook of Research on K-12 Online and Blended Learning.

Measurement in Emerging Learning Environments

The RCT is a rigorous but inflexible research design that allows us to minimize bias and maximize control over what is driving any changes in outcomes we observe in a study. One of the biggest limitations of this design for practitioners is the randomly assigned, untreated comparison group - in many cases, schools and districts are unwilling or unable to randomly assign students to receive or not receive an intervention, or to “withhold” the intervention from any group of students at all. Along the same lines, it is often difficult or impossible for schools and districts to hold instruction constant for an extended period of time for both groups of students while simultaneously ensuring that elements of the intervention do not “creep” into the comparison group, or vice versa.
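To make that design concrete, here is a minimal sketch in Python of the logic an RCT enables (all data and effect sizes are hypothetical, invented purely for illustration): because students are assigned to conditions at random, a simple difference in mean outcomes is an unbiased estimate of the intervention’s effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Randomly assign half the students to the intervention - the step that
# schools and districts are often unwilling or unable to take.
assign = rng.permutation(np.repeat([True, False], n // 2))

# Hypothetical post-test scores: treated students average a few points higher.
scores = np.where(assign,
                  rng.normal(74, 10, n),
                  rng.normal(70, 10, n))

# Randomization makes the raw difference in means a fair effect estimate.
effect = scores[assign].mean() - scores[~assign].mean()
print(f"Estimated effect of the intervention: {effect:.1f} points")
```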

The regression discontinuity design (RDD) is a long-established but underused design that eliminates the need for a randomized comparison group - replacing it with a treatment group comprising students who have a demonstrated need for the intervention and a comparison group of students who have demonstrated no such need. RDDs have limitations of their own, of course, but they are recognized by the What Works Clearinghouse as a design that meets its standards for a rigorous study of cause and effect. More detailed descriptions of all the techniques mentioned here, as well as examples of how many of them have been used in blended or personalized learning environments, are included in the Handbook of Research on K-12 Online and Blended Learning.
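As a rough illustration of the idea - using entirely hypothetical data and a made-up cutoff, not an example from the Handbook - a sharp RDD gives the intervention to students who score below a cutoff on a screening measure, then estimates the effect as the jump in outcomes right at that cutoff:

```python
import numpy as np

rng = np.random.default_rng(1)
screen = rng.uniform(0, 100, size=500)   # screening score (assignment variable)
treated = screen < 50                    # demonstrated need -> receives intervention

# Hypothetical outcomes: a baseline trend plus a 5-point boost for treated students.
outcome = 40 + 0.4 * screen + 5 * treated + rng.normal(0, 3, size=500)

# Fit a separate line on each side of the cutoff, within a bandwidth,
# and compare the two lines' predictions at the cutoff itself.
cutoff, bandwidth = 50, 15
left = treated & (screen >= cutoff - bandwidth)
right = ~treated & (screen <= cutoff + bandwidth)
line_left = np.polyfit(screen[left], outcome[left], 1)
line_right = np.polyfit(screen[right], outcome[right], 1)
effect = np.polyval(line_left, cutoff) - np.polyval(line_right, cutoff)
print(f"Estimated effect at the cutoff: {effect:.2f} points")
```

The intuition: students just below and just above the cutoff are nearly identical, so any discontinuity in outcomes at the cutoff is attributable to the intervention.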

Another limitation of RCTs centers on the research questions being answered. Practitioners are often most interested in answering very specific, very local questions about the problems of practice they face. They need these answers to be as straightforward as possible, maximally relevant to their own current students, and timely enough to inform instructional decisions. Expensive, lengthy, generalizability-focused RCTs do not meet these needs. Multiple approaches address different aspects of this mismatch between RCTs and problems of practice. For example, Bayesian analysis and rapid cycle evaluation both still require a randomized comparison group to generate knowledge about cause and effect, but Bayesian analysis provides a “plain English” answer about effects (“given my data, the probability that my intervention had a real effect is x”), while rapid cycle evaluation isolates the intervention to a single instructional strategy at a time, reducing the cost, time, and complexity of finding answers about cause and effect.
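One way to picture that “plain English” Bayesian answer is the sketch below. It uses hypothetical score data and a deliberately simple flat-prior normal model - my simplifying assumptions, not a method prescribed in this post - under which the posterior for the mean difference is approximately a t distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treated = rng.normal(75, 10, size=60)   # hypothetical post-test scores
control = rng.normal(72, 10, size=60)

# Under a normal model with a flat prior, the posterior for the mean
# difference is approximately t-distributed around the observed difference.
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) +
             control.var(ddof=1) / len(control))
df = len(treated) + len(control) - 2

# Posterior probability that the true effect is greater than zero.
prob_positive = 1 - stats.t.cdf(0, df=df, loc=diff, scale=se)
print(f"Given my data, the probability the intervention had a positive "
      f"effect is {prob_positive:.2f}")
```

Unlike a p-value, the number printed here is a direct statement about the intervention’s effect, which is what makes it readable for practitioners.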

Finally, RCTs characterize the “intervention” itself, as well as students’ potential learning, as generally homogeneous and consistent across contexts. The very notion of “fidelity” grounds the implementation of an intervention in faithfulness to a prescribed, or at least predetermined, model. Yet in settings like blended learning especially, instruction and students’ potential learning are flexible, rapidly changing, personalized, and unique. Improvement science and implementation research both use paradigms and research designs that can not only account for, but actually incorporate, the personalized nature of instructional practices within a blended environment. Approaches such as the Plan, Do, Study, Act (PDSA) cycle and the RE-AIM framework take into account where and how interventions are implemented, and allow the intervention itself to respond to students’ needs as the study is conducted.
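A schematic sketch of a PDSA loop follows - my own toy illustration, not a protocol from this post or the Handbook. The point it tries to capture is that, unlike a fixed-fidelity trial, the intervention itself is revised between short cycles in response to what the data show:

```python
def run_cycle(minutes_per_day):
    """Do: implement the current plan for a short cycle and return a
    hypothetical learning gain (a stand-in for real classroom data)."""
    return 0.1 * minutes_per_day

def pdsa(minutes_per_day=20, cycles=3, target_gain=3.0):
    log = []
    for i in range(1, cycles + 1):
        gain = run_cycle(minutes_per_day)        # Do
        met_target = gain >= target_gain         # Study: did it work?
        log.append(f"cycle {i}: {minutes_per_day} min/day -> gain {gain:.1f}")
        if not met_target:                       # Act: adapt the intervention,
            minutes_per_day += 10                # which becomes the next Plan
    return log

print("\n".join(pdsa()))
```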

Call to Action

We are making steady, if slow, progress in deepening our understanding of the effectiveness of blended learning as implemented in K-12 settings. In a 2013 meta-analysis, Means et al. found that just 7 of the 45 included studies of blended or online learning focused exclusively on K-12 learners. Compare that to 2014, when the first edition of the Handbook of Research on K-12 Online and Blended Learning summarized closer to 30 studies, or to 2018, when the current edition summarizes even more. We are not limited by RCTs’ imperfections as we add to this evidence base. There are many techniques that can be brought to bear on meaningfully answering relevant, timely, important questions about blended learning effectiveness in K-12 settings. As our education system innovates to bring new, potentially transformational learning experiences to students, so must we innovate to document and build our collective understanding of what works best for which students under what conditions.

The Handbook

The Handbook of Research on K-12 Online and Blended Learning is a summary of what we currently know about blended learning efficacy and effectiveness. The second edition, published in April 2018, updates the 2014 edition with new chapters on research design, measurement, and evaluation in online and blended learning environments, and it refreshes the research across a range of content areas, grade levels, and school types. Readers will also find recommendations for practice and future research.

About the Author

Saro Mohammed is a Partner at The Learning Accelerator. Email comments to saro.mohammed@learningaccelerator.org, and follow Saro @EdResearchWorks.