What is unique about the researcher’s role in measuring blended learning success?
by Saro Mohammed on Jan 11, 2017
In previous posts, I showed edTech developers and educators how they can contribute to our shared understanding of whether, when, and how blended learning is effective through TLA’s Measurement Agenda for Blended Learning. In this post, I focus on researchers, and how their role can be expanded beyond the traditional boundaries of “evidence curation” into dissemination, competence-building, and implementation.
Moving Evidence into Implementation
Researchers are no strangers to the work of uncovering and curating evidence to improve our understanding of a theory or practice. In fact, some may find it surprising that TLA has released a call to action to researchers for this work. As a researcher myself, I understand our focus on what I call the evidence cycle. Our world revolves around uncovering, documenting, and interpreting findings largely for other researchers, to whom our primary dissemination channels are geared. We clearly see ourselves in the Measurement Learning Agenda and, to some extent, in the Measurement Dissemination Goals.
When discussing the challenges the sector faces in incorporating evidence into practice, I make the case that researchers have a critical role to play in all four parts of the measurement agenda, including the Measurement Competency Standards and the Measurement Implementation Objectives. Because we are perceived as the primary measurement professionals in this sector, researchers are in the unique position of being able to drive the implementation of evidence-based practices at scale in a sustained way. In fact, I challenge all of us to do our part in unifying the evidence cycle with the implementation cycle to complete the journey of evidence from research (generation) to practice (implementation).
What’s different about researching blended learning?
While all social science research, of which education research is a part, is “applied” to some extent, blended and personalized learning presents a different context that requires a different approach. Instructional innovations grow and change at a rate that far outpaces our traditional research and dissemination methods. To keep up, we as researchers must do several things:
Use an applied lens when designing and conducting new studies. This includes focusing new hypotheses on identified gaps in the sector; studying contexts and constituents as mediators or moderators of effectiveness; and being clear, precise, and explicit about the practices under study. A continued focus on the ongoing gaps in our knowledge about the effectiveness of blended and personalized learning requires progressing toward studies of implementation, and even applied studies, rather than basic research (e.g., studying strategy implementation in real-life settings, or strategies in combination with one another). We can also increasingly use designs that are better suited to innovative, constantly changing instructional practices (e.g., design-based implementation research, implementation science, continuous improvement measurement cycles, or design studies), rather than relying primarily on more rigid traditional designs.
Understand that dissemination and interpretation of findings is also our job. We often have the answers to practitioners’ most pressing questions, so we should spend more time ensuring that we share those answers with them, in a timely manner, in ways they can use to improve practice. Often, this means sharing findings through channels less traditional than academic journal publications. Researchers can blog, present the implications of their work at professional development seminars, or host regular podcasts or recorded webinars, for example.
Support other stakeholders in developing skills in applying evidence to practice. It is often easier for us to simply tell other stakeholders what we would do, or what they should do, to address problems of practice. But spending the extra resources on helping others develop these skills themselves benefits the sector exponentially. Not every practitioner needs to become a researcher, but we all benefit when we are all more sophisticated consumers of evidence.
Look to implementation to see what more needs to be studied. In addition to generating new hypotheses and designing new studies based on problems of practice, this may include developing measures for new constructs, or identifying and studying the nuances of blended learning as a context for learning rather than as an intervention or treatment in and of itself. Let’s keep our ears to the ground when planning new studies, so our work addresses problems of practice rather than purely theoretical ones.
Researchers can find out more about relevant measurement objectives for this expanded concept of their role in our Blended Learning Measurement Agenda for Researchers.
Call to Action
Of course, researchers do not shoulder this burden alone, no matter how much that may seem to be the prevailing expectation. This is why the Measurement Agenda also includes specific, detailed objectives for educators, administrators and policymakers, funders, and other community members, like edTech developers. All five stakeholder groups have critical roles to play in advancing our understanding, and implementation, of blended learning. Join me next time for a focus on funders and how they, too, can contribute to this work and ensure that we are doing all we can to implement evidence-based practices that support all learners.
If you are a member of any of these stakeholder groups, and are interested in learning more about how you can contribute, please stay tuned for future blog posts, or visit our Blended Learning Measurement Agenda landing page for more information. If you are engaged in work that illustrates or aligns with objectives outlined in our Measurement Agenda, please let me know @EdResearchWorks, #BLMeasurement or email me firstname.lastname@example.org.