Blended Learning Data: Moving beyond binary results

“For every 20 minutes a student spent [on our software], their MAP score increased by 2.5 points.”

“Ensure that students see two years of reading growth [if they use our program].”

“Our study has found that use of this math program with fidelity has roughly the positive impact of a second-year teacher versus a first-year teacher.”

In the world of managing education technology, I’ve seen quotes like these time and again. (In these cases, respectively: in a white paper on a vendor’s site; in a solicitation email sent directly to a principal; and while sitting in a meeting of a research collaborative.) I can’t condemn them; in a crowded marketplace selling to an increasingly data-driven consumer, I tend to take these statements at face value. I do believe that the folks studying the impact of education technology – whether for for-profit vendors or at third-party research institutions – have done so with integrity and are using reasonable data to share these results. However, I’ve seen that marketing such binary results (it works, or it doesn’t) often leads practitioners to believe that education technology tools can be a panacea – plug-and-play solutions that can solve larger problems. As educators, we must push beyond the binary lens and instead rigorously examine how these tools, alongside and in service of positive teacher practices, can be most effective in improving student outcomes.

Let’s take the last quote as an example:

“Our study has found that use of this math program with fidelity has roughly the positive impact of a second-year teacher versus a first-year teacher.”

This quote, as mentioned, was shared at a research meeting a couple of years ago. This was a study of over 200,000 students nationwide enrolled in public districts and charter organizations, and represented a remarkably positive result in support of the math program in question. And yet, my Aspire teammates and I saw almost no correlation between our own students’ success in this program and growth on our state assessment. While this data was incredibly compelling on a national scale, advocating for continued usage with these national results didn’t feel right given what we knew about the practices and the results in our very own schools.
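To make that local comparison concrete, the sketch below shows the kind of school-level check we ran; the file name and columns (program_mastery, state_growth) are hypothetical placeholders rather than our actual data schema.

    import pandas as pd

    # Hypothetical student-level extract: one row per student, with
    # program_mastery (e.g., percent of in-program lessons passed) and
    # state_growth (year-over-year scale-score change on the state assessment).
    students = pd.read_csv("student_outcomes.csv")

    # How strongly does success in the program track growth on the state test?
    corr = students["program_mastery"].corr(students["state_growth"])
    print(f"Correlation between program mastery and state growth: {corr:.2f}")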

Reflecting on these moments, I’ve become convinced that it’s our responsibility as practitioners to seek out the use cases that produce the positive results our students deserve. This is in line with what Bryk, Gomez, Grunow, and LeMahieu have outlined in Learning to Improve (2015): we can “focus on variation in performance” in order to isolate and scale best practices in the work.

I was fortunate to have the opportunity to leverage our blended learning data to investigate this as an Agency Fellow participating in the Harvard Center for Education Policy Research’s Strategic Data Project (SDP). My initial focus was to produce exactly what I’d seen before – a study of a blended math program implemented across our network, designed to determine whether it worked or it didn’t. I was struck once again by results that looked positive when averaged across our schools, but that carried huge variation across schools and classrooms, as shown in Figure 1.

[Figure 1: KH_graph1.png]
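Splitting that aggregate apart is simple in principle. The sketch below, using the same hypothetical extract as above, shows how a single network-wide average can hide classrooms performing far above and far below it.

    import pandas as pd

    students = pd.read_csv("student_outcomes.csv")  # hypothetical extract

    # One number for the whole network...
    network_avg = students["state_growth"].mean()
    print(f"Network-wide average growth: {network_avg:.1f}")

    # ...versus the spread of classroom-level averages underneath it.
    by_classroom = (
        students.groupby(["school", "classroom"])["state_growth"]
        .agg(["mean", "count"])
        .sort_values("mean", ascending=False)
    )
    print(by_classroom.head(10))  # classrooms well above the network average
    print(by_classroom.tail(10))  # classrooms well below it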

Equipped with a desire to go deeper, and with coaching from my SDP mentors, I was able to split up the aggregated data and look, class by class, at which of our classroom teachers were best supporting, and best supported by, this blended learning program. Using multiple years of assessment data, I established an average growth trajectory for an Aspire student (i.e., the average growth on our state assessment for a student in our schools as they matriculated from third through fifth grade). I could then isolate individual classrooms of users that achieved far above that average growth, a measure I called “Performance Relative to Expectations” (Figure 2). I used these data in conversations with region- and site-based coaches, which led to observations and reflections geared toward identifying which practices teachers in those classrooms were implementing that seemed to increase both teacher and program impact.

[Figure 2: KH_graph2.png]
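The calculation behind Figure 2 can be sketched roughly as follows; the file and column names are again hypothetical, and the real analysis drew on multiple years of Aspire assessment data rather than two flat files.

    import pandas as pd

    history = pd.read_csv("prior_years.csv")   # hypothetical multi-year assessment file
    current = pd.read_csv("current_year.csv")  # hypothetical current-year file

    # Expected growth: average state-assessment growth by grade across prior
    # cohorts, i.e., the typical Aspire trajectory from one grade to the next.
    expected = (
        history.groupby("grade")["state_growth"]
        .mean()
        .rename("expected_growth")
        .reset_index()
    )

    # Compare each classroom's actual average growth against that expectation.
    merged = current.merge(expected, on="grade")
    classrooms = merged.groupby(["school", "classroom", "grade"]).agg(
        actual_growth=("state_growth", "mean"),
        expected_growth=("expected_growth", "first"),
    )
    classrooms["perf_rel_to_expectations"] = (
        classrooms["actual_growth"] - classrooms["expected_growth"]
    )

    # Classrooms of program users achieving far above the expected trajectory.
    print(classrooms.sort_values("perf_rel_to_expectations", ascending=False).head(10))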

While we are still in the early stages of this work, it has been validating (if somewhat unsurprising) to hear that our coaches identified teacher practices like frequent data check-ins with students and strong small-group instruction as common themes in those classes. What’s more, layering this qualitative lens on top of the quantitative results has transformed our conversations around blended learning data; I’ve noticed our site and regional leaders in integrated technology asking more questions about improvement and coaching with technology in their classrooms after using this approach.

Moving forward, I remain interested in vendors supported by large-scale data – that work continues to provide indicators of what has the potential to help move the needle for our students. But as we in education technology leadership bring these products and programs to principals, I suggest we be careful in how we frame them. Rather than leading with promised gains, we might ask:

“If you’re interested in trying it at your site, how can we test to know what’s working and what’s not?” By partnering with our instructional experts around that question, I have no doubt we can achieve the results as advertised, if not greater.

Kevin Hoffman

About the Author

Kevin Hoffman works with Aspire Public Schools as Manager of Innovative Learning.
