
What have we learned about our approach to Evaluation and Adaptive Learning?

A reflection on 8 years of work

In 2015, a group of colleagues at R4D had an idea: we could use research methods to solve on-the-ground development challenges. We launched a practice, Evaluation and Adaptive Learning (EAL), focused on using research to help people learn if, how, and why their programs and activities are working the way they want them to.

So, now that we have eight years under our belt, what’s shifted in how we view our work?

To answer that question, we revisited our EAL video, launched five years ago as part of R4D’s 10-year anniversary to explain what we do and why. In this blog, as part of our organization’s 15-year anniversary, we’ll explore how our thinking has evolved over time and with experience.


From a focus on the need for new approaches to measurement in addition to impact evaluations and RCTs…to the recognition of and experience with complexity- and systems-aware approaches to monitoring, evaluation and learning (MEL).

The video kicks off with context on the need for new M&E approaches beyond traditional impact evaluations. But is this still the key problem?

Since launching the practice, we’ve seen significant shifts in the adaptive learning and MERL spaces. We’ve gone from saying “we need to do research differently” to showing concrete ways to do research differently.

In other words, we have more people with more experience with more methods, tools, and approaches than ever before, alongside a shift in the development landscape that generally acknowledges the importance of both accountability and learning.

There is consensus that we need to do MEL differently, but the question remains — how?

From using M&E to support learning for local partners…to centering local partners throughout the entire process, from designing the questions, to carrying out the research, to translating findings.

In addition to outstanding questions around how to do MEL differently, we’re also increasingly hearing questions about who should be doing the research.

As the development space increases its focus on localization, there is more discussion not just about how to put evidence in the hands of local partners, but also about how to engage them as researchers in carrying out MEL activities.

Our EAL practice has always leaned on R4D’s collaborative learning approach to shift research from solely an accountability lens to one of learning. That has implicitly required an approach grounded in the principles of co-creation and collaboration, but we are now actively pushing our work so that we’re not just centering the research to be of use for in-country decision-making, but ensuring that in-country actors are designing and carrying out the research.

From a list of Evaluation and Adaptive Learning methods in our toolkit…to a framework for how to get rigor right.

A cornerstone of our approach was pairing the right feedback method with the level of certainty you have about the design and delivery of the program. We called it our famous “bubble chart”: “the more uncertain you are about the design of any program component, the less rigor you need, since failure is more likely than success.”

Our thinking has evolved to reflect the multiple dimensions of certainty and how to determine the appropriate level of rigor, and the right set of methods, for a program’s design, all while recognizing that a program sits along a continuum on each of these criteria.

Our recently published paper in the Global Health Journal lays out 1) the questions to ask in determining the level of certainty, 2) how to identify a “best-fit” line that runs through the criteria, and 3) from there, the right adaptive learning method to apply.

From implementing impact evaluations after you feel more confident in program performance…to finding ways to use impact evaluations to support adaptive learning, or alongside it.

When we started the practice, we wanted to be careful not to position adaptive learning as an alternative to impact evaluation methodologies. Both were relevant and important approaches within the evidence generation space.

However, we emphasized how using adaptive learning methods, particularly earlier in program design when you have more areas of uncertainty, could help position programs better for impact evaluation and get you that much closer to measuring real program impacts. We still stand behind this statement, and we’ve since found other ways to work with and through impact evaluation methodologies.

For our Family Care First pilot in Cambodia, we designed two randomized experiments (which we call Feedback Experiments) to test approaches to social behavior change. And for our USAID/Senegal pilot to improve reading outcomes, we designed and implemented an RCT to estimate the causal effect of community engagement activities.

In each case, we used randomization and experimental design to inform implementation decisions — using impact evaluation methods for adaptive learning purposes.

And with USAID/Mali, we have designed an impact evaluation alongside a series of adaptive learning activities, focusing the evaluation on the component of the program most likely to demonstrate impact, and the adaptive learning activities on areas of uncertainty in program design.

From using routine monitoring for programs that have scaled up…to using routine monitoring at all stages of program design.

Our original vision for the use of routine monitoring data was for programs that are in a position to collect ongoing administrative data to support regular performance monitoring. In our original framework, this was positioned as coming after an impact evaluation. However, we have seen ways to use routine monitoring data throughout the life of a program, including in the earliest piloting and design stages.

For our work with Read to Kids, we tested the use of mobile phones to encourage parents to read to their children, and through the piloting process we used the administrative data generated by the phones to track usage and make design decisions early. In other words, we didn’t have to wait for routine monitoring data to become useful.

***

To conclude, we’re proud of how our work has evolved over time and how much the field has shifted in its own approach to monitoring, evaluation and learning, and we’re committed to continuously learning, reflecting, and adapting our approach. We’re already seeing ways in which our current thinking is being pushed in new directions, and we’re excited to see how data and evidence can get us closer to improved development outcomes.

