How to Empower People and Save Babies: 5 things that might work (and 5 that definitely won’t)

All people should have the right to have their voices heard, and all women should have the right to have healthy babies.

If you wanted to build an initiative that people can get behind, these two goals are a good place to start. It would be hard to find two less controversial statements, and the resources that go toward achieving these goals bear out how much people see them as critical for development.

What IS controversial is how to achieve these goals. Ask five different people in the field of transparency and accountability the best way to improve citizen voice and empowerment, and you are likely to get five different answers. The same is true of those working on improving maternal and newborn health.

Even more controversial is the question of whether achieving one of these goals—giving people a voice through transparency and accountability initiatives—can actually catalyze progress toward the other goal—improving maternal and newborn health.  It is a promising and attractive idea, but does it work?

As someone who has worked on transparency and accountability for a decade, I believe that the answer is yes. I have worked with civil society organizations around the world that have been able to get citizens involved in governance and then have leveraged this empowerment to improve health and education outcomes.

Beyond my own experience, there is evidence that this work has led to improved outcomes (see here, here, and here).

Let me rephrase that. There is evidence that this CAN work. Because if you are looking at the above experience and evidence, you also have to look at other evidence (see here, here, and here) that says transparency and accountability are not a good way to improve development outcomes.

So what is a practitioner or a donor to do with conflicting evidence?  Gather more evidence, of course.

But not just any evidence. We need evidence that looks at the impact, the mechanisms, and the context—and that helps us understand not just whether transparency and accountability can improve health outcomes (spoiler alert: the answer is sometimes), but what determines whether an intervention will be successful in increasing citizen empowerment and improving health outcomes.

To work toward answering this question, we designed the Transparency for Development (T4D) initiative.

T4D launched in 2013 with the goal of using integrated evaluation methods and local knowledge to design an intervention that could improve citizen empowerment and maternal and newborn health, while building evidence about whether the intervention works, why, and in what contexts.

What does it look like in practice? In Tanzania and Indonesia, we worked with local partners to design a process that engaged and empowered the citizens of 200 communities to identify local issues related to maternal and newborn health and then develop an action plan that holds midwives, doctors, and elected officials accountable for quality care. Now we are focusing on answering the questions of whether it works—and why or why not.

This is a lot to answer—and (such is the downside of rigorous evaluations) we don’t have the answers yet. Data collection is happening, but we won’t know whether the intervention has worked until the women have their babies and those babies get a little older.

So why write this piece now? Because even with final results a ways off, there are lessons worth sharing.

Lesson 1: There is a “happy medium” in building context into transparency and accountability design.

Before designing the evaluation, we worked with local partners to co-design the intervention. Like most transparency and accountability interventions, this one needed to collect information about the problem, share information with community members, and encourage citizens to take action. But we wanted to improve upon existing processes, and so we started by reviewing the known knowns, the known unknowns, and (where possible) the unknown unknowns in the transparency and accountability literature.  And what did this review reveal?

What doesn’t work? There is a long list of things that could go here, but it is worth highlighting two that come up over and over again in practice.

  1. We shouldn’t use a one-size-fits-all approach to context. A single prescribed social action (for example, a meeting between civil society organizations and citizens) is not going to work in every context. A process for fostering dialogue that works well in one community may not work well in another. And it’s important to consider incentives and the relative will and reform-mindedness of different political figures—different officials are likely to be swayed by this specific type of social action to differing degrees.
  2. On the other hand, we can’t build every element of context into the design of the intervention. Beyond being impractical (how many context factors can you name in 60 seconds?), some of these context factors are likely to be only minimally important in guiding what social action is going to work in what context.

What is promising? The framework that we are testing takes a Goldilocks approach to context: not too complex, not too simple, but just right. For our interventions we developed “best-fit” approaches for five different types of communities (or five worlds) based on three critical context factors (a rough sketch of this logic follows the list):

  • Degree of health provider competition (i.e., do citizens have real choices about where to seek care for maternal and newborn health?);
  • Reform-mindedness of the frontline service provider (i.e., is the doctor or midwife interested in collaborating with the community to improve health?); and
  • Reform-mindedness of local government officials (i.e., is there a local government worker interested in collaborating with the community to improve health?).
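
To make the framework concrete, here is a minimal, purely illustrative sketch of a “best-fit” classification over those three factors. The factor names come from the list above; the specific mapping from factor combinations to five worlds is hypothetical, since the post does not spell out T4D’s actual assignments.

```python
# Illustrative sketch only: the mapping from factor combinations to
# five "worlds" is hypothetical, not T4D's actual framework.

from dataclasses import dataclass

@dataclass
class CommunityContext:
    provider_competition: bool    # do citizens have real choices about where to seek care?
    provider_reform_minded: bool  # is the doctor or midwife open to collaborating?
    official_reform_minded: bool  # is a local government worker open to collaborating?

def classify_world(ctx: CommunityContext) -> str:
    """Assign a community to one of five illustrative 'worlds'."""
    if ctx.provider_reform_minded and ctx.official_reform_minded:
        return "collaborative: engage the provider and local government together"
    if ctx.provider_reform_minded:
        return "provider-led: work directly with the frontline provider"
    if ctx.official_reform_minded:
        return "government-led: route community concerns through officials"
    if ctx.provider_competition:
        return "choice-driven: leverage citizens' ability to switch providers"
    return "constrained: build collective citizen pressure first"
```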

Lesson 2: It’s important to find the right balance of guidance and community ownership so that community members will feel compelled to drive the process forward.

During our pilot, we tried six very different approaches for encouraging citizen participation, involvement, and commitment to social action to improve maternal and newborn health.

What doesn’t work? Approaches that are either too heavy-handed or too light-touch. At one end of the spectrum, we tested a heavy-handed approach, with a facilitator from a civil society organization providing guidance and organizing meetings, follow-up discussions, and materials. The result was that community participants were overwhelmed by the amount of information provided and seemed to expect that the civil society organization itself would do the heavy lifting of improving health in their community.

At the other end of the spectrum, we tested a light-touch approach, but that did not seem to create enough community buy-in or energy around taking action.

These models were tested during a pilot, not a rigorous quantitative evaluation, but findings like these are still useful: they suggest that neither extreme is the right design for getting regular people engaged in community action for better health.

What is promising? After several iterations, we found an approach that seems to be leading to substantial community engagement and action. In this approach, facilitators recruit informal leaders and other people from the community to be a part of the conversation—and the action. The community members are not only attending the meetings and actively participating, but they are developing social action plans that they are carrying out and adapting.

While we wait for rigorous evaluation results, these early findings are a promising sign and very informative for those working on transparency and accountability as well as maternal and newborn health.

Lesson 3: Evaluations should be multi-disciplinary—and holistic.

Lessons 1 and 2 above are informative, but they are not yet based on rigorous evidence. To get that evidence, we had to design an evaluation—a process that provided as many lessons as the pilot itself.

What doesn’t work? Working in silos. Because we wanted evidence that could inform a range of audiences—from civil society organizations and practitioners to donors—we deliberately undertook a mixed-methods design, developing research questions that looked at every step of our theory of change and research methods that would allow us to answer those questions.

We needed a randomized controlled trial, and we also needed qualitative techniques. And having the economists go to one corner and the qualitative researchers go to the other corner to work in their silos (as much as they may have preferred to) was not going to allow us to holistically answer the important questions about transparency and accountability.
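
As an aside on the quantitative piece: the core mechanic of the trial is random assignment. Here is a minimal sketch, assuming simple 1:1 randomization of study communities to treatment and control; the community labels, seed, and unstratified design are illustrative assumptions, not T4D’s actual protocol.

```python
# A minimal sketch of 1:1 randomized assignment, assuming simple
# (unstratified) randomization. Labels and seed are illustrative.

import random

def assign_arms(communities, seed=42):
    """Shuffle communities and split them into treatment and control halves."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = communities[:]  # copy so the input list is left untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# 200 study communities, as in the post; the naming scheme is made up.
communities = [f"community_{i:03d}" for i in range(200)]
arms = assign_arms(communities)
print(len(arms["treatment"]), len(arms["control"]))  # -> 100 100
```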

What is promising? Well, not going off into our separate corners. We needed political scientists to challenge economists, economists to challenge practitioners, health people to challenge transparency people. And then we all needed to design an evaluation that would allow us to actually test the many assumptions with which each of us entered this project.

Was it messy? Yes.  But it means that our evaluation design will look at the big picture and provide more evidence than we have ever had before on what transparency, accountability, and participation can and can’t do—and why.

Lesson 4: Co-design works best when you don’t assume that you are an expert in everything—or any one thing.

I mentioned above that researchers sometimes want to go off into their corners to do their jobs. Well, the same is true of practitioners. Sometimes we just want to sift through our experiences of designing and implementing interventions and say, “Well, this worked in this case, so I know that this is what we should do.”

That works okay—until you get into a room with people who can say the same thing about totally different approaches. And then you throw in researchers who want to talk about “the literature,” and suddenly you have 25 different solutions that 25 different people know to be the only way to fix the exact same problem.

This may be why “co-design” is so often touted as a good approach in theory but is so hard to implement in practice. But co-designing the intervention was an important principle for the project team, so we gave it a try anyway.

What doesn’t work? Assuming that any one person or group knows everything—or even any specific thing. The co-design of our intervention took months, and part of the challenge was that we all went in assuming that each party involved in the design had a specific expertise. We quickly learned, though, that we all had a lot to learn. Some of the approaches to delivering information on maternal and newborn health came from organizations that had never worked on health before, and ideas for how best to encourage citizen empowerment did not always come from the community engagement experts in the group. It is a cliché, but the real design work happened when all of us realized that our group was more than the sum of its parts.

What is promising? One of the most important decisions we made in this project was the selection of our local partners. Finding organizations with the capacity to facilitate community-driven transparency and accountability is a challenge in and of itself, but we also sought to co-design this transparency and accountability intervention with these organizations.

We needed partners that were strong implementers and smart citizen empowerment practitioners. To add to the challenge, our partners needed to be “learning partners.” This meant that once the intervention and evaluation began, we all had to let them run as designed. We had to be willing to learn what the evaluation taught us about the intervention—the good and the bad.

Lesson 5: Failing (and learning) early is a good investment.

The first stage of our efforts taught us a lot about … well, learning. The scaled-up evaluation was designed to provide concrete and rigorous evidence, but learning does not have to involve only rigorous (and expensive) evaluations. The co-design and piloting process of the T4D project presents an alternative for how practitioners can integrate learning into their own work.

What doesn’t work? Using only randomized controlled trials—or any one evaluation technique—to learn. If you think like an economist and see that resources for everything—including evaluation and learning—are limited, you have to wonder whether putting all of those resources into a randomized controlled trial with 25 arms and 25 variations in intervention design is the best approach.

The problem with focusing all of one’s resources on a randomized controlled trial is that you have to wait years for an endline survey to tell you whether something worked. Sometimes you can figure this out just by carefully observing whether something is working in the short term. For example, for the T4D intervention to work, we needed community members to come to meetings to discuss problems with maternal health and start planning solutions. If no one comes to those first few meetings (something that happened in one of our early designs), then you don’t need to wait two years for a survey to tell you that the approach is not going to help save mothers and babies.
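
Here is a toy sketch of that decision rule: treat early meeting attendance as a leading indicator and flag a design for rework before committing years to it. The threshold and observation window are hypothetical, chosen only to illustrate the logic.

```python
# Toy early-stopping check: the threshold and window are hypothetical,
# not values from the T4D pilots.

def should_redesign(attendance_by_meeting, min_attendees=5, window=3):
    """Flag a design for rework if the first few meetings draw almost no one."""
    early = attendance_by_meeting[:window]
    return all(count < min_attendees for count in early)

print(should_redesign([0, 2, 1, 14]))  # True  -> rework before scaling up
print(should_redesign([18, 22, 15]))   # False -> keep piloting
```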

What is promising? Iterative learning through smarter design, learning methods, and feedback loops. That is a lot of jargon for something that is pretty common sense, but it may not be the first instinct of a lot of researchers or practitioners.

We went through four rounds of pre-piloting (testing small parts of the intervention before even piloting) and piloting in Indonesia alone. That seems like a big investment of time, but it meant that we were able to observe, stop, and re-design our intervention when it was clear that it wasn’t working. So even though each round of pre-pilots and pilots cost us up to a few thousand dollars and between one and 12 weeks of time, many of them saved us from investing three years and hundreds of thousands of dollars in an intervention that early performance indicators told us was not going to work.

What’s next?

We are going to continue to learn—using everything from randomized controlled trials to ethnography to observation and iteration. And as we learn, we will share what comes out of this work so it can help practitioners, researchers, and people around the world be healthier, happier, and more empowered.

For more on the Transparency for Development project, visit our website.

Courtney Tolmie (@CourtneyTolmie) is a Program Director at Results for Development Institute and a Principal Investigator for the Transparency for Development (T4D) Initiative. T4D is led by the Ash Center for Democratic Governance and Innovation and Results for Development Institute in partnership with Pattiro, the Clinton Health Access Initiative (CHAI), the Abdul Latif Jameel Poverty Action Lab (J-PAL) Southeast Asia, and the University of Washington Evans School of Public Policy and Governance, and coordinated by the Transparency and Accountability Initiative. T4D is supported by the Bill & Melinda Gates Foundation, the Hewlett Foundation, and the UK Department for International Development.
