Reinventing community scorecards
[Editor’s Note: This blog post is the first in a series highlighting key lessons from the new paper — Citizen Voices, Community Solutions: Designing Better Transparency and Accountability Approaches to Improve Health — from the Transparency for Development Project. Transparency for Development is a joint project of the Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation and Results for Development that seeks to answer whether community-led transparency and accountability can improve health, and in what contexts. The new paper shares the underlying design principles, process and lessons from co-designing a new version of the traditional community scorecard. We are now testing the updated scorecard to gauge its impact on health outcomes.]
Many global development experts see community scorecards as a tool to improve health and education services. Others, however, view them as stumbling blocks that give people a false sense that information is enough — when we know it is not.
With these mixed reviews of scorecards in mind, we set out to give the traditional scorecard a makeover to fix its flaws and make it shine.
We learned a lot from the iterative process of designing (and redesigning) the scorecard. And we think others may benefit from knowing what we got right — and what we got wrong — which we’ve documented in our new paper, Citizen Voices, Community Solutions.
What are the key takeaways for practitioners? We see at least two very actionable ideas that we hope will strengthen community-led accountability for health and education worldwide:
1. Does your scorecard do this? A checklist to guide your efforts.
We spent a lot of time early in this project trying to figure out why scorecards seem to work in some places and some studies — and not in others. And we came up with a few possible explanations.
Maybe scorecards don’t give communities the freedom to work on health problems they think are the most critical. Maybe scorecards force people to tackle problems one way (say, with an in-person meeting) when the right action for their context is something totally different. Maybe the expectation that civil society organizations (CSOs) or donors will come in and solve problems for them with outside resources actually disincentivizes community-led action.
If any of these possibilities are true, they’re also fixable. So, we set out to develop a set of design principles to avoid common pitfalls. We will tackle the details of these principles in a subsequent blog post, but they can be summarized in the following checklist of questions, which CSOs and others can ask themselves as they design a new scorecard. We don’t claim there is a single “right” answer to each question; rather, thinking through these design questions offers practical food for thought for CSOs and other practitioners deciding whether to revisit their design process and the scorecard that results from it.
- Does the scorecard provide information on the big-picture problem (like maternal health or nutrition outcomes) rather than on a specific breakdown in service delivery?
- Does the scorecard focus on a problem that is locally relevant?
- Does the scorecard process give the community adequate space and responsibility to drive it, both in choosing which problems to tackle and in deciding how to tackle them?
- Is the action in the scorecard non-prescriptive? In other words, does it let communities use their own knowledge of local structures, dynamics and context to decide how to tackle problems?
- Does the scorecard require outside resources (like funding from a CSO or relationships that a national organization has) to work?
- Is the scorecard designed or co-designed with in-country partners? This is a core question for those of us who work on social accountability from offices in Washington, D.C., or elsewhere.
2. A new way to design — rapid, iterative testing
When it comes to designing and implementing scorecards, there are a lot of decisions to be made. And things that may seem small, like the design of the scorecard itself or the timing of community meetings, can make the difference between an intervention’s success and failure.
You may have a guess, or anecdotal evidence, that one option is better than another. But these are the types of questions that can be answered quickly, in small pilots or even “pre-pilots,” using rapid data collection on proximal outcomes or outputs.
For example, we learned a lot about what types of environments might lead to more participation by women by holding mock meetings in a few different settings. Within a couple of hours, we had valuable insights into the different ways that women tended to act or speak (or not) when different types of people were in the room with them.
We discuss the actionable steps of this iterative piloting process in more detail in our next blog post and in the paper.
***
So, what is the right design for a scorecard? And how should questions about participation, and about gathering and presenting information, be answered? These questions, and more, are addressed in the paper. None of the answers come from a large-scale randomized controlled trial (we reserve that for the big impact questions), but we do have insights into what happened when we tested different designs in our pilot and pre-pilots.
Check out the new paper, Citizen Voices, Community Solutions, and let us know (comment below) what lessons you have about designing a better scorecard!
Photo: Kelvin Chirwa of the Malawi Economic Justice Network leads a pilot intervention meeting in Nsanje District, Malawi, for the Transparency for Development project. © Results for Development