Turning stories into lessons: 3 decisions while implementing Most Significant Change

In the rapidly evolving world of monitoring and evaluation, there is a growing push to move beyond conventional evaluation toward more participatory, complexity-aware design. One such method is Most Significant Change (MSC) – a participatory, story-based approach in which a variety of program stakeholders share, discuss and select stories of significant change resulting from program implementation, with the aim of better understanding and strengthening key program impacts. While MSC holds huge potential for measuring and learning about programs in dynamic contexts, there is still much to learn about when and how best to use it.

R4D’s Evaluation and Adaptive Learning practice recently used MSC as part of a learning engagement with the USAID-funded Illuminating New Solutions and Programmatic Innovations for Resilient Spaces (INSPIRES) program. The team’s evaluation and learning activities with INSPIRES included assessing the implementation and outcomes of PartnersGlobal’s ResiliencyPlus (R+) Framework, a 12-month capacity strengthening intervention that provides support for civil society organizations (CSOs) to improve their organizational resilience. Through R+, CSOs complete the Resilient Organization in Changing Civic Space (ROCCS) assessment to help identify their organizational strengths and vulnerabilities and then work with a team of coaches to design and implement strategies to address those vulnerabilities.

By using MSC to assess the R+ program, we aimed to gain a deeper understanding of which changes program stakeholders perceived as most important for improved organizational resilience and to draw out lessons to strengthen future programming in pursuit of those changes. Our MSC process was iterative and unfolded over multiple steps, with participants driving it at every stage.

In this blog, we highlight three key decision points we wrestled with in our application of Most Significant Change and what we learned in the process, which we share in more detail in this learning brief.

1. Is MSC a good fit for our evaluation and learning needs?

MSC is not always appropriate for evaluating a given program – and deciding whether to use it should be a collective decision among program stakeholders. After quite a bit of reflection with PartnersGlobal (the program implementer for R+) and USAID (the R+ program’s funder), we ultimately decided it was the right fit for our evaluation and learning needs due to the following enabling conditions:

Our R4D evaluation team had already been serving as a learning partner for PartnersGlobal and the R+ program for a few years, and we planned to continue working together under INSPIRES for several more. Thus, we had the time and relationships with key stakeholders to establish buy-in and tailor the method to their needs.

Because MSC is a highly participatory method that requires time and input from many different stakeholders, having their buy-in from the start is key. Evaluators also need to have the flexibility, capacity and time to tailor the multi-step MSC process to the program, build in the various components, and lead the execution.

Through INSPIRES, we acknowledged the difficulty of conceptualizing and measuring organizational resilience as an outcome, especially across diverse and rapidly changing civil society contexts. We wanted to use a method that could be helpful in evaluating a program like R+ with complex, hard-to-predict outcomes.

MSC is most valuable when evaluating programs in complex settings or with complex outcomes. It is most useful when program outcomes are hard to predict with certainty, when there may be disagreement on which outcomes are the most important, or when outcomes are expected to vary widely.

We recognized that MSC is not designed to provide comprehensive results about the changes a program elicits, or to identify the most typical change, but rather to tease out the most significant, unexpected, diverse, indirect or intangible changes. Because we used MSC as only one of several evaluation methods to facilitate learning among program stakeholders, we were comfortable with that limitation.

While MSC can be applied rigorously, with responses triangulated across stakeholders and outcomes validated, it is not typically used as a stand-alone method.

2. Who should write the significant change stories we collect?

There are a variety of ways in which significant change stories can be collected. Program implementers working closely with program participants could informally document stories that come up over the course of the program. External evaluators could collect stories of change via structured interviews or focus group discussions. Or program participants could write their own stories.

We combined a few of these approaches. We decided to first collect stories directly from program participants using a simple, open-ended story collection template that asked participants to respond to three key questions about changes aligned to a set of pre-defined themes of change (also known as Domains of Change) that might be expected to result from the R+ program.

  1. Please describe the change in 1-2 sentences. What happened?
  2. Please describe the significance of this change in 1-2 sentences. Why was this change significant to you and/or your organization?
  3. Please describe R+’s contribution to this change in 1-2 sentences. How and why do you think R+ helped you/your organization achieve this change, if at all?

We encouraged CSOs to have multiple staff contribute to these stories to enable participants at different levels of an organization to share their perspectives. Organization staff also had the option to send us written or verbal stories (e.g., WhatsApp voice messages) in their language of choice. Offering options for how organizations could share their story was particularly valuable for ensuring we received the storyteller’s genuine perspective! Most organizations ended up sharing stories in written form — although there were a few who chose to share verbally — and nearly half shared their story in a language other than English.

Leaving the initial story crafting process in the hands of program participants gave them the freedom to respond in their own voice, interpret the Domains of Change as they saw fit, and take the time and space to reflect on program experiences and outcomes without the pressure of a formal interview.

But because we were using MSC for evaluation purposes, rigor was still important. Following initial story collection, we then verified the reported stories through document collection, interviews and surveys with program implementers. This verification process enabled us to triangulate reported changes and outcomes across different program stakeholders, develop a more nuanced understanding of participants’ experiences, and standardize the stories for discussion — all while maintaining as much of participants’ voice as possible.

Story verification is important, but it requires a fair amount of tracking people down. Build ample time for this into your workplan and establish guidelines upfront for if/when you will drop stories that can't be validated (e.g., because stakeholders didn't agree on what happened or someone never responded to your interview requests).

3. What do I do with all these significant change stories?

Arguably the most important part of MSC is what happens after stories have been collected, as this is where the greatest opportunity for program learning lies.

The first step was to facilitate a collaborative discussion on the final set of stories, with the goal of reaching consensus on which stories of change were most significant within each Domain of Change. Since R4D’s learning support through MSC was intended to guide future decisions about the design and implementation of R+ programming, we decided to engage directly with the people who would be making those decisions — the PartnersGlobal implementing team and the R+ coaching team. And they each brought a different and important perspective on the program. PartnersGlobal had a bird’s-eye view of programming across multiple countries and cohorts of CSOs, while coaches brought specific contextual knowledge and experience working closely with CSO program participants.

Facilitating story selection both within and across these two groups of stakeholders proved crucial, as each brought its own biases, value judgments and reflections on which types of change were most significant to the program's overall intended impact, and on what those changes implied for future programming.

Some of the key program learnings and recommendations that emerged out of this discussion included:

  1. Change in awareness or mindset was a valid outcome in its own right.
  2. It was not yet clear whether organizations should aim to use the R+ process to build on their strengths or address their greatest vulnerabilities. Both strategies proved valuable depending on the organization’s specific context.
  3. PartnersGlobal should continue to explore opportunities to provide R+ support to both smaller, nascent organizations and more well-established ones. Both derived value and were able to improve their organizational resilience through the program.

One additional reflection that surfaced during the story selection process was that all stories are significant in some way. As one R+ coach explained, “Considering cultural context, it’s hard to place value on certain changes over others. What is significant to one organization may be different from another.” As such, we decided to share all stories (anonymized and with consent) as well as subsequent learnings and recommendations with all program stakeholders, including USAID and the CSOs themselves. We did this through various touch points, including participatory learning sessions and development of an external learning brief.

This additional layer of partner engagement and process documentation does require time; however, it helps ensure the MSC process meaningfully contributes to program adaptation and that all stakeholders understand the connection between the stories they shared and how that data was used.

Looking forward

Whether you are new to MSC or a seasoned expert, we hope the experiences from our process are useful. This is how we applied MSC to fit our partners’ learning needs, but there are endless ways to adapt this flexible and iterative methodology – and MSC is just one tool in a larger toolbox of evaluation and adaptive learning methods. To see what other approaches are in our toolbox, check out our other blogs here.

Image credit: All illustrations in this blog post come from Rick Davies and Jess Dart's The ‘Most Significant Change’ (MSC) Technique: A guide to its use.
