Reflections on evaluating advocacy

Catherine Lalonde is the senior program officer for the Francophone Africa program.

Saving the lives of women and children around the world is a team effort. It takes the voices of community and religious leaders, health professionals, concerned citizens, young people, and impassioned activists to effect change. Prioritizing women’s and children’s health requires sustained advocacy.

Yet, determining whether certain advocacy efforts are actually achieving desired results—evaluating an advocacy program—is challenging. Through the evaluation of our Mobilizing Advocates from Civil Society (MACS) project, which brings together civil society organizations and equips them with skills to be effective advocates, we are reflecting on what it means to evaluate advocacy.

Advocacy is a slow build, one that requires patience, consistent collaboration among partners, strong messaging that is adaptable to an ever-changing context, persistent door-knocking, and painstaking relationship-building with key decision-makers. Unlike other health interventions (e.g., vaccination programs), there are no pre-defined, well-tested advocacy strategies that will work in every case or lead to a guaranteed result. Advocates have to be adaptable, creative, and constantly ready to readjust a strategy.

Because multiple actors might be working on the same issue, the link between one particular advocacy action and a policy outcome might not be clear. Also, political decisions aren’t linear: new governments might change or repeal policy decisions or have differing perspectives on the issues. The political environment and the actors within it are constantly changing. As new government officials enter office, advocates might need to build new relationships from scratch.

Evaluation—when done thoughtfully—is just as complicated as advocacy. Evaluation is about learning; it requires an in-depth investigation of a project to determine what’s working and what’s not. Too often, evaluations focus on process, looking exclusively at the number of workshops conducted or the number of tools produced, but true evaluation looks at what was actually achieved through those workshops and tools.

Sometimes, evaluation seems scary because a critical look at a project could uncover “failures,” especially when trying to make sense of intangible advocacy efforts. We often have a black-and-white definition of success and failure—success means that you achieved the desired policy change, and failure means you did not. But given the complex nature of political decision-making and the necessity to adapt advocacy strategies to ever-changing contexts, this simplistic view doesn’t recognize the actual impact an advocacy program—whether independently or in combination with external efforts—might have had. Evaluation provides an opportunity to reflect on what the project did achieve, even if it didn’t reach the exact policy objective established at the outset.

For instance, the original objectives of our MACS project in Burkina Faso were to form an alliance of reproductive, maternal, newborn, and child health (RMNCH) organizations and build its capacity to advocate collaboratively and effectively to achieve concrete policy goals. Initially, the alliance members set a policy objective to increase the RMNCH budget by 25%. After alliance members began meeting with government officials, they discovered that they had an incomplete understanding of how Burkina Faso's health budget works, and that this original policy objective was not realistic. Other important steps were needed first: getting access to budget information, learning how to analyze it, finding out which government officials decided on the budget numbers, and understanding the process they used. Budget and spending transparency—through a published budget—became the alliance's new policy objective.

MACS alliance colleagues work together on strategies for achieving their budget objectives. (Photo by Catherine Lalonde)

This change in advocacy strategy doesn't mean that the strategy failed. The overall aim was to foster effective advocacy; to do that, we had to listen and understand how to work within the context of post-dictatorship, post-civil-uprising Burkina Faso. We have achieved a cohesive RMNCH alliance, we know much more about the health budget than we did before, and we are on the path toward seeing the government increase the RMNCH budget line.

Our evaluation will capture the richness of this story as well as the many others from the advocates who have worked in MACS alliances. The lessons we learn will make the MACS project even better, and will enrich and strengthen the future work of FCI and our partners as we continue advocating for the lives, health, and rights of women, newborns, and children.

Follow our blog for updates on the MACS project and the results of our evaluation.
