At an event marking five years since the release of USAID’s Evaluation Policy, USAID Administrator Gayle Smith noted, “Development is aspirational, but it’s also a discipline.” I couldn’t agree more.
As a researcher and practitioner, I approach development with a scientist’s eye: I draw on the best available evidence and carefully measure the impact of our programs to better serve our beneficiaries and maximize our limited funds. Together, let’s examine how USAID is learning from our experience and investing in rigorous impact evaluations in partnership with local stakeholders.
But aren’t impact evaluations difficult and expensive? Don’t they take years to show results? Is it ethical to “withhold” benefits from people in order to run a scientific experiment? I hear these questions often. For the past three years, I have managed a portfolio of eight land and resource governance impact evaluations across sub-Saharan Africa. Here are some common misconceptions and lessons I have learned from one impact evaluation in Zambia:
Myth #1: Impact evaluations are too expensive.
Impact evaluations do cost more than typical performance evaluations. But consider that a rigorous impact evaluation can significantly improve the results of a $15 million project while also informing USAID's global portfolio and the sector at large; viewed that way, investing $1 million in an impact evaluation becomes cost-effective in the long run.
There are also cost savings that start at the baseline — even before the program starts. In Zambia, our program implementer used the baseline evaluation data to develop village summaries with statistics on landholdings, population, livelihoods and land conflicts. These village summaries helped staff better understand the local context and how to target their assistance — essentially, the evaluation baseline provided a detailed needs assessment.
Myth #2: Impact evaluations take too long.
It is true that impact evaluations often take years from baseline to endline, compared to a few months for a typical performance evaluation. In my sector, changes in governance happen slowly. But we can learn a lot from the baseline alone, even before the program starts.
We are using our baseline data in Zambia to test our underlying program assumptions in real time. For example, we found that despite not having documentation of their land rights, farmers feel their rights are fairly secure from expropriation. We also found that farmers tend to invest less in labor-intensive practices, like live fencing, on fields where they feel their rights are less secure. We are sharing these findings with our colleagues and partners and using them to adapt our theories of change.
Myth #3: Randomization isn’t realistic.
Impact evaluations compare two groups over time: one that receives the intervention (treatment) and one that does not (control). In 2013, I stood before four chiefs in eastern Zambia to explain impact evaluations and why USAID wanted their permission to use a “lottery” to decide which villages we would support.
I knew this was going to be a tough sell, but the chiefs agreed that randomized selection was the fairest way to select who receives benefits from our limited resources. Randomization doesn’t work for everything, but it can work, even with complex governance programs.
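The “lottery” the chiefs approved is, at its core, a random assignment of villages to treatment and control groups. A minimal sketch in Python of how such a village-level lottery might work (the village names and fixed seed here are illustrative assumptions, not details of the actual Zambia program):

```python
import random

# Hypothetical village list -- names are placeholders for illustration only.
villages = ["Village A", "Village B", "Village C", "Village D",
            "Village E", "Village F", "Village G", "Village H"]

# A fixed seed makes the lottery reproducible, so the assignment
# can be audited and re-run by evaluators and stakeholders.
random.seed(42)
shuffled = random.sample(villages, len(villages))

# Split evenly: the first half receives the intervention (treatment),
# the second half serves as the comparison group (control).
half = len(shuffled) // 2
treatment = sorted(shuffled[:half])
control = sorted(shuffled[half:])

print("Treatment villages:", treatment)
print("Control villages:  ", control)
```

In practice, evaluators often stratify the lottery (for example, within each chiefdom) so that treatment and control groups are balanced on key characteristics, but the core idea is the same: chance, not discretion, decides who receives the program first.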
Myth #4: Impact evaluations aren’t fair.
Last month, I met with those same four chiefs in Zambia to review our progress. They are eager to register land rights in the control villages because they believe it helps reduce land conflicts. While this is an outcome we hope to achieve, we do not yet have conclusive evidence that conflicts have been reduced, or that our program caused any such effect. To truly help, we must first confirm that our approach works before we scale up, and the chiefs agreed to wait until 2018, after the endline, to work in the control villages.
Myth #5: USAID can’t be involved in evaluating our own programs.
Independent evaluations increase accountability and avoid bias. But USAID staff (and our implementing partners) can (and should!) be involved to leverage the diverse expertise necessary for a good impact evaluation design.
At USAID, we also need to facilitate coordination across programming and evaluation. When implementation and evaluation objectives do not align, we (USAID) need to help find the best solution. When our evaluation in Zambia reached an impasse on the right level (chiefdom, village, or household) for randomizing land registration, I helped reach consensus that it should be at the village level, since village headmen traditionally allocate land rights.
This kind of coordination and technical guidance requires more work than “outsourcing” the evaluation, but it also helps ensure we find the right balance between learning and implementation and that we maximize the effectiveness of our programs over time.
I hope this post has sparked some ideas, and I encourage you to consider how you can help build a more rigorous evidence base on what works in your discipline.