Taking a more rigorous approach to evaluation

Evaluation researcher Barbara Befani explains how a different methodological approach helped IIED evaluate whether the Uganda Poverty and Conservation Learning Group had influenced policy.

Article, 18 May 2017

Barbara Befani advised IIED on the methodology for the Uganda Poverty and Conservation Learning Group evaluation

Evaluation can provide valuable insights into what works and what doesn't, but in some areas of work, such as policy influence, getting clear evidence can be hard.

Barbara Befani explains how IIED used a different approach to evaluate whether the Uganda Poverty and Conservation Learning Group (UPCLG) successfully influenced the policy of the Uganda Wildlife Authority regarding the share of the mountain gorilla tourist permit fee received by the local community.

Q: Why is it difficult to evaluate policy influence?

BB: I don't think the argument is that it's difficult to evaluate policy influence, but that it's difficult to evaluate policy influence in a rigorous way, or at least to do so using some of the methods that are currently considered rigorous. It's difficult because the outcome is not a quantitative indicator, such as the number of children in primary school. It's about whether something happened or not, and then it's about understanding the qualitative aspects of what happened.

And there is a second problem: even if you find ways to measure the outcome, you can't use control groups, so you can't use counterfactual analysis. Policy processes are complex and they unfold over time, so it's very difficult to find a control case that is identical to yours, except that you were not there to try and influence the process.

Q: How do you overcome that difficulty?

BB: You need to use other methods. Essentially, the way to overcome this is to think about causality in a different way. Instead of asking very specific causal questions about net effect (which part of the outcome can be attributed solely to the intervention), we need to phrase the causal question differently, by constructing a hypothesis that describes how the outcome was achieved.

How was the policy influenced by the intervention? How did the process unfold? You can think of a number of alternative processes, some more successful, some less successful… and then you look for empirical evidence for these various hypotheses [or contribution claims] to have an idea as to which one is most strongly supported by the evidence.

Q: How did you apply this in the evaluation of the Uganda Poverty and Conservation Learning Group?

BB: The idea was to evaluate the activity of this network. The network thought that this [the change in the share of the gorilla tourist permit fee given to the community] was a good case, an example of good practice.

So the idea was: let's apply this new method to this case and see if we have empirical support for the idea that it was a good case. Or was it just a subjective impression that could have been biased by a number of things, for example the fact that we anticipated this outcome and believed it was going to work?

At the beginning, we started with two very different, relatively simple hypotheses as to whether or not the network had influenced the decision to increase the amount of the gorilla tourist permit fee that went to the local community. These hypotheses were refined or rejected as we collected and assessed empirical evidence. 

The methodology is called process tracing, but we combined it with Bayesian updating (a way of measuring and updating levels of confidence in a claim depending on the available evidence) to make it more rigorous while still fitting well with the research question. 
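To make the Bayesian side of this concrete, here is a minimal sketch of a single update. It is not taken from the evaluation itself and the probabilities are purely illustrative, but it shows how confidence in a contribution claim rises or falls once one piece of evidence is weighed.

```python
# Minimal sketch of one Bayesian update of confidence in a contribution claim.
# All numbers are illustrative only; none are taken from the UPCLG evaluation.

def update_confidence(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: posterior confidence in the claim after seeing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

prior = 0.5  # before looking at the evidence we are 50:50 on the claim
# Suppose a detailed interview account would very likely exist if the claim
# were true (0.8) but is fairly unlikely otherwise (0.2).
posterior = update_confidence(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(round(posterior, 2))  # 0.8: confidence rises because the evidence is informative
```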

One of the good things about process tracing is that it allows you to see how different factors interact in the mechanism: what happens first, what happens later, what were the contextual conditions, was there a window of opportunity that somebody took advantage of? This fits very well with the Uganda case, because we learnt very early on that it wasn't just the magic of the network that had produced the policy change.

Bwindi Impenetrable National Park in southwest Uganda (Photo: Mariel Harrison)

Q: Was the network's success in securing the policy change a result of a combination of factors?

BB: There had been some pressure from the community, and there were also some specific conditions that were thought to have facilitated the change. For example, a member of the network was also a member of the Uganda Wildlife Authority, and community pressure was particularly high because the gorilla fee had just been raised. At some point the authority became worried because research by the network showed that there were illegal activities taking place in the wildlife park that were linked to the discontent related to revenue sharing.

So, if you look at the description of the process, it makes you realise that the Uganda Wildlife Authority didn't just [increase the community's share of the fee] because the network suggested it. It was a more complex process. And this is what process tracing does: it allows you to understand with relatively high precision the role of the intervention under those specific circumstances.

So, we are not just answering whether the network had any influence, but also how it influenced the decision, and for me, the 'how' question is a much more sophisticated way to answer the 'whether' question.

Q: If process tracing allows you to unpick how something happened, what is the role of Bayesian updating?

BB: In process tracing, the idea is to try and assess the strength of evidence for various claims, and there are four categories to classify evidence into. But sometimes it's difficult to understand which piece of evidence fits into which category. If you do Bayesian updating, you solve this problem of categorisation, or at least make the categorisation process much more transparent and consistent.

You can also easily consider different pieces of evidence together, as in a package, and calculate the probative value (the strength) of each one as well as of the whole package.
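As a rough illustration of how a package of evidence might be handled, the sketch below (again with invented evidence types and numbers, and assuming the pieces of evidence are independent of one another) applies the same Bayes update piece by piece, using the ratio of the two conditional probabilities as a simple stand-in for each piece's probative value.

```python
# Illustrative only: assessing a package of evidence by sequential Bayesian updating.
# Each piece is described by how likely it is to be observed if the claim is true
# versus if it is false; their ratio (the likelihood ratio) is used here as a simple
# measure of probative value. The evidence types and numbers are invented, and the
# pieces are assumed to be independent of one another.
evidence_package = [
    ("meeting minutes",   0.7, 0.3),
    ("interview account", 0.8, 0.2),
    ("policy document",   0.6, 0.4),
]

confidence = 0.5  # starting confidence in the contribution claim
for name, p_if_true, p_if_false in evidence_package:
    probative_value = p_if_true / p_if_false
    # Bayes' rule: update the confidence given this piece of evidence
    confidence = (p_if_true * confidence) / (
        p_if_true * confidence + p_if_false * (1 - confidence)
    )
    print(f"{name}: probative value {probative_value:.1f}, confidence now {confidence:.2f}")
```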

In the Uganda case, our confidence was not very high for some components of the claim because we could not talk to people who might have been able to give us stronger evidence. And that influenced the confidence in the overall claim – that the network influenced the decision – because the overall claim is influenced by the confidence in the different components.

Thanks to Bayesian updating we were able to describe every micro-step of the process that led us to estimate this confidence very clearly, measuring significance with numbers, and making the whole process replicable, which is a pre-condition for obtaining reliable (or "rigorous" and "robust") findings. 

Q: Why is this important?

BB: We are doing this because there is a debate around rigorous methods. The community no longer accepts methodologically weak evaluations. Evaluations are no longer just an internal process for institutions. They are useful for accountability to taxpayers and learning for several external stakeholders.

Our findings need to be able to withstand challenges and 'attacks' from a broad range of groups. So for qualitative methods, this adds rigour and clarity as to how the findings and outcomes are obtained.

Barbara Befani (Befani@gmail.com) is a research fellow at the University of Surrey, a research associate at the University of East Anglia and an independent consultant. She was the methodological advisor on the Uganda Poverty and Conservation Learning Group evaluation.

Resources

IIED has published a briefing paper about the evaluation methodology and a full report about the impact evaluation of the Uganda Poverty and Conservation Learning Group's work.

Stefano D'Errico, IIED's monitoring, evaluation and learning lead, also gave a presentation on 'Clearing the fog of impact claims: contribution tracing to assess research influence' that referenced this work at an Association of Commonwealth Universities event in London in July 2017.

Contact

To find out more about the evaluation methodology, contact:

Stefano D'Errico (stefano.derrico@iied.org), monitoring, evaluation, accountability and learning manager, IIED's Strategy and Learning Group

Francesca Booker (francesca.booker@iied.org), researcher, Natural Resources research group