The editorial in last month’s Lancet reads:

“Evaluation must now become the top priority in global health. Currently, it is only an afterthought. A massive scale-up in global health investments during the past decade has not been matched by an equal commitment to evaluation. This complacency is damaging the entire global health movement. Without proper monitoring and accountability, countries and donors—and taxpayers—have no idea whether or how their investments are working. A lack of knowledge about whether aid works undermines everybody’s confidence in global health initiatives, and threatens the great progress so far made in mobilising resources and political will for health programmes in low-income and middle-income countries.”

I agree – but be careful what you wish for: evaluation is a double-edged sword. When good evaluations are done and show evidence of impact, they can be among the most powerful tools for advocating further investment and program rollout. When evaluations show less-than-desirable impact, they can mean the end of such initiatives. It is therefore not particularly surprising that many global health initiatives have not made commensurate investments in evaluation – in particular the high-profile initiatives that have emerged over the past decade.

Evaluation benefits those programs whose true effectiveness is higher than our prior belief about it. High-profile initiatives such as the Global Fund and PEPFAR have been sailing high, enjoying enormous popularity. In short, they have the most to lose from evaluation.
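
To make the intuition concrete, here is a minimal, purely illustrative sketch (the noise model, the equal-weight belief update, and all the numbers are my own assumptions, not anything from the Lancet pieces): if an evaluation is an unbiased but noisy measurement of a program's true effect, then on average it pulls perceived effectiveness from the prior toward the truth – upward for under-appreciated programs, downward for those whose reputation has outrun reality.

```python
import random

def expected_posterior_shift(prior_belief, true_effect, noise_sd=0.05, n_sims=10_000):
    """Average change in belief after one noisy, unbiased evaluation.

    Stylized model (assumption): the evaluation measures the true effect
    with normal noise, and beliefs update to the simple average of the
    prior and the measurement (equal weights, for illustration only).
    """
    shifts = []
    for _ in range(n_sims):
        measurement = random.gauss(true_effect, noise_sd)
        posterior = 0.5 * prior_belief + 0.5 * measurement
        shifts.append(posterior - prior_belief)
    return sum(shifts) / n_sims

# Under-appreciated program: believed to cut mortality 10%, truly cuts 20%.
print(expected_posterior_shift(prior_belief=0.10, true_effect=0.20))  # ~ +0.05

# "Sailing high" program: believed to cut mortality 25%, truly cuts 5%.
print(expected_posterior_shift(prior_belief=0.25, true_effect=0.05))  # ~ -0.10
```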

A case in point is an evaluation, also published in last month’s Lancet, of the Accelerated Child Survival and Development (ACSD) program, which was retrospectively evaluated by researchers at Johns Hopkins University. The above editorial describes the results of this evaluation as “mixed” – perhaps that is too kind. One could argue that the evaluation demonstrated that the program was a complete failure.

The basic message of the evaluation is that the ACSD program – previously heralded as a great success – likely had little or no impact in the focus districts of at least 3 countries in West Africa. The ACSD program aimed to rapidly scale up three basic benefit packages thought to be effective against major childhood killers (IMCI+, ANC+, and EPI+) in 11 countries in West and Central Africa from 2001 to 2005. Running on what would be considered a barebones budget by today’s standards, the program focused on delivering these packages to communities in the focus districts through community health workers and other investments in primary health infrastructure. The goal was to reduce child mortality by at least 25%; an earlier evaluation conducted by UNICEF suggested that, partway into the program, mortality had already fallen by 20%, although the methods employed were probably not appropriate for making such claims.

Not only did the Hopkins evaluation find no differential impact on child mortality from the ACSD in the focus districts of the 3 countries where it was conducted; if anything, mortality declined more in the non-focus areas. The evaluation did find higher coverage for some indicators in the focus districts, but it is hard to call this strong evidence. Notably, coverage of the one package targeting the greatest share of total childhood deaths – the IMCI package – did not improve in the focus areas and actually declined in one country.

A few caveats – the evaluation suffered from a number of methodological shortcomings, but my sense is that the evaluators did the best they could, given that little data was available for the analysis and program implementers had not done enough to ensure that appropriate data would be available for evaluation. Moreover, the focus districts were not randomly chosen, nor were they chosen using similar criteria in each country, which immediately created challenges for evaluating outcomes. So while the results of the evaluation must be taken with a grain of salt, the overwhelming lack of evidence of program effectiveness should still be the key takeaway.

So what can be learned from this example? There is a great deal of value in evaluating global health initiatives, but one must also realize that more evaluation will not be a boon to all of them. Overall, donors and recipients will benefit the most (and evaluation groups too!), but there will be some winners and some losers if evaluation efforts expand. This speaks to the need to establish rules and guidelines a priori to ensure unbiased evaluations. Donors should increase the transparency of their efforts by ensuring that the global health initiatives they support adopt standard evaluation practices and that the results of all evaluations – not just the positive ones – are made available to the general public.

Not all evaluations will produce such dire results, so I agree: investing in evaluation may be an important strategy for improving and sustaining political will for the massive scale-up of global health initiatives witnessed during the past decade. Sadly, for such benefits to be felt today – when they are perhaps most needed – those investments should have been made years ago. But it is never too late to begin.


3 Responses to “The evolution of evaluation in global health”

  1. Anonymous says:

    Karen, Thanks for your comment. I agree with your basic points, and they are similar to some of the ideas that Bill Savedoff and I (along with others) put forth in the work we did on closing the "evaluation gap" (http://www.cgdev.org/section/initiatives/_active/evalgap). In particular, the idea of a registry of evaluations, as with clinical trials in medicine, is a good one and would help prevent the bias toward revealing only positive findings.

    I'm concerned, though, about the construct of "winners" and "losers" from evaluation. I think that corresponds to a conception of global health programs as products, getting a thumbs up or down from evaluators, as commodities do from Consumer Reports. But evaluation should really be seen not as a means of providing go / no-go information, but as a way to give decisionmakers at all levels information about how to improve programs in future iterations. There are ways to organize evaluations that promote this way of thinking: look at design variants instead of with/without the program; get a donor to fund two phases of a program on the condition that the first phase is well evaluated and the second is then designed to reflect the findings of that evaluation; and more.

    –Ruth Levine, Center for Global Development

  2. Bill Savedoff says:

    I fully agree with Ruth.

    Lant Pritchett has a great piece supporting your point about why development agents might prefer ignorance to evidence (http://ideas.repec.org/a/taf/jpolrf/v5y2002i4p251-269.html).

    This was the impetus behind setting up the International Initiative for Impact Evaluation (www.3ieimpact.org), so that the knowledge-building process can go on independently of the urge to identify (and the fear of identifying) winners and losers.

    –Bill Savedoff, Social Insight

  3. Barnes says:

    Personally, I see evaluation as one of the most lacking facets of global health care. Quality of care is largely determined by the resources that the health initiative, whatever it may be, has at its disposal. Evaluation can be an extremely effective tool for managing and appropriating resources.

    I do agree that, if abused, evaluation can turn into the sharpest of double-edged swords.

    Case in point: http://cdn.icyou.com/topics/health-wellness/unicef-three-months-after-quake-taking-stock-haiti+
