The editorial in last month’s Lancet reads:
“Evaluation must now become the top priority in global health. Currently, it is only an afterthought. A massive scale-up in global health investments during the past decade has not been matched by an equal commitment to evaluation. This complacency is damaging the entire global health movement. Without proper monitoring and accountability, countries and donors—and taxpayers—have no idea whether or how their investments are working. A lack of knowledge about whether aid works undermines everybody’s confidence in global health initiatives, and threatens the great progress so far made in mobilising resources and political will for health programmes in low-income and middle-income countries.”
I agree – but be careful what you wish for: evaluation is a double-edged sword. When a good evaluation shows evidence of impact, it can be one of the most powerful tools for advocating further investment and program rollout. When an evaluation shows less than desirable impact, it can mean the end of the initiative. It is therefore not particularly surprising that many global health initiatives have not made commensurate investments in evaluation – in particular the high-profile initiatives that have emerged over the past decade.
Evaluation benefits those programs whose perceived effectiveness is lower than their true effectiveness. High-profile initiatives such as the Global Fund and PEPFAR have been riding high, enjoying incredible popularity. In short, they have the most to lose from evaluation.
A case in point is the evaluation, also published in last month’s Lancet, of the Accelerated Child Survival and Development (ACSD) program, which was retrospectively evaluated by researchers at Johns Hopkins University. The editorial above describes the results of this evaluation as “mixed” – perhaps that is too kind. One could argue that the evaluation demonstrated the program to be a complete failure.
The basic message of this evaluation is that the ACSD program – previously heralded as a great success – likely had little or no impact on focus districts in at least 3 countries in West Africa. The ACSD program’s goal was to rapidly scale up three basic benefits packages thought to be effective against major childhood killers (IMCI+, ANC+, and EPI+) in 11 countries in West and Central Africa from 2001 to 2005. Running on what would be considered a barebones budget in today’s terms, the program focused on getting these packages into the communities of focus districts through community health workers and other investments in primary health infrastructure. The goal was to reduce child mortality by at least 25% – a previous evaluation conducted by UNICEF suggested that, part way through, the program had already reduced mortality by 20%, although the methods employed were probably not appropriate for making such claims.
Not only did the Hopkins evaluation find no differential impact of the ACSD on child mortality in the focus districts of the 3 countries studied; if anything, mortality declined more in the non-focus areas. The evaluation did find higher coverage for some indicators in the focus regions, but it is hard to call this strong evidence. Notably, coverage of the one package that targeted the greatest share of total childhood deaths – the IMCI package – did not improve in the focus areas and actually declined in one country.
A few caveats: the evaluation suffered from a number of methodological shortcomings, but my sense is that the evaluators did the best they could, given how little data was available for the analysis and how little effort the program implementers had made to ensure that appropriate data would exist for evaluation. Moreover, the focus districts were not randomly chosen, nor were they chosen using similar criteria in each country, which immediately created challenges for evaluating the outcomes. So while the results must be taken with a grain of salt, the overwhelming lack of evidence of program effectiveness should still be the key takeaway.
So what can be learned from this example? There is a great deal of value in evaluating global health initiatives, but one must realize that doing more evaluation will not be a boon to all of them. Overall, donors and recipients will benefit the most (and evaluation groups too!), but there will be winners and losers if evaluation efforts expand. This speaks to the need to establish rules and guidelines a priori to ensure unbiased evaluations. Donors should increase the transparency of their efforts by ensuring that the global health initiatives they support adopt standard evaluation practices and that the results of all such evaluations – not just the positive ones – are made available to the general public.
Not all evaluations will produce such dire results, so I agree: investing in evaluation may be an important strategy to improve and sustain political will for the massive scale-up of global health initiatives witnessed during the past decade. Sadly, for such benefits to be felt today – when they are perhaps most needed – the investments should have been made years ago. But it is never too late to begin.