I did it – it took me 4 days – but I did it: I read a whole economics research paper, looked at the tables, and even understood (most of) it. My baby brain is apparently starting to heal…
What was it that finally brought me out of my maternal stupor? It was the title of a new NBER working paper recently released by Nava Ashraf (HBS), Gunther Fink (HSPH) and David Weil (Brown) entitled “Evaluating the Effects of Large Scale Health Interventions in Developing Countries: The Zambian Malaria Initiative”. Wow – an economic evaluation of the health impacts of a national health program!?! But the more I read, the more I realized that the article was not going to provide the kind of findings I craved – precise estimates of the health impact of such programs, or insights into which health interventions were more effective than others in controlling malaria. Instead, the paper was one of the more interesting pieces I have read outlining the challenges of undertaking such evaluations in the real world.
In 2003, the government of Zambia launched one of the most ambitious malaria control programs ever undertaken in a modern-day developing country. Donors had invested heavily in Zambia because malaria control programs were believed likely to succeed there. Typical of such initiatives, it seems, evaluation was an afterthought, and mechanisms were not put in place to ensure that adequate data would be available to evaluate the impact of the programs. Although there was a national health management information system (HMIS) as well as standard demographic and health surveys (DHS), it took a bunch of economists – and a bunch of someone else’s money – to get enough of the right people to collect and clean existing data sufficiently to even attempt such an evaluation.
To give you some sense of what the authors were up against, their discussion of the data in the working paper – something to which many economists give only minimal attention – stretches to almost 10 pages. That is a lot. The challenges were manifold: missing and incomplete reporting from health facilities, inconsistent reporting structures over time, a lack of systematic verification processes, major inconsistencies in reported data, inconsistent metrics, and so on. And there were further complications: how did the rollout of diagnostic tests alter the diagnosis of malaria (previously every fever was counted as malaria, whereas now cases were being distinguished)? How did the rollout of other health programs affect malaria outcomes? And how did a major change in user fee policy affect the utilization of health services? These are major challenges, some of which the authors tried to address, while others were almost impossible to address fully.
Their noble effort provides some evidence of an association between the rollout of the malaria programs and improvements in under-5 mortality, with the bed net association being more robust than those for the other malaria interventions (although one should not make too much of this relative finding, given that very different things were being measured in different areas). However, it is very difficult to attribute much of these health improvements to the malaria interventions alone. The authors were unable to control for the rollout of the many other important health improvement efforts, which were likely correlated with the rollout and uptake of the malaria interventions: some areas of a country are frequently prioritized over others, good health management at the subnational level can lead to some areas excelling along many dimensions, and Zambia was one of the countries that most effectively experimented with the integrated delivery of health services such as ITNs and measles vaccination (for example, see here).
I think the authors should be commended for their valiant efforts to make use of existing data systems, including national HMIS datasets, to conduct such an evaluation. I suspect most people would have given up upon seeing how bad the data were and moved on to the next question. Alternatively, others would have tried to set up their own parallel data systems rather than investing in and using existing data sources. While I think the evaluation does provide some evidence that the malaria programs contributed in some way to declines in mortality, the real value of the working paper is in showing how challenging it will be to ever disentangle the causal mechanisms behind the health impact of any of the many dozens of large-scale national health programs currently underway. Others interested in looking into such questions should see this paper as a warning that it will not be easy, but also as encouragement that it is worth investigating existing data systems to see what is already available.