I did it – it took me 4 days – but I did it:  I read a whole economics research paper, looked at the tables, and even understood (most of) it.  My baby brain is apparently starting to heal…

What was it that finally brought me out of my maternal stupor?  It was the title of a new NBER working paper recently released by Nava Ashraf (HBS), Gunther Fink (HSPH) and David Weil (Brown) entitled “Evaluating the Effects of Large Scale Health Interventions in Developing Countries: The Zambian Malaria Initiative”.  Wow – an economic evaluation of the health impacts of a national health program!?!  But the more I read, the more I realized that the paper was not going to provide the kind of findings I craved – precise estimates of the health impact of such programs, or insights into which health interventions were more effective than others in controlling malaria.  Instead, it is one of the more interesting pieces I have read on the challenges of undertaking such evaluations in the real world.

In 2003, the government of Zambia launched one of the most ambitious malaria control programs ever undertaken in a modern-day developing country.  Donors had invested heavily in Zambia because it was believed that malaria control programs were especially likely to succeed there.  Typical of such initiatives, it seems, evaluation was an afterthought, so mechanisms were not put in place to ensure adequate data for evaluating the programs' impact.  Although there was a national health management information system (HMIS) as well as standard demographic and health surveys (DHS), it took a bunch of economists – and a bunch of someone else’s money – to get enough of the right people to collect and clean the existing data sufficiently to even attempt such an evaluation.

To give you some sense of what the authors were up against, their discussion of the data in the working paper – something many economists give only minimal attention to – stretches over almost 10 pages.  That is a lot.  The challenges were manifold: missing and incomplete reporting from health facilities, inconsistent reporting structures over time, a lack of systematic verification processes, major inconsistencies in reported data, inconsistent metrics, and so on.  And there were further challenges: how did the rollout of diagnostic tests alter the diagnosis of malaria (before, every fever was counted as malaria; now cases were being distinguished)?  How did the rollout of other health programs affect malaria outcomes?  And how did a major user fee policy change affect the utilization of health services?  These are major challenges, some of which the authors tried to address while others were almost impossible to address fully.
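To give a flavor of this kind of data work, here is a minimal sketch in Python/pandas – entirely my own illustration, not from the paper, with all facility names and numbers made up – of the sorts of checks one would have to run on facility-level HMIS reports: finding facility-months that never reported, catching facilities whose case definition switched mid-stream, and flagging reports submitted without case counts.

```python
import pandas as pd

# Toy stand-in for facility-level HMIS reports (all names and numbers
# are hypothetical, purely to illustrate the kinds of checks involved).
reports = pd.DataFrame({
    "facility": ["A", "A", "A", "B", "B", "C"],
    "month": ["2004-01", "2004-02", "2004-03", "2004-01", "2004-03", "2004-02"],
    "malaria_cases": [120, 95, None, 40, 55, 210],
    "metric": ["confirmed", "confirmed", "confirmed", "fever", "confirmed", "confirmed"],
})

# 1. Completeness: which facility-months never reported at all?
expected = pd.MultiIndex.from_product(
    [reports["facility"].unique(), reports["month"].unique()],
    names=["facility", "month"],
)
observed = pd.MultiIndex.from_frame(reports[["facility", "month"]])
print("Missing facility-months:", list(expected.difference(observed)))

# 2. Consistent metrics: which facilities switched case definitions
#    mid-stream (e.g. from "fever" to lab-confirmed malaria)?
metrics_used = reports.groupby("facility")["metric"].nunique()
print("Facilities mixing metrics:", metrics_used[metrics_used > 1].index.tolist())

# 3. Incomplete reports: rows submitted with no case count.
print("Reports missing case counts:\n", reports[reports["malaria_cases"].isna()])
```

Multiply checks like these across every district, metric, and reporting-structure change over several years and you get a sense of why the data section runs to 10 pages.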

Their noble effort provides some evidence of an association between the rollout of the malaria programs and improvements in under-5 mortality, with the bed net association being more robust than those for the other malaria interventions (although one should not make too much of this relative finding, given that they were looking at very different things in different areas).  However, it is very difficult to attribute much of these health improvements to the malaria interventions alone.  The authors were unable to control for the rollout of the many other important health improvement efforts that were likely correlated with the rollout and uptake of the malaria interventions – because some areas in a country are frequently prioritized over others, because good health management at the subnational level could lead to some areas excelling along many dimensions, and because Zambia is one of the countries that have most effectively experimented with integrated delivery of health services such as insecticide-treated nets (ITNs) and measles vaccination (for example see here).
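For readers unfamiliar with why this omitted-variable problem is so damaging, here is a minimal sketch – again my own illustration, not the authors' code, using simulated data with made-up numbers – of a district-year panel regression of under-5 mortality on bed net coverage with district and year fixed effects. When a concurrent health program is rolled out alongside the nets but left out of the model, its effect loads straight onto the bed net coefficient.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated district-year panel (all variables and effect sizes invented).
rows = []
for district in range(20):
    for year in range(2004, 2009):
        coverage = rng.uniform(0, 1)                 # bed net coverage share
        other_programs = 0.5 * coverage + rng.uniform(0, 0.5)  # confounder rolled out with the nets
        mortality = 120 - 30 * coverage - 20 * other_programs + rng.normal(0, 5)
        rows.append((district, year, coverage, other_programs, mortality))
df = pd.DataFrame(rows, columns=["district", "year", "coverage", "other_programs", "mortality"])

# Naive model: district and year fixed effects, but the concurrent
# programs are unobserved, so their effect is absorbed into 'coverage'.
naive = smf.ols("mortality ~ coverage + C(district) + C(year)", data=df).fit()
print("biased coverage effect:", round(naive.params["coverage"], 1))   # near -40, not the true -30

# If the confounder were observed, the coefficient recovers the truth.
full = smf.ols("mortality ~ coverage + other_programs + C(district) + C(year)",
               data=df).fit()
print("coverage effect, confounder controlled:", round(full.params["coverage"], 1))
```

In the simulation the fix is trivial because the confounder is in the data; in Zambia it was not, which is exactly the authors' predicament.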

I think the authors should be commended for their valiant efforts to make use of existing data systems, including the national HMIS datasets, to conduct such an evaluation.  I suspect most people would have given up once they saw how bad the data were and moved on to the next question.  Alternatively, others would have tried to set up their own parallel data systems rather than investing in and using existing data sources.  While I think the evaluation does provide some evidence that the malaria programs have contributed to declines in mortality in some way, the real value of the working paper lies in showing how challenging it will be to ever disentangle the causal mechanisms behind the health impacts of the many dozens of large scale national health programs currently underway.  Others interested in looking into such questions should see this paper as a warning that it will not be easy, but also as encouragement that it is worth investigating what existing data systems already have available.


2 Responses to “Evaluating the impact of the Zambian Malaria Initiative”

  1. Nandini says:

Karen, despite your baby brain, you did a fine job of describing a valuable message from this paper about data availability and quality, and the challenges of conducting evaluations of large scale programs when the existing data isn't the best. I think you are absolutely right – people give up when they find poor data and don't make an effort to work with what they have, OR some use poor quality data and don't describe the data's limitations well, so results aren't easily understood. What these researchers plodded through and documented will indeed be valuable lessons for those intrepid souls who dare to evaluate large national health programs. There is also a strong message (based on what you've described, because I haven't yet read the paper) that investment in health system strengthening must include health information systems, so that impact evaluations with high quality data can be conducted to understand which interventions are working and why.

  2. Brian Hanley says:

    Nice summary. I would put more faith in direct field observations myself. The picture is quite muddy, and the most common intervention may not be anything done by any agency. Many villagers have learned to use gasoline rubbed on the skin, on counters, and sprayed in homes to control mosquitoes. It works to some degree, but how well? Nobody knows, and nobody knows what interaction these have with other interventions.

Some local people have also learned that pouring oil on swamps cuts mosquitoes, a practice the USA itself used 60 years ago. Since oil spills in the region have become an extreme problem, access to oil is not difficult. Accidental oil spills in the huge quantities that have occurred may also be one of the most effective "interventions" and another confounder for anybody trying to disentangle impacts.
