A new study by James Marton, Jaesang Sung, and Peggy Honore, published in the journal Health Services Research and Managerial Epidemiology, questions the value of government spending on public health programs and even finds that such spending may be detrimental to community health.

While this finding appears counterintuitive to many health policy analysts and public health advocates, it is consistent with arguments made by opponents of the Affordable Care Act’s (ACA) Prevention and Public Health Fund, which again has been targeted for elimination in the U.S. House of Representatives’ 2016 Budget Resolution and in a related bill under consideration in the House. This new study also contradicts the results from several previous studies, including a large longitudinal study I published in 2011 in Health Affairs.

Given the current political controversies and policy uncertainties surrounding this topic and the ACA Fund, it is important to take a closer look at this new study to identify reasons for discrepancies in research findings. I conclude that this study’s particular approach to measurement and analysis leads its authors to draw incorrect conclusions about the impact of public health spending on community health.

Conflicting Results Lead To Different Policy Implications

First, it is important to note that the discrepancies in findings between this study and previous research are not trivial but glaring, pointing to very different policy implications. Marton et al. focus on communities in the single state of Georgia over a 12-year period and find that “increases in public health spending lead to increases in mortality by several different causes, including early deaths and heart disease deaths.”

The same general relationship is also found when examining heart disease morbidity. The authors speculate that this finding is due to government funds “crowding out” private investment in public health programs, with the policy implication being that these programs are not wise targets for public financing.

By contrast, our earlier study followed a larger national cohort of communities over a 13-year period to find that “mortality rates fell between 1.1 percent and 6.9 percent for each 10 percent increase in local public health spending.” Our findings imply that government spending on public health programs can play a meaningful role in improving population health and reducing geographic disparities in preventable mortality.

Differences in the health outcomes measured, the time periods used, and even the communities studied are unlikely to be large enough to explain this extreme divergence in results and implications. So what is behind the divergence, and which answer is closer to the truth?

The latest study from Marton et al., like previous ones, examines the funds that flow to local governmental public health agencies — entities that implement a wide variety of health protection activities at the community level. These agencies are responsible for implementing programs to monitor community health status, investigating and controlling disease outbreaks like measles and influenza, educating the public about health risks and prevention strategies, preparing for and responding to natural disasters and other large-scale health emergencies, and enforcing laws and regulations designed to protect health such as those concerning tobacco exposure, food and water safety, and air quality.

By measuring the dollars that flow to these agencies for this work, and linking dollars to measures of community-level health outcomes like potentially preventable deaths and cases of disease, researchers attempt to estimate the health consequences attributable to local public health funding.

Different Methods And Measures To Estimate Impact

Marton et al. focus specifically on local agencies operating in the state of Georgia in order to take advantage of a unique state funding mechanism that ostensibly helps the researchers identify the causal impact of public health funding on community health outcomes. One key reason these types of studies may not be able to draw valid causal inferences is that the funding is not randomly allocated to local public health agencies.

Rather, some formula-driven federal and state public health grants allocate dollars to communities based on the prevalence of health problems and risks in those communities, so that healthier communities get fewer dollars. Other competitive grant programs funnel dollars to the communities that can develop the best proposals and the strongest rationales for their interventions, which is likely to favor better-resourced agencies and communities.

And of course local governments grant funds to these agencies from their general tax revenues, making agency funding levels contingent at least partly on local economic conditions, public values, and political dynamics. The net effects of these various public financing mechanisms on funding levels for public health agencies are complicated, variable, and uncertain.

But if funding levels, on net, are determined partly by factors that also influence community health status and risks (like disease prevalence or economic conditions), then failure to account for this endogeneity will lead studies to obtain biased estimates of the health effects attributable to public health funding.
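To see how this kind of endogeneity can produce a spurious positive association between spending and mortality, consider a minimal simulation of my own (the variable names and parameter values below are illustrative assumptions, not estimates from either study). A latent community “need” factor drives both funding allocations and mortality; a naive regression that ignores need can return a positive spending coefficient even when the assumed true effect of spending is protective.

import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical county-year observations

# Latent community "need" (disease burden, poverty) that the analyst does not observe.
need = rng.normal(size=n)

# Formula-driven allocation: needier communities receive more public health dollars.
spending = 0.8 * need + rng.normal(scale=0.5, size=n)

# Assume the true causal effect of spending on mortality is protective (-0.3),
# while need independently raises mortality.
mortality = -0.3 * spending + 1.0 * need + rng.normal(scale=0.5, size=n)

# Naive regression of mortality on spending, ignoring need:
naive_slope = np.polyfit(spending, mortality, 1)[0]
print(f"naive estimate: {naive_slope:+.2f}  (assumed true effect: -0.30)")

Under these assumed parameters the naive estimate comes out strongly positive (around +0.6), illustrating how unmodeled need can reverse the sign of the estimated spending effect.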

Our earlier national study dealt with this potential endogeneity bias using an instrumental-variables study design that is now well established in the health and social services research literature. We found that local governance and administrative structures produced considerable variation in public health funding levels and had no direct influence on health outcomes, so these instruments served as an excellent stand-in for randomization in our study. The new Georgia study, by comparison, attempts to address this vulnerability by focusing on a unique funding stream that allocates state public health general grant-in-aid (GGIA) dollars to Georgia counties based on their land value and population size as measured in 1970.
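In generic notation (my simplified sketch, not the exact specification from either paper), a two-stage design of this kind first predicts spending using an instrument Z_c that shifts funding levels but has no direct effect on health, and then estimates the health effect using only that predicted variation:

\[
\widehat{\text{Spending}}_{ct} = \hat{\pi}\, Z_{c} + X_{ct}\hat{\delta}
\qquad\Longrightarrow\qquad
\text{Mortality}_{ct} = \beta\, \widehat{\text{Spending}}_{ct} + X_{ct}\theta + \varepsilon_{ct}
\]

Here c indexes communities, t indexes years, and X_ct denotes observed covariates. The credibility of \beta rests on the exclusion restriction that the instrument (in our national study, measures of local governance and administrative structure) influences mortality only through its effect on spending.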

Since the funding allocation parameters are based on 1970 county characteristics and have not been updated over time, the researchers argue that county health outcomes in the 21st century should not be strongly influenced by economic and demographic characteristics that existed when Nixon occupied the Oval Office. This assumption is debatable given what we know about the persistence of poverty and other social determinants of health in certain rural and urban communities.

But this assumption is not the most controversial element of Marton et al.’s study design.

Choosing What Funding Stream To Measure

The authors make a much more problematic decision to focus only on the per-capita amount of funds received by counties through the unique GGIA funding mechanism—rather than measuring public health funds received from all sources—as their primary exposure variable of interest. This choice helps to reduce the risk of endogeneity bias, but it unleashes other severe problems.

This is a problematic choice of measure because the GGIA represented a very small and declining source of funding for Georgia’s local public health agencies throughout the 2000 to 2011 study period. The authors report that total GGIA funding statewide amounted to $66 million in 2011, representing less than $7 per capita. The authors do not report the total amount of funding received by Georgia local public health agencies, but a related data source indicates that total funding averaged $43 per capita among these agencies in 2012.
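The per-capita figure is easy to check with rough arithmetic; Georgia’s 2011 population of roughly 9.8 million is my approximation rather than a number reported in the study, and the comparison mixes a 2011 GGIA figure with a 2012 total-funding figure:

\[
\frac{\$66\ \text{million}}{\approx 9.8\ \text{million residents}} \approx \$6.70\ \text{per capita},
\qquad
\frac{\$6.70}{\$43} \approx 16\ \text{percent}
\]

In other words, by these figures the bulk of local agencies’ resources comes from sources other than the GGIA.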

Overall, GGIA funding appears to represent a very small share of the total resources used by local agencies to produce public health activities. As a result, GGIA is likely to be a very poor proxy measure for a county’s total public health funding — particularly if the counties that receive fewer GGIA dollars face larger incentives and opportunities for securing non-GGIA sources of public health funding, and if these “low-GGIA” counties receive greater support from Georgia’s system of multi-county district public health authorities.

Even more consequential for this longitudinal study, total GGIA funding was stagnant or declining over most of the period of study, falling from $70 million in 2000 to $66 million in 2011 even before adjusting for inflation. This trend means that counties with stable or growing population sizes—presumably among the more economically vibrant and healthy communities—would have experienced declines in per capita GGIA funding.

Conversely, increases in per capita GGIA funding would have accrued primarily to counties experiencing depopulation and economic decline, a pattern that many rural and inner-city communities have endured well into the 21st century. Ultimately, the authors’ measure of public health funding provides a poor and misleading signal of how public health resources changed over time, relative to population size, within Georgia counties.
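A purely hypothetical example (the dollar amounts and populations are mine, chosen for round numbers) shows the mechanics: hold a county’s GGIA allocation fixed and let only its population change.

\[
\text{Growing county: } \frac{\$500{,}000}{100{,}000\ \text{residents}} = \$5.00 \;\longrightarrow\; \frac{\$500{,}000}{125{,}000\ \text{residents}} = \$4.00\ \text{per capita}
\]
\[
\text{Shrinking county: } \frac{\$500{,}000}{100{,}000\ \text{residents}} = \$5.00 \;\longrightarrow\; \frac{\$500{,}000}{80{,}000\ \text{residents}} = \$6.25\ \text{per capita}
\]

In both cases the agency’s actual resources are unchanged; the apparent “increase” or “decrease” in per capita GGIA funding is driven entirely by the population denominator.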

As measured in this study, public health funding increases primarily among counties that are in decline. Furthermore, the analytic strategy chosen by the authors is likely to exacerbate this measurement problem by focusing the analysis only on within-county changes in GGIA funding over time (i.e., county fixed effects), and by using lagged values of funding as instruments to predict subsequent funding levels.
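In stylized notation (my simplification of the design as described, not the authors’ exact specification), the estimating framework looks roughly like:

\[
\text{Mortality}_{ct} = \beta\,\text{GGIA}_{ct} + \alpha_c + \delta_t + \varepsilon_{ct},
\qquad \text{GGIA}_{ct} \ \text{instrumented by its lag}\ \text{GGIA}_{c,t-1}
\]

where \alpha_c are county fixed effects and \delta_t are period effects (the latter being a typical panel-model assumption on my part). Because the county effects absorb all time-invariant differences between counties, \beta is identified entirely from within-county movements in per capita GGIA funding over time, which, as argued above, reflect population and economic trajectories more than real changes in public health resources.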

These choices lead the study’s estimated “impact” of public health funding to rely primarily on two groups of counties: (1) those that “gain” GGIA funds over time due to persistent patterns of economic and population decline, and (2) those that “lose” GGIA funds due to steady economic and population growth. GGIA funding per capita functions as a proxy measure for economic and demographic deterioration, and no amount of econometric firepower is likely to overcome this statistical phenomenon.

The Bottom Line: Is Public Health A Good Buy?

With these limitations clearly in focus, this study’s estimate of an inverse relationship between GGIA funding and health status no longer appears surprising. Unfortunately, this result fails to tell us anything very useful for policy about the health effects attributable to public health spending.

The public financing mechanisms that support public health activities in the U.S. are complex, variable, and inter-related. Focusing on only one of these mechanisms, while ignoring the independent and interactive effects of other mechanisms, is a research strategy destined to yield the wrong answer.

More and better studies are needed on this topic, but the best available evidence right now (including our national study as well as recent research from California, North Carolina, New York, Washington, Florida and a national cohort of metropolitan communities) suggests that public health is a good buy: Community health improves when governments allocate resources to public health programs and infrastructure like those supported through the Prevention and Public Health Fund.