-
The Five W’s of Distributional Analysis: A Primer on Why, When, Who, What, and HoW to Conduct Distributional Analysis
Vasundhara Gaur, Lawrence Berkeley National Lab; and Jason Schwartz, Institute for Policy Integrity
Despite longstanding calls for agencies to incorporate equity considerations and distributive impacts into their cost-benefit analyses, empirical evidence in the literature suggests that agencies have not made much progress since 1993 toward the goal of more comprehensive distributional analysis. Among the reasons for this failure may be insufficient guidance to stakeholders and academics on how to produce the data agencies need, and insufficient guidance to agencies on how to take the initial steps toward conducting better distributional analysis. This paper seeks to facilitate progress by fleshing out guidance on distributional analyses for stakeholders, academics, and agencies.
The purpose of this paper is threefold. For stakeholders, we highlight the importance of accounting for distributional effects, and provide guidance on how they can help agencies conduct detailed distributional analyses by engaging in the rulemaking process and providing data and information to agencies where needed. For academics, we highlight the economic underpinnings of traditional cost-benefit analyses (CBAs), and provide historical background on the use of distributional analysis in regulatory rulemaking. We also demonstrate that while regulatory agencies in the US have been conducting distributional analyses since the 1970s, their approach has been inconsistent. We examine possible explanations for this inconsistency and discuss paths forward.
For agencies, we provide guidance on when and how they can conduct a distributional analysis. Specifically, we (1) define when law and policy permit, encourage, and mandate distributional analysis, and (2) flesh out how agencies can screen for the relevance, usefulness, practicality, and appropriateness of conducting a distributional analysis. In particular, we expand on guidance from the updated Circular A-4 by providing preliminary definitions and step-by-step advice on how to assess which rulemakings will present the most useful, practical, and appropriate cases for beginning to conduct distributional analysis. Finally, we provide guidance on the stage of the rulemaking process at which agencies should conduct distributional analyses, and discuss best practices for conducting them, including what agencies can do when they are unable to conduct high-quality analyses.
-
Uniform value-per-statistical-life or equity weights? Theoretical and practical considerations
Maddalena Ferranna, University of Southern California; James Hammitt, Harvard University; and Lisa Robinson, Harvard University
The value-per-statistical-life (VSL) is the most widely used monetary measure to evaluate changes in mortality risk. Theoretical and empirical studies have shown that VSL estimates vary across individuals depending on their preferences and life circumstances (e.g., income, baseline mortality risk, and age). In particular, VSL is typically increasing in income, partly because of its positive dependence on ability to pay. Even though VSL is individual-specific, it is common practice to use a uniform VSL in policy evaluation. Equity is the main justification for using a common VSL, as it seemingly avoids making any judgment about how VSL should vary with individual characteristics. In effect, a uniform VSL is tantamount to attaching relatively larger weight to survival benefits for the disadvantaged and smaller weight to survival benefits for the advantaged. Thus, a uniform VSL mimics the use of equity weights, while avoiding the challenges of explicitly computing the weights. Through economic modeling and simulation exercises, the paper explores the assumptions, ethical choices, and policy implications of the common practice of using a single, population-average value for mortality risk reductions across all individuals and risks. We find that the divergence in policy rankings between a uniform-VSL analysis and an equity-weighted benefit-cost analysis depends on three conditions: (i) the distributive-justice theory captured by the weights; (ii) the distribution of survival benefits and policy costs across the population of interest; and (iii) the correlation between the survival benefits and individual willingness to pay for a small change in mortality risk. We show that the use of a uniform VSL promotes distributive justice only under some conditions.
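A stylized illustration of the equivalence the abstract describes (the notation below is ours, not the authors'): with individual risk reductions and individual-specific VSLs, uniform-VSL valuation coincides with equity weighting when the implicit weights are inversely proportional to each person's VSL.

```latex
% Stylized illustration (our notation, not the authors'):
% equity-weighted vs. uniform-VSL valuation of mortality-risk reductions \Delta p_i
\begin{align*}
  B_{\text{EW}}      &= \sum_i w_i \,\mathrm{VSL}_i \,\Delta p_i , &
  B_{\text{uniform}} &= \sum_i \overline{\mathrm{VSL}}\,\Delta p_i .
\end{align*}
% The two coincide when the implicit weights are
\[
  w_i = \overline{\mathrm{VSL}} \,/\, \mathrm{VSL}_i ,
\]
% which are larger for individuals with lower VSL (e.g., lower income),
% so a uniform VSL acts like equity weighting with these implicit weights.
```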
-
Health-augmented lifecycle-model-based estimates of the value of health globally
JP Sevilla, Data for Decisions; Maddalena Ferranna, University of Southern California; and David Bloom, Harvard University
We use a health-augmented lifecycle model (HALM) to generate, for most countries globally, spanning all levels of development and geographical regions: (i) age-specific Value of a Statistical Life Year (VSLY) and Value of a Statistical Health Utility (VSHU) estimates, which build on cross-country extrapolations of the Value of a Statistical Life (VSL) and on information about utility-function parameters; (ii) simpler and more conservative estimates of age-specific VSLY and VSHU, equal to age-specific levels of full income and full consumption respectively, which do not require VSL extrapolations or knowledge of utility-function parameters; (iii) age-specific VSL from birth onwards, extending standard VSL estimates that in theory apply only from mid-life onwards; (iv) age-invariant VSLY, VSHU, and Value of a Quality-Adjusted Life Year (VQALY) estimates, which build on cross-country VSL extrapolations and information about utility-function parameters; and (v) simpler and more conservative estimates of age-invariant VSLY, VSHU, and VQALY that build on estimates of full income and full consumption but not on VSL or utility-function parameters. We compare our results to the prominent VSL estimates of Robinson, Hammitt, and O'Keeffe (2019), and to rule-of-thumb VQALY measures equal to 1 to 3 times per capita gross domestic product (PCGDP). We discuss the use of our estimates in cost-benefit analysis (CBA) of health technologies and of health-affecting policies, including in equity-weighted CBA.
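For readers unfamiliar with VSL-to-VSLY conversions, the sketch below illustrates one common simplification: treating VSL as the present value of an annuity of a constant VSLY over remaining life expectancy. This is a textbook relation, not necessarily the HALM calculation in the paper, and the inputs are illustrative placeholders rather than the authors' estimates.

```python
# One common simplification linking VSL and a constant VSLY: treat VSL as the
# present value of an annuity of VSLY over remaining life expectancy.
# This is a textbook relation, not necessarily the HALM calculation in the
# paper; the inputs below are illustrative placeholders, not the authors' estimates.

def annuity_factor(years: float, rate: float) -> float:
    """Present value of 1 per year for `years` years at discount rate `rate`."""
    return years if rate == 0 else (1 - (1 + rate) ** -years) / rate

def vsly_from_vsl(vsl: float, remaining_life_expectancy: float, rate: float) -> float:
    """Constant VSLY implied by a given VSL under the annuitization simplification."""
    return vsl / annuity_factor(remaining_life_expectancy, rate)

if __name__ == "__main__":
    # Hypothetical inputs: VSL of $1.0 million, 40 remaining life years, 3% discounting.
    print(round(vsly_from_vsl(1_000_000, 40, 0.03)))  # roughly 43,000 per life year
```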
-
Pareto for Pigs and Puppies?
Kevin Kuruc, University of Texas at Austin
The inclusion of animal welfare has the potential to greatly affect cost-benefit analysis. For example, including the harms and benefits that animals experience due to agricultural policy choices would almost certainly change what is considered optimal policy. Using standard tools from economic theory, this paper argues that economists have little choice but to include animal welfare in policy analyses. This result follows from appending a novel, but uncontroversial, inter-species axiom to standard axioms of social choice theory. This inter-species axiom is a simple extension of the familiar Pareto Principle: if an outcome makes no human or animal better off, but worsens the life of at least one animal, it is a social worsening. In conjunction with the standard axioms of social choice (completeness, continuity, transitivity, and so on), this weak axiom becomes powerful. We prove that the only social objective functions satisfying this combination of axioms are additive between human and animal welfare, with a non-zero weight on animals. In an application of this framework, we show that even very small Pareto weights on animals make their interests quantitatively relevant in cases where large numbers of animals are affected, suggesting that this theoretical point cannot simply be ignored.
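A stylized rendering of the result as we read it (the notation and numbers below are ours, not the paper's): the admissible objectives are additive across humans and animals with some strictly positive animal weight, and even a small weight matters at scale.

```latex
% Stylized form of the result as we read it (our notation, not the paper's):
\[
  W \;=\; \sum_{i \in \text{humans}} u_i \;+\; \lambda \sum_{j \in \text{animals}} v_j ,
  \qquad \lambda > 0 .
\]
% Even a small \lambda matters at scale: with \lambda = 10^{-3} and 10^{9}
% affected animals each gaining one welfare unit, the aggregate contribution
% is \lambda \times 10^{9} = 10^{6} human-equivalent welfare units.
```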
-
A Tool for Generating Estimates of WTP for Animal Welfare from Existing Data
Monica Saavoss, USDA Animal and Plant Health Inspection Service; Mark Budolfson, University of Texas at Austin; Bob Fischer, Texas State University; and Kevin Kuruc, University of Texas at Austin
This presentation introduces a model for producing estimates of willingness to pay (WTP) for animal welfare from existing data. The model takes WTP estimates for animal-welfare attributes of common food products from the literature and adjusts them according to species and intervention type, thereby offering a generalized method for estimating the non-market value of any intervention that improves (or worsens) the welfare of any species. The purpose of this model is to address the lack of data on willingness to pay for most interventions to improve animal welfare across most animal species, despite clear evidence that the public values animal welfare in general. The model rests on the theory that when consumers spend money, or express willingness to spend money (through surveys, choice experiments, or voting behavior), they base their decision on two factors: the amount by which they believe they are improving the animal's welfare and the characteristics of the animal. To measure the change in animal welfare, we use the framework developed by Cynthia Schuck-Paim and Vladimir Alonso (The Welfare Footprint Project). We use brain mass as a proxy for consumers' relative valuation of welfare across species, and we provide evidence that brain mass is consistent with observed variation in willingness to pay and, consistent with the psychological literature, is correlated with empathy for animal species. This model is preliminary and leaves much room for improvement; nevertheless, it meets the standard generally applied to other forms of non-market valuation. The model is silent on normative questions about how humans should value animal welfare.
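To make the transfer idea concrete, the sketch below shows one way such an adjustment could work. The abstract does not specify the functional form, so the proportional scaling by welfare change and by brain mass, along with every name and number in the snippet, is our own illustrative assumption rather than the authors' model.

```python
# Hypothetical sketch of the kind of adjustment the abstract describes. The
# actual functional form is not given there, so the proportional scaling below
# (by the size of the welfare change and by brain mass as a species proxy) is
# our own assumption, and all names and numbers are illustrative only.

def scaled_wtp(base_wtp: float,
               base_welfare_gain: float, target_welfare_gain: float,
               base_brain_mass_g: float, target_brain_mass_g: float) -> float:
    """Transfer a WTP estimate from a studied intervention/species to a new one."""
    welfare_ratio = target_welfare_gain / base_welfare_gain  # Welfare Footprint-style measure
    species_ratio = target_brain_mass_g / base_brain_mass_g  # brain mass as species proxy
    return base_wtp * welfare_ratio * species_ratio

if __name__ == "__main__":
    # Placeholder example: a $0.50 base WTP, transferred to an intervention with
    # half the welfare gain for a species with one quarter of the brain mass.
    print(scaled_wtp(0.50, 1.0, 0.5, 400.0, 100.0))  # 0.0625
```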