
Restoring Scientific Integrity to the Regulatory System Means Overhauling Cost-Benefit

Responsive Government | Air | Defending Safeguards | Water

Read more in this series

The Road Ahead

Also from James Goodwin: 

Cost-Benefit Analysis Is Racist

The Progressive Case Against Cost-Benefit Analysis

Overview: Beyond 12866: A Progressive Plan for Reforming the Regulatory System

What’s a Child’s IQ Point Worth?

In the wake of the Flint water crisis, which experts estimate may have exposed as many as 12,000 children to dangerous lead-tainted drinking water, the U.S. Environmental Protection Agency (EPA) began working to update its decades-old regulation aimed at cleaning up drinking water infrastructure to prevent lead and copper contamination. In 2019, more than five years after Flint officials issued the first "boil water" advisory of the crisis, EPA issued its proposed Lead and Copper rule.

The accompanying cost-benefit analysis is illustrative of the absurd methodology of cost-benefit analysis in the regulatory context. One of the proposal’s benefits, regulators reasoned, is that fewer children would suffer IQ loss as a result of the irreversible neurological damage that can occur through exposure to elevated levels of lead. Absolutely true. Then, since cost-benefit analysis requires that everything be expressed in dollars and cents, the EPA’s analysts set out to determine how many dollars preserved IQ points were worth. Their solution was to calculate the reduction in expected lifetime earnings that these children might experience due to having a lower IQ. And that was it, the full extent of thousands of children being spared neurological damage, as if “reduced earning potential” is the only negative consequence to flow from IQ loss. Even more remarkably, the agency economists then reduced this value to account for what they regarded as the “costs” of having a higher IQ – costs that include direct education-related expenditures (e.g., tuition and books), as well as foregone earnings due to spending more time in school instead of in the labor force. This “higher IQ penalty” ends up reducing the value of an IQ point by about 6 percent.
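The valuation logic described above can be reduced to a few lines of arithmetic. This is a minimal sketch of that logic, not EPA's actual model: the dollar figure for lifetime earnings is a made-up placeholder, and only the roughly 6 percent "higher IQ penalty" comes from the analysis discussed here.

```python
# Illustrative sketch (NOT EPA's actual model) of the earnings-based
# approach to valuing a child's IQ point. The earnings figure below is
# hypothetical; the ~6% "higher IQ penalty" is the adjustment described
# in the rule's cost-benefit analysis.

def value_of_iq_point(lifetime_earnings_per_point: float,
                      higher_iq_penalty: float = 0.06) -> float:
    """Monetized 'benefit' of preserving one IQ point: the expected
    lifetime earnings preserved, discounted by the supposed costs of
    having a higher IQ (tuition, books, time in school instead of
    in the labor force)."""
    return lifetime_earnings_per_point * (1 - higher_iq_penalty)

# Hypothetical: $20,000 in expected lifetime earnings per IQ point
print(value_of_iq_point(20_000))  # → about $18,800 after the penalty
```

Note what the calculation leaves out entirely: every consequence of neurological damage other than reduced earning potential.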

I’m a parent, and many of my friends are parents. Many of you reading this right now are parents. Would any of us calculate the value of sparing our children from neurological damage this way?

Such blind-to-reality calculations are sadly commonplace in the practice of the unique form of cost-benefit analysis that now dominates in the U.S. regulatory system. Defenders of the approach claim that it makes regulatory decision-making more “rational” and insulates the process against improper political or subjective considerations. Yet, as the EPA’s Lead and Copper rule illustrates, the methodological techniques this form of cost-benefit analysis uses can be arbitrary, unscientific, ethically dubious, and at times even absurd.

Dollar Signs Everywhere You Look

The root of the practical problems for this “monetize everything” version of cost-benefit analysis is that it asks questions for which accurate and meaningful answers are almost by definition impossible to supply. That this analytical framework is charged with lofty responsibilities but left with inadequate powers to fulfill them makes the resulting corruption of its methodologies and techniques almost inevitable.


This form of cost-benefit analysis aspires to identify “socially optimal” policies – that is, policies for which the net benefits have been maximized. To do this, the framework not only requires a comprehensive catalog of all the rule’s impacts, both good and bad; it then insists on converting those impacts to the common metric of dollars and cents. Achieving this first step involves building a reliable baseline – that is, a hypothetical alternative world without the policy against which to measure the policy’s real-world impacts. How many children would not suffer IQ loss in this hypothetical world without the policy? How many acres of wetlands would have been lost? How many premature deaths would have occurred?

For the second step, the analysis attempts to place a dollar-and-cent valuation on all of the anticipated impacts, including those without a market-set price because they involve the kinds of things that are not bought and sold in the marketplace. What is the value of avoiding childhood neurological damage? Each acre of wetland that was saved? Each premature death prevented? What is the value of preserving American Indian tribes’ cultural identity by preventing water pollution that would disrupt their traditional fishing practices?

Because this version of cost-benefit analysis is concerned with identifying the optimal policy solution, it must repeat these two steps for a large number of potential policy options, calculating the net benefits for each. The option that yields the largest net benefits is theoretically the optimal one.
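The two-step procedure described above amounts to a simple optimization: monetize every impact of every option, then select the option with the largest net benefits. A minimal sketch, with entirely hypothetical policy options and dollar figures:

```python
# Sketch of the net-benefit-maximization framework described above.
# All option names and dollar figures (in $ millions) are hypothetical.

options = {
    "no action":         {"benefits": 0.0,   "costs": 0.0},
    "moderate standard": {"benefits": 120.0, "costs": 40.0},
    "strict standard":   {"benefits": 150.0, "costs": 90.0},
}

# Step 1 and 2 are assumed done: impacts cataloged and monetized.
net = {name: v["benefits"] - v["costs"] for name, v in options.items()}

# The framework deems "optimal" whichever option maximizes net benefits.
optimal = max(net, key=net.get)
print(optimal, net[optimal])  # prints "moderate standard 80.0"
```

The sketch also illustrates the critique that follows: whenever benefits are under-monetized or zeroed out by data gaps, this argmax mechanically tilts toward the weaker standard.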

The data needed to answer these kinds of questions often simply do not exist or cannot be obtained at a reasonable cost. The resulting data gaps make it impossible to calculate net benefits with anywhere near the precision that this version of cost-benefit analysis demands. Even more insidiously, though, these data gaps asymmetrically affect the benefits side of the ledger, which ends up systematically skewing the results against stronger safeguards. In some cases, the analysis responds to these gaps by ignoring the regulatory benefits or arbitrarily assigning them a value of $0. In other cases, it simply plows ahead and seeks to fill the gaps by resorting to clumsy techniques that involve pseudo-science, arbitrary assumptions, and unrealistic leaps of logic.

The Pseudo-Science of Monetization

The results that the monetize-everything version of cost-benefit analysis produces have the veneer of scientific credibility and precision – they’re often calculated to the last penny, after all. But when the methods and techniques used to generate those results are subjected to closer inspection, the patina of accuracy and reliability loses its glow.

  • Putting a price on human life. Many public health and environmental regulations have the effect of preventing premature deaths, and so the monetize-everything version of cost-benefit analysis seeks to account for these benefits by placing a monetary value on a human life, which typically comes out to about $10 million. The preferred method for determining this value is through “wage premium” studies, which attempt to measure the additional compensation workers ostensibly “receive” in exchange for taking jobs that involve a slightly higher risk of death. If, for example, a job involves a 1 percent increase in risk of death, the study would then calculate the value of human life by multiplying the compensation premium by 100 (i.e., multiplying the 1 percent risk of death by 100 scales it up to a 100 percent risk, or certain death, which is supposed to serve as a proxy value for preventing that certain death). This resulting number is referred to as a “value of statistical life” or VSL. These studies rest on the unstated assumption that wage premiums reflect a genuine market-based transaction involving changes in the risk of death, and thus can serve as a proxy for human lives, which are not bought and sold in the marketplace. In stark contrast to this assumption, the basic preconditions necessary for workers to obtain the full measure of compensation they believe they deserve for assuming risky work rarely exist in reality. Among other things, workers typically lack complete information regarding the risks they face in the workplace and often face significant systematic bargaining power asymmetries relative to employers. In addition, it seems nonsensical to believe that the value assigned to a 1 percent risk of mortality would scale up in a linear fashion to 100 percent risk – i.e., certain death.
  • A “good enough” number for non-fatal cancers. In the cost-benefit analysis for its 2000 rule to limit arsenic in drinking water, the EPA found there was no economic research available for putting a monetary value on the non-fatal cancers the rule would prevent. To overcome this data gap, the agency simply borrowed the monetary value it used for preventing cases of chronic bronchitis, without offering any justification or explanation of why it considered the two non-fatal illnesses sufficiently analogous. To make matters worse, the source of the chronic bronchitis value itself is noteworthy for its lack of rigor and credibility: It was derived from a single survey of “389 shoppers from a blue-collar mall in Greensboro, North Carolina” conducted in the late 1980s.
  • Pricing prison rape. While the use of surveys to monetize non-market goods is common in the monetize-everything version of cost-benefit analysis, perhaps none is more notorious than a 2012 Department of Justice rule aimed at preventing prison rape. There, the agency used a survey to assign values to 17 different “categories” of rape and sexual assault – which it referred to as a “hierarchy of sexual victimization types” – each calculated to the last dollar. In this macabre exercise, rape was treated as just another market exchange, and preventing prison rape was only worth doing if the victim – not the perpetrator – was willing to pay the dollar cost of avoiding the crime.
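The wage-premium arithmetic in the first bullet above can be sketched in a few lines. The premium and risk figures are illustrative only, chosen so the result lands near the roughly $10 million VSL cited above:

```python
# Sketch of the "value of statistical life" (VSL) arithmetic described
# in the first bullet: scale the wage premium workers supposedly accept
# for a small added risk of death up to certainty. All inputs are
# hypothetical illustrations.

def value_of_statistical_life(wage_premium: float,
                              risk_scaling_factor: int) -> float:
    """Multiply the premium by the factor that scales the small added
    risk up to a 100 percent risk, i.e., certain death (e.g., 100 for
    a 1 percent added risk, per the example in the text)."""
    return wage_premium * risk_scaling_factor

# Hypothetical: $1,000 extra pay for a 1-in-10,000 added risk of death
print(value_of_statistical_life(1_000, 10_000))  # prints 10000000
```

The linear scaling is exactly the assumption the text questions: there is no reason to believe the price a worker would accept for a 1-in-10,000 risk extrapolates linearly to certain death.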

As these examples illustrate, for many of the most important regulatory impacts, the task of reducing real-world human benefits to numbers that can be squeezed into a ledger introduces into regulatory decision-making new sources of irrationality and arbitrariness where none had previously existed. Despite what supporters of this form of cost-benefit analysis might claim, the methodology makes regulatory decision-making less scientific and rational, undermining its credibility and legitimacy.

A Detour from Rigorous and Rational Regulatory Analysis

Importantly, agencies like the EPA and the DOJ are not legally required to rely on this approach for evaluating the impacts of their regulations. In fact, the flaws in this approach were apparently clear enough to the Members of Congress who wrote most of the protective statutes agencies implement through regulations: they routinely opted for other decision-making approaches to weighing costs and benefits, such as feasibility analysis and multi-factor qualitative balancing. Significantly, all of these alternative approaches employ a practical stopping mechanism to keep costs in check relative to benefits, but they do so in ways that are more intuitive, rational, and credible. The Supreme Court has likewise endorsed the notion that agencies generally have significant discretion in how they evaluate regulatory impacts, observing in Michigan v. EPA that ultimately it is “up to the agency to decide how to account for costs [and benefits].”


So, why would agencies continue to use this problematic version of cost-benefit analysis that both Congress and the Supreme Court have rejected? Several converging factors seem to play a role. The most significant of these is Executive Order 12866, a directive issued by President Bill Clinton in 1993 that provides the overarching regulatory policy framework across the executive branch. In particular, this order establishes the institutions of centralized review at the White House Office of Information and Regulatory Affairs (OIRA) and the broad mandate for subjecting many executive branch agency rules to cost-benefit analysis.

Critically, Executive Order 12866 defines its broad cost-benefit analysis mandate in terms that are closely aligned with the narrow, monetize-everything version. For example, it states that “in choosing among alternative regulatory approaches, agencies should select those approaches that maximize net benefits.” In specifying how agencies should carry out the task of conducting cost-benefit analyses on their rules, the order also directs agencies to include “to the extent feasible, a quantification of those benefits [and costs],” as well as an “assessment, including the underlying analysis, of costs and benefits of potentially effective and reasonably feasible alternatives to the planned regulation.”

Other important factors include the role that OIRA’s staff play in reviewing rules. While not required by Executive Order 12866 (or any other law or policy), these staff historically have been dominated by individuals trained to be economists. This pattern has contributed to an organizational culture at OIRA that reinforces the primacy of the monetize-everything version of cost-benefit analysis, as well as its pronounced skepticism toward regulation more generally.

It is hard to deny the “cognitive lure” of the crisp numbers that this narrow approach to cost-benefit analysis produces, and undoubtedly agencies feel the strong pull those numbers exert. The results of an analysis are typically just one of many details that agencies include in the fact sheets and other materials accompanying the announcement of significant final rules. Yet, these numbers often steal the limelight, as they tend to “crowd out” qualitative descriptions of the rules’ impacts. For instance, they often emerge as major talking points in news stories and statements from members of Congress regarding new rules.

And it’s no accident that there are so many institutional forces that push agencies to employ the monetize-everything version of cost-benefit analysis despite its intrinsic flaws. For decades, industry-allied conservative think tanks have carried out a concerted campaign aimed at creating the conditions for elevating the prominence of this form of analysis in the regulatory system. At the heart of this campaign was a recognition that such analysis would serve as a powerful weapon for defeating robust safeguards – and to do so in a way that would not attract public disapproval or controversy.

Getting Regulatory Analysis Back on the Right Track

Ending the hegemony that the monetize-everything version of cost-benefit analysis currently enjoys in the regulatory system will thus require significant wholesale changes to how OIRA review is conducted, as well as to the practice of regulatory impact analysis itself. Broadly speaking, these changes should seek to give agencies greater freedom to take the Supreme Court’s admonition in Michigan to heart and pursue better approaches to assessing the advantages and disadvantages of their rules – approaches that avoid the theoretical and methodological pseudo-science of the monetize-everything version of cost-benefit analysis.

The first and most important step to restoring rationality, integrity, and credibility to regulatory analysis is to rescind Executive Order 12866 (along with any successor orders that build on it) and replace it with a new executive order that advances a progressive vision of regulation. Progressives should join together and fight for a new executive order that seeks, among other things, to ensure a robust and nuanced description of the consequences of regulations, in the qualitative terms that preserve the richness and diversity of what is at stake, and for whom.

Critically, the approach called for in the order would not attempt to translate every value into dollars and cents – no matter what gets lost along the way in such translations. Rather, it would look to economists for matters within their narrow realm of expertise while reserving space for other relevant disciplines for matters within their respective domains (e.g., epidemiology, sociology, etc.). It would thereby enhance the credibility and integrity of agencies’ assessments, making them mesh more naturally with ordinary people’s experience. Such an approach would also have the benefit of promoting regulatory decision-making that better accounts for bedrock progressive values, such as equity and justice.

To accomplish this new approach to regulatory analysis, the new executive order on progressive regulatory policy should:

  • Prohibit agencies from using the monetize-everything version of cost-benefit analysis, unless otherwise explicitly required by law to do so. In particular, the order should explicitly prohibit agencies from attempting to place a monetary value on any non-market goods. If a benefit defies monetization, the agency should not pretend otherwise. It should also explicitly prohibit agencies from calculating their rules’ net benefits, which effectively forces agencies to monetize the non-market impacts of their regulations.
  • Direct agencies to use the context-specific methods specified in their authorizing statutes for considering costs and benefits. As part of this alternative approach to regulatory analysis, the order should encourage agencies to describe relevant impacts quantitatively (when possible) and qualitatively. When performing a quantitative analysis, the agency should be encouraged to use natural units (e.g., number of premature deaths prevented or acres of wetlands protected) or other terms that are both meaningful to ordinary members of the public and consistent with the specific decision-making criteria called for in the standard set by the authorizing statute. Taken together, these provisions would serve to replace the specific cost-benefit analysis requirements contained in Executive Order 12866.
  • Prohibit agencies from summarizing their analysis in a chart that presents total quantified costs and benefits unless the chart also includes qualitative descriptions of all significant categories of non-quantifiable impacts. The order should similarly prohibit agencies from discussing quantified costs and benefits in other materials (e.g., fact sheets, press releases, and slide presentations) unless they include a meaningful qualitative description of unquantified impacts. Such qualitative descriptions should be clearly anchored in the objectives and goals of the relevant authorizing statute, as well as any decision-making factors specified by the applicable statutory standard.

Second, it will be essential to bring about a fundamental change in OIRA’s organizational culture as well. To accomplish this, a future president would need to start at the top and appoint an OIRA administrator with a demonstrated commitment to securing consumer and worker safety, public health, and the environment through effective regulation. Such a person would be someone who, through a long career of advocacy for public safeguards, has come to understand the flaws of the monetize-everything version of cost-benefit analysis.

Once confirmed, the new OIRA administrator should commit to using the tools at their disposal to change how OIRA conducts its day-to-day work to reflect the new approach to regulatory analysis. One important step an OIRA administrator can take in this regard is to diversify the professional staff at OIRA. Among other things, the OIRA administrator could seek to promote greater disciplinary diversity, as well as greater diversity in terms of the life experiences of OIRA’s staff. In particular, hiring staff with expertise in areas like sociology, public health, law, environmental sciences, and communications, instead of economics, will help to alleviate the strong bias towards the monetize-everything version of cost-benefit analysis. OIRA should also strive to increase the diversity of the economists it hires by seeking out individuals with training and expertise in heterodox approaches to the discipline (i.e., approaches that offer an alternative vision from the neoliberal tradition that has prevailed in economics over the last several decades).


Learn More About Cost-Benefit Analysis and the Need for a Progressive Overhaul of the Regulatory System

CPR Member Scholars and staff have researched and written extensively about cost-benefit analysis, its long reach, and its many abuses and misuses. Read the most recent posts here. You may also want to read their reports and op-eds on cost-benefit.

Visit our clearinghouse page on cost-benefit analysis, OIRA, and the need for reform of the regulatory system.
