This post was originally published on LPE Blog and is part of a symposium on the future of cost-benefit analysis. Reprinted with permission.

In the actual work of crafting the regulatory safeguards that protect our environment and health, cost-benefit analysis has been largely ineffectual and irrelevant. Indeed, its ineffectiveness has been so profound as to prompt even its most ardent practitioners and proponents to question whether it has any impact on agency decisions at all. Meanwhile, it plays at best a minor role in the legal standards that actually govern agency decision-making. Despite all this, a certain cost-benefit orthodoxy has become remarkably entrenched in environmental policy circles. Especially in an era when so many progressive ideas are in ascendance, why does the idea of regulatory review based on CBA, first brought to us half a century ago by the two Ronalds—Ronald Coase and Ronald Reagan—have such staying power?

Decades ago, political scientist Charles Lindblom observed that proponents of what he called “the synoptic ideal”—the idea that we can comprehensively assess the pros and cons of every conceivable alternative and choose the optimum—inevitably talk as though this approach is the only rational decision-making process. That tendency is on full display among the CBA crowd, who often treat CBA as synonymous with rationality.

The problem with this ideal is that, while attractive in theory, it flounders in practice. When the limitations of bounded rationality and information scarcity render the synoptic ideal unattainable, as they so often do, optimization tools no longer produce rational results. Instead, they risk producing what Herbert Simon called “approximate optimization,” in which the set of consequences and alternatives taken into account is artificially and arbitrarily pruned. That process produces an “optimal decision in the approximated world [that] is not necessarily even a good decision in the real world.”

Even at the EPA, which is frequently held up as the CBA gold standard, the practice of CBA too often becomes just such an exercise in “approximate optimization.” One problem is that the agency just doesn’t have the data to quantify all the benefits. In fact, when I took a look at the EPA’s major rulemakings over a 13-year period spanning most of the George W. Bush and Obama administrations, it turned out that the agency left significant categories of benefits unquantified 80 percent of the time. No surprise there when you consider that of the thousands of chemicals currently produced in our economy, only a small subset have undergone sufficient toxicity testing to support regulation. Indeed, of all the pollutants the EPA regulates, there’s really only one—particulate matter—that the agency has decent data on. Without sufficient knowledge about the harms produced by pollutants, CBA will systematically ignore the benefits of regulation.

Another problem is that in the majority of cases, the EPA only analyzes the costs and benefits associated with one, two, or maybe three alternatives at most.

If you can’t quantify all the significant benefits, you can’t make a meaningful calculation of net benefits. And if you can’t calculate net benefits, or if you only analyze a couple of alternatives, you can’t find your way to the nirvana of net benefits maximization.

That leaves you with a very different tool from the bright shiny engine of welfare maximization CBA’s adherents have tried to sell us on. If important benefits are left out of the equation the vast majority of the time, then CBA operates at best as an informal screening tool, telling us, if we’re lucky, whether the benefits of a regulation in a rough sense exceed the costs. (When you’re not so lucky and your partial benefits estimate comes out lower than your cost estimate, it doesn’t tell you much of anything.)
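
To make the one-sided logic of that screening explicit (the notation here is mine, not the author’s): write B for a rule’s true total benefits, B_q for the subset of benefits the agency can actually quantify (so B_q ≤ B), and C for its estimated costs. Then

\[
B_q > C \;\Longrightarrow\; B > C,
\qquad\text{whereas}\qquad
B_q < C \;\not\Longrightarrow\; B < C,
\]

because the unquantified benefits, B − B_q, may or may not make up the shortfall. A partial estimate can pass a rule through the screen, but it can never meaningfully flunk one.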

Once demoted from a formal optimization tool to a rough screening tool, CBA loses its normative pedigree in welfare economics and joins the ranks of the other perhaps less theoretically beguiling but highly pragmatic cost screening tools that Congress has so often relied on in crafting our environmental statutes. These are the scrappy, street-smart tools of regulatory decision-making, like feasibility analysis, cost-effectiveness analysis, and multi-factor balancing—tools that arguably make up for in pure pragmatic effectiveness what they lack in theoretical elegance. Once your goal is no longer to reach the mythical state of economic efficiency, but rather to ensure that costs are not in some general sense unreasonable, these other tools may actually get you there more quickly, easily, and—dare I say—efficiently.

It’s not that these tools don’t consider costs and benefits. They do. They just do it in a way that doesn’t indulge the mythical fantasy of a one-size-fits-all tool for attaining the “synoptic ideal.” Instead, these tools are tailored to specific contexts and circumstances, and they consider costs and benefits in a way that recognizes and works within data gaps and limits on knowledge. They also avoid the messy and controversial business of trying to express intangible values—things like a long painful cancer death, a species pushed to extinction, or a polluted haze over the Grand Canyon—in terms of dollars and cents.

These more pragmatic decision tools are the bread and butter of actual agency decision making in environmental law because these are the tools that Congress has by and large directed agencies to use in the statutes that govern them. In fact, Congress has only rarely directed agencies to make decisions on the basis of CBA. And when it has, it has either made CBA optional or suggested CBA of the scrappy, informal variety rather than the formal, optimizing kind.

Indeed, in the one instance in which a federal appeals court actually tried to impose a formal optimizing CBA requirement on the EPA, Congress came back and explicitly overruled that court’s holding. After witnessing agency paralysis stymie its efforts to clean up our air and water, Congress wanted tools that worked—that delivered real results. And most of the time the tool that fit the bill was some variety of feasibility analysis, occasionally supplemented by cost-effectiveness analysis or multi-factor balancing.

And while it’s become fashionable to say that the Supreme Court now requires CBA as a matter of rational, non-arbitrary-and-capricious agency decision making, that claim is based on a misreading of the Court’s opinion in Michigan v. EPA—a misreading that’s grown into an urban legend of sorts. The Supreme Court did not say the agency had to do a formal cost-benefit analysis in that case; it said the EPA had to “consider costs.” As we’ve seen, there’s a big difference between the two. Agencies have lots of ways to consider costs, including the CBA alternatives listed above.

The Court made very clear that the choice among those different tools is left up to the agency. Writing for the majority, Justice Scalia explicitly recognized that, “It will be up to the agency to decide (as always, within the limits of reasonable interpretation) how to account for cost.” Moreover, the Court went out of its way to include a specific disclaimer of formal CBA: “We need not and do not hold that the law unambiguously required the Agency . . . to conduct a formal cost-benefit analysis in which each advantage and disadvantage is assigned a monetary value”—a point on which the four dissenting justices specifically agreed.

In short, the kind of formal, monetized CBA that’s become de rigueur in regulatory review doesn’t work very well in practice and is not, by and large, required by law. As the Biden administration sets to work “improving and modernizing” the process of regulatory review, it would do well to heed the directive of Clinton’s EO 12866 to respect “the primacy of Federal agencies,” as well as the Supreme Court’s admonition that there is no one-size-fits-all tool for regulatory decision making. Agencies should decide how best to account for costs and benefits by choosing among the wide array of tools available. This choice should be tailored to the particular context in which the rulemaking arises, with close attention to the feasibility of quantifying and monetizing the relevant costs and benefits and to the agency’s statutory mandates.