New EPA White Paper on Probabilistic Risk Assessment

Earlier this month, EPA released for public comment a new white paper on probabilistic risk assessment, marking the Obama Administration’s first major foray into the contentious debate about EPA’s evolving risk assessment methods. Back in May, EPA Administrator Lisa Jackson announced changes to the way the Office of Research and Development (ORD) will update risk assessments for the IRIS database, but that announcement was made without any real public input, and it affected only the inner workings of one program office (albeit an important one). The public comment period on the new white paper presents the first opportunity for the various stakeholders who usually weigh in on EPA’s risk assessment policies to have some say in the new administration’s approach.

The new white paper, Using Probabilistic Methods to Enhance the Role of Risk Analysis in Decision Making, focuses on one of the fundamental problems in regulatory risk assessment – how should risk assessors and risk managers address the uncertainty and variability intrinsic to the risk assessment process?

The most straightforward way to answer that question, and EPA’s approach in many situations, is to use default assumptions. When the pesticides program staff are setting a limit for pesticide residue on apples, they can use a standard assumption about the number of apples a person eats. Or when ORD staff update IRIS profiles, they can assume a linear dose-response relationship for a suspected carcinogen’s toxicity. But as scientific knowledge about the parameters and models used in risk assessments grows, default assumptions can legitimately be replaced by data collected in the real world. Recognizing that every parameter and model used in a risk assessment carries some inherent level of uncertainty, and that variability in the population can have a significant impact on risk determinations, risk assessors can use probabilistic data to replace point estimates of specific parameters or generic model assumptions.
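To make the distinction concrete, here is a minimal sketch of the two approaches for a single exposure parameter. Every number, distribution, and variable name below is invented for illustration; none of it comes from the white paper or any EPA assessment. The point is only that a default assumption collapses to one conservative number, while a probabilistic treatment propagates distributions and yields a range of exposures from which percentiles can be read.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Default-assumption approach: a single point estimate for each input.
# (All values here are hypothetical, chosen only for illustration.)
apples_per_day = 2.0          # conservative point estimate of consumption
residue_mg_per_apple = 0.05   # assume residue at the maximum allowed level
exposure_point = apples_per_day * residue_mg_per_apple  # mg/day
print(f"Point-estimate exposure:  {exposure_point:.3f} mg/day")

# Probabilistic approach: replace each point estimate with a distribution
# (in practice, one fit to survey or monitoring data) and propagate it.
apples_per_day_dist = rng.lognormal(mean=np.log(0.8), sigma=0.6, size=100_000)
residue_dist = rng.beta(a=2, b=20, size=100_000) * 0.05  # most residues well below the cap

exposure_dist = apples_per_day_dist * residue_dist  # mg/day
print(f"Median exposure:          {np.median(exposure_dist):.4f} mg/day")
print(f"95th percentile exposure: {np.percentile(exposure_dist, 95):.4f} mg/day")
```

The probabilistic version reports a median and a 95th-percentile exposure rather than a single conservative figure, which is the kind of fuller picture the white paper is describing.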

Going back to the pesticide residue example: prior to 1998, EPA assumed that 100% of a crop with registered uses of a pesticide was treated with that pesticide, that all of the crop that ended up on grocery store shelves had residues of the pesticide at the maximum level allowed under the law, and that the relevant population ate the contaminated crop often (at the 95th percentile). Since then, EPA has started using probabilistic methods to fill in these data points. Instead of assuming that everyone eats a lot of a specific crop, EPA draws consumption data from USDA’s Continuing Survey of Food Intakes by Individuals (CSFII). Instead of assuming that every piece of fruit has the maximum amount of allowable pesticide residue, EPA collects data from “crop field trials, USDA’s Pesticide Data Program (PDP) data, Food and Drug Administration (FDA) monitoring data, or market basket surveys conducted by the registrants.” These data are then run through a probabilistic risk model that calculates not only the overall population’s risk, but also risks to various subpopulations (e.g., infants, kids between the ages of 6 and 12, etc.). (See Case Study 4, pp. 58-59, in the white paper.)
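As a rough illustration of how that kind of analysis can be structured, the sketch below resamples (bootstraps) hypothetical consumption and residue data and reports a 95th-percentile dose for each subpopulation. The group names, sample values, body weights, and reference dose are all made-up placeholders, not figures from CSFII, PDP, or the white paper’s Case Study 4.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 100_000

# Hypothetical empirical inputs standing in for consumption-survey and
# residue-monitoring data (units: kg of apples eaten per day, mg of
# residue per kg of apples). None of these numbers are real.
consumption_by_group = {
    "infants":  np.array([0.00, 0.02, 0.05, 0.08, 0.10]),
    "children": np.array([0.05, 0.10, 0.15, 0.20, 0.30]),
    "adults":   np.array([0.00, 0.05, 0.10, 0.15, 0.25]),
}
residue_samples = np.array([0.0, 0.0, 0.01, 0.02, 0.02, 0.05, 0.10])  # mg/kg
body_weight_kg = {"infants": 8.0, "children": 25.0, "adults": 70.0}
reference_dose = 0.0005  # mg per kg body weight per day, hypothetical

for group, consumption in consumption_by_group.items():
    # Resample from the empirical data instead of assigning every person
    # a single high-end point estimate.
    intake = rng.choice(consumption, size=N)          # kg/day
    residue = rng.choice(residue_samples, size=N)     # mg/kg
    dose = intake * residue / body_weight_kg[group]   # mg/kg bw/day
    exceed = (dose > reference_dose).mean() * 100
    print(f"{group:>9}: 95th pct dose = {np.percentile(dose, 95):.6f} "
          f"mg/kg/day, {exceed:.1f}% above reference dose")
```

Because each subpopulation is simulated separately, the output makes it easy to see when a standard that protects the average adult still leaves, say, infants above the reference dose.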

Proponents of using probabilistic methods to address uncertainty and variability in risk assessment argue that these methods will result in a “fuller characterization of risk,” that they can help identify vulnerable populations, and that they can highlight the spots where additional data could improve a risk assessment. But is it worth the time and effort? Collecting, validating, and analyzing all of the data needed to replace default assumptions with probabilistic models takes time and money, and it risks wading into a regulatory quagmire. Returning to the pesticide residue example: are market basket surveys conducted by pesticide manufacturers reliable data sources for estimating pesticide residues? Or is it better to assume maximum allowable residues, avoid the disputes about data reliability, and move on to the next decision? The decision whether to use probabilistic methods (vs. default assumptions) to fill data gaps is as much a policy choice as it is a scientific one.

Unfortunately, EPA’s white paper, despite its great background on what probabilistic methods are, what they can do, and how they work, does not do a very good job of describing the time, money, or other resources that are necessary to produce useful information using those methods. The paper has 16 case studies carefully chosen to show the broad range of probabilistic methods that EPA has used in recent years to add detail to various risk assessments. The case studies neatly show that probabilistic methods can sometimes lead to more stringent regulations, sometimes to less stringent standards, and sometimes make no difference at all. But what they lack is any quantitative evidence about the resources used in each case.

The white paper recommends that EPA improve its internal capacity for utilizing probabilistic risk assessment methods through training, knowledge sharing, and the development of general policies and guidance. This last point is one worth echoing. It is immensely important that EPA establish standard procedures that will help risk assessors and risk managers determine which tools to use and when to use them, so that the risk assessment process does not become so bogged down in data collection and analysis that the ultimate regulatory decisions needed to protect human health and the environment are unreasonably delayed.

(EPA has extended the comment period for the white paper until September 16. The docket number is EPA-HQ-ORD-2009-0645.)
