Some members of Congress apparently do not want agencies to regulate powerful agricultural and pharmaceutical interests in order to protect the public from dangerous risks. Yet, rather than say that — and be held accountable to the electorate for the consequences — they have developed what has become a standard, indeed almost boilerplate pretext to hide their endgame.
Specifically, they have drafted a provision, snuck in as a rider to a farm bill, that requires agencies to develop elaborate “high standards” for the use of science before they can regulate. Even more problematic than the provision’s obscurity is the fact that, rather than deferring to the scientific community’s idea of what these high scientific standards should be, members of Congress establish the rules of the game on their own. Given their politically charged origins, it is not surprising that these congressionally developed rules are decidedly not in the public interest, nor are they consistent with the true “high standards” of science.
“Good science” sounds like a good thing, like motherhood and apple pie. But those who spend their lives studying the tedious details of regulations and laws understand that there is plenty at stake in this type of provision. Most obvious is the fact that complying with the additional scientific procedures will slow the agencies’ work further still as they invest added effort into the ambitious new procedures and prepare for inevitable litigation challenges. Lawyers call these types of added mandatory provisions “attachment points,” because high-stakes players can latch onto them and use them to bring a seemingly endless stream of legal challenges against the agency, slowing its work to a snail’s pace.
In 2005, the City of Austin discovered that coal-tar-based asphalt sealant was killing the highly endangered Barton Springs salamander. The sealant was leaching off freshly sealed parking lots and entering downstream pools where these fragile animals live. The surprise ending to the City’s detective work was not only that the sealant was gradually destroying its river system but also that other asphalt sealants were far safer. More specifically, when the City investigated the market, it learned that there were other sealants that were vastly less toxic, identically effective, sold at the same price, and in some cases made by the same company. The EPA and the Consumer Product Safety Commission did nothing in response to this discovery, so the City of Austin passed an ordinance banning the use of the highly toxic variant of asphalt sealant. Home Depot followed the City’s lead and no longer carries the sealant on its shelves.
Green chemists tell me there are many stories like this one. The news is filled with examples of end products that should never have come to market if toxicity were factored into the equation. Corrosive hair permanents, toxic drywall, and carcinogenic air fresheners all replay the same theme – the market is glutted with duplicate products that are unnecessarily hazardous. Consumers can’t run toxicity tests on every product they buy, and if regulators don’t demand this testing and analyze it, ignorance, for the manufacturers, is bliss.
The regulatory statutes governing these products and the chemicals used to produce them do not require agencies to cull out these needlessly toxic products that would be outcompeted by greener alternatives. In fact, the design of our current regulatory statutes impedes agencies’ ability to find and regulate unnecessarily toxic products and chemicals. Under current laws, to ban a hazardous chemical that is both more toxic and less useful than a competitor, an agency must generally conduct a full-scale assessment of all of the risks of the chemical to man, the environment, and the workers who produce it, and balance those risks against the uses, sales, and other data about the chemical. The availability of safer products – used for the same purpose – is arguably beside the point under our current regulatory program unless the agency decides that the chemical needs to be banned, or as one court put it, subjected to the “death penalty.”
While most of these disappointing statutes focus on the regulation of end products, one statute – the Toxic Substances Control Act (TSCA) – actually addresses the underlying, individual chemical ingredients themselves. The hope is that by eliminating unreasonably unsafe chemicals, we can improve many of the end products. The problem is that TSCA is similarly weak: the EPA must first prove that a chemical is unsafe, rather than chemical manufacturers first demonstrating that their products are safe and non-toxic. The Agency must prove that a chemical poses an “unreasonable risk” before regulating it at all. In effect, public health takes a back seat to the imperatives of industry, and greener chemicals are often overlooked as marketable solutions.
The Obama Administration’s newly released science policy memo is an important and largely positive development in the effort to protect science and scientists from politics. In particular, the policy takes aim at many of the abuses of science and scientists that defined the Bush era. It’s particularly encouraging, for example, that the policy calls on political appointees to take a hands-off approach to science.
That said, in several areas the policy could have, and should have, gone further. The tension between science and politics predates the Bush Administration, and systemic reforms are long overdue. The Obama Administration’s science policy memo was an opportunity to address these issues, but it focused primarily on fixing problems inherited from the Bush Administration.
The memo, issued by John Holdren, Director of the White House Office of Science and Technology Policy (OSTP), does not address the permissive approach many agencies have used in their reliance on privately produced science to formulate federal regulations. Private science, generally produced by regulated parties, often involves an inherent conflict of interest. Public access to underlying data and methods of privately produced science is also limited and can sometimes be completely unavailable. Yet the memo focuses on the science produced within the agencies, and not the science that agencies use more generally to develop regulations. Since private science is often the primary if not the exclusive basis for federal rulemakings in many important legislative areas, the memo avoids tackling a serious, systemic problem in the agencies’ use of science that should have at least been acknowledged, if not addressed. Ironically, in fact, the memo implies that government science lacks full credibility and adequate peer review, despite the fact that government science has strong safeguards in these areas, compared to private science.
There is plenty of environmental despair right now . . . spreading oil in the Gulf, legislative inaction on climate change and a host of other issues, and the sense that for every step forward, there is a special interest that will take the nation two steps back.
So, in this downward spiral of disappointments, is there any ray of hope? Rena Steinzor and Sidney Shapiro hit upon one promising possibility in their important new book, The People's Agents and the Battle to Protect the American Public: Special Interests, Government, and Threats to Health, Safety, and the Environment. After cataloging the sorry state of the regulatory institutions tasked with protecting health and the environment, the authors offer innovative suggestions for a set of positive metrics that not only help hold agencies publicly accountable, but also reward agencies for acting proactively. An added, invaluable attribute of these positive metrics is that they can be implemented without additional funding or substantive legislation. Unlike the 1993 Government Performance and Results Act and other efforts to devise benchmarks, moreover, Steinzor and Shapiro’s positive-metrics proposal focuses on accessible policy goals, clear measures of goal accomplishment, and a comprehensive diagnosis by the agency when a goal is not met. That diagnosis could identify causes of agency failure such as statutory mandates and judicial opinions, as well as internal agency handicaps like resources and staffing. For those of us on the sidelines witnessing a succession of regulatory problems and sensing a future of institutional drift, the notion of grounding agency performance in publicly accessible metrics is sheer genius.
After lying dormant for decades, industries’ abuse of EPA’s permissive confidential business information (CBI) program is finally getting some serious attention. An investigation in the Milwaukee Journal Sentinel, and more recently articles in the Washington Post and Risk Policy Report; a report by the Environmental Working Group; and posts by Richard Denison at EDF, are turning the tide. Those of us at CPR who have spilled ink on various CBI problems over the years (i.e., Mary Lyndon, Tom McGarity, Sid Shapiro, Rena Steinzor, and me) are thrilled to witness how these journalists and environmental watchdogs have finally managed to budge EPA on its contemptible program.
One document that has been referenced in several recent reports, but that I think deserves further attention, is an extensive empirical study of EPA’s CBI program by a consultant, Hampshire Associates. EPA commissioned this study in 1992 to evaluate whether EPA’s CBI program was in need of reform. Hampshire Associates documented extensive abuse of the CBI program by regulated industry, particularly in regulatory programs in which EPA does not require any justification for a CBI claim. The report is particularly relevant to current debates because virtually nothing has changed in EPA’s CBI policies since 1992. Continued evidence of CBI overclaiming over the years (see pages 129-35 and 146-47 in this article, where David Michaels and I argue that EPA's CBI program is far too lax) suggests that, if anything, abuse of the CBI privilege may be getting worse, rather than better, since the Hampshire Associates study.
On Wednesday, the Bipartisan Policy Center's Science for Policy Project released its report (press release, full report) on the use of science in regulation-making. I was on the panel and thus am a bit biased, but I think the report makes a terrific contribution. It significantly narrows the range of positions that can be credibly debated about the appropriate level of oversight needed to ensure the quality of regulatory science. At the same time, it introduces some important new ideas for improving science policy, like creating incentives for scientists to provide stronger peer review. In the process of finalizing the report, we all had to make some concessions. Rather than feeling that the resulting recommendations were of the lowest-common-denominator type, however, I believe the entire panel felt that the report contains a lot of specific details that, if implemented, would be dramatic improvements on the status quo. Hopefully the report will be useful to OSTP’s work, and other highly respected groups, like CPR, will agree with many of the recommendations.