Background
To the casual observer, scientists might appear to be the group of participants with the most influence over public health and environmental policy. Demands that we use “sound science” to make decisions about whether to prevent potential risks are ubiquitous. A wide range of decision-makers in the legislative, executive, and judicial arenas have urged that scientists be elevated to the pinnacle of power, entrusted by the rest of us with the authority to resolve our most important and complex problems. Deference to scientists as the ultimate arbiters of policy resonates every time Congress debates such controversies, suggesting that lawmakers and those who work to affect their decisions have nothing but respect for the sanctity and wisdom of the scientific process and its results, wherever they may lead us.
The Issue
Why, then, do many scientists deployed at the front lines of the most heated disputes – over global warming, mercury in the human food chain, or the safety of common drugs – feel not like anointed and omniscient saviors, but instead like hunted prey? For all the lip service paid to the naïve idea that science has definitive answers, the moment a group of scientists announces a discovery that has significant economic implications for industry or some other affected group, the scientists in the spotlight soon learn to expect attacks, not deference.
Beset by scientific misconduct allegations or threatened with breach-of-contract lawsuits if research is published over a private sponsor’s objections, growing numbers of scientists find themselves struggling to maintain their credibility in a climate designed to deconstruct the smallest details of their research. So severe are these problems in some settings that the most reputable scientists warn that legally based harassment could deter the best and the brightest young scientists from entering the very disciplines that have the greatest potential to inform public affairs.
These events have the capacity to undermine scientific integrity to such an extent that we are deprived of the progress science could offer on a wide range of pressing social problems. When scientists cannot control their own research agendas because they are preoccupied with responding to subpoenas and data requests, when private funding comes only with long strings attached, and when scientists are sanctioned for communicating results that do not serve the interests of their sponsors, the core values that define science are threatened.
What People Are Fighting About
Scientists unfamiliar with the legal system generally assume that the path of their research from the laboratory to policy-makers is a straight and uncomplicated one. Research is published in a peer-reviewed journal so that it can be judged on the merits by knowledgeable colleagues. Well-designed studies with original discoveries can then play a significant role in formulating social policy, while studies with evidence of bias or unclear methodology are discounted. Scientists expect that when policy-makers are confronted with important questions regarding scientific evidence, they will utilize a “weight of the evidence” approach, viewing available data as a composite and reaching conclusions only after considering the strengths and weaknesses of all of the individual pieces of research. After all, judicial, legislative, and regulatory institutions have the same objectives as scientific institutions: improving social welfare. Rational use of research by policy-makers is one of the most promising ways to make sure that this overriding objective is achieved.
Unfortunately, rather than incorporating science into policy dispassionately and using research to further a quest for truth, the legal system makes most decisions through an adversarial process driven by affected parties who interpret and re-interpret the science to prove that they should “win.” This method of making decisions is largely alien to scientific practice and runs counter to the development of reliable research.
Several concurrent developments have accelerated and intensified the legal system’s distorting effect on science. The regulatory system has expanded dramatically, spawning a growing body of statutory and administrative law, as well as multiple agencies that regulate products, manufacturing processes, and waste disposal activities through thousands of separate requirements. Regulators look to science for guidance when they make difficult decisions regarding the stringency of public health and environmental protection. The more emphasis that regulators place on science, the greater the affected parties’ incentives to do what they can to control its content and production.
Equally dramatic is the expansion of liability for damages caused by defective products, including toxic chemicals. It is not uncommon for liability judgments to be in the millions of dollars for a single victim, and the science supporting plaintiffs’ alleged injuries is critical in determining whether they win or lose. The defendants have comparably strong incentives to bring pressure to bear on those producing such science.
Finally, the U.S. government continually fails to provide meaningful financial support to public research on health and the environment. Rather than increasing funding as environmental and health sciences grow in importance, public investment in badly needed research has been relatively flat for the past several decades. This dearth of research support is based in part on the hope that private parties will pick up the slack. Yet that expectation overlooks the powerful disincentives for manufacturers to test their products in the absence of a licensing system, such as the one we have for new drugs. As long as scientific information can be incriminating and can lead to costly liability and regulatory requirements, ignorance is bliss.
While each of these factors has a powerful effect on science, their synergism can prove overwhelming. The Information Age intensifies these effects in ways not imaginable a decade ago. With the advent of the World Wide Web, adverse information about a product circulating in commerce travels rapidly, prompting sharp fluctuations in markets and expanding liability for mistakes in amazingly short order. Because scientific data appear to have gained the legal power to end businesses and entire manufacturing sectors, isolated pieces of research can attract scrutiny more fierce than most researchers should be expected to withstand.
These trends and their complex interactions have multiplied the opportunities for destructive collisions between the worlds of law and science. Science is used and often misused in making regulatory, legislative, and judicial decisions. Scientists, with little or no preparation and often without their consent, are drawn into combat between warring advocates within the legal system. Too often, these scientists become lightning rods in battles involving clashes between powerful institutions, both public and private.
Four aspects of these clashes are the most worrisome. The first is legally backed efforts by special interests to silence scientists and discredit their research. A number of scientists who embark on research that suggests that industrial activities or products are more harmful than originally supposed have been exposed to unwarranted professional attacks against their scientific integrity and the validity of their research. These assaults fly in the face of an essential characteristic of scientific inquiry – honest, disinterested debate over methods and data.
Second, shortfalls in public funding of research, coupled with the absence of standardized testing requirements for many types of harms, combine to place the private sector at the helm of many of the most important research projects. When the stakes are high enough, private interests can commission research to suit their needs. Legal instruments, such as contractual clauses that bar scientists from publishing their findings without the private sponsor’s consent, make these arrangements enforceable. Despite widely publicized lamentation about the “kept university,” academic administrators and lawyers are often ill prepared to defend scientists enmeshed in such disputes.
Third, stakeholders and even government officials are able to manipulate scientific information to make it look like the decisive basis for policy when in truth the decisions necessarily hinge on choices made on the basis of moral values, including fairness, and the overall social good. Legal rules not only fail to discourage this practice, but actually encourage it, and in rarer cases, such as international trade treaties, may make it effectively mandatory. The resulting lack of accountability and transparency further distances attentive participants from the policy-making process and obscures the respective roles of science and policy in informing environmental and public health decisions.
Fourth, and perhaps most pernicious, is the deconstruction and “corpuscularization” of science. Science is particularly susceptible to deconstruction because scientists themselves believe in subjecting research to vigorous scrutiny. But legal processes invite warring interests to ignore the “weight of the evidence,” instead fragmenting research into individual slivers that are critiqued in isolation from the study as a whole. As a result, decision making that depends on these studies is mired in greater uncertainty than exists in the corresponding scientific debate.
CPR’s Perspective
If the past decade portends anything for the future, efforts to undermine valuable research and discredit researchers will continue to increase in number, vigor, and creativity. In just this decade, interest groups offended by scientific findings that their products are harmful have lobbied Congress, which in turn has responded by enacting a series of laws that make science still more contestable. The Data Access Act (also known as the Shelby Amendment) allows anyone to demand the disclosure of data, down to the level of laboratory notebooks, from scientists who receive federal funding for their research. No comparable requirement applies to privately sponsored research. The Data Quality Act (or, as it is sometimes called, the “Information Quality Act”) allows the losers in extended regulatory battles to reopen issues resolved by years of deliberation simply by contending that the data underlying such judgments are incorrect.
The courts have increased the opportunities for deconstructing science as well. In a famous opinion called Daubert v. Merrell Dow Pharmaceuticals, Inc., the Supreme Court told judges to screen the credibility of certain research, keeping unreliable science away from juries in some circumstances. This opinion creates an expanded mechanism for adversaries to dissect valid studies and urge their exclusion from the evidence presented at trial, placing the courts in the uncomfortable and undesirable role of arbiter of scientific credibility.
Last but not least, the Office of Management and Budget (OMB), which oversees regulatory policy on behalf of the president, issued a burdensome proposal that would have required the science supporting regulation to be peer reviewed by panels that could be biased. Virtually every reputable scientific organization in the country opposed this approach, and OMB was forced to scale it back dramatically.
To halt or at least slow these incursions, major changes are necessary, not only to the law itself but also to safeguard scientists’ ability to conduct research without interference. All science used by regulators should be subject to similar scrutiny. If the data underlying publicly funded studies must be made available, data underlying privately funded studies should be subject to the same disclosure. Penalties should be imposed for abuses of process, as when an interest group deconstructs science merely to obscure the facts and delay regulation. Scientists should be legally protected from harassment, such as unsupported scientific misconduct charges. Underlying these proposals are basic principles for good regulatory science practices. These principles identify rules of the road that should guide the creation and use of all science – whether publicly or privately funded.
Principles for Rescuing Science from Politics
Scientists must be able to conduct research without unjustified restrictions, including undue influence by research sponsors.
- Sponsors must never place restrictions on or otherwise influence the design or conduct of a study in an attempt to obtain results favorable to their interests.
- Research must never be suppressed because it produces results that are adverse to a sponsor or other interested party.
- No publication or summary of research should be influenced – in tone or content – by the sponsoring entity.
- If vested interests use the legal system to harass scientists whose research or expert testimony calls into question the safety of their practices or products, the harassers must be held accountable with sanctions and must compensate injured scientists for the resulting interference with their research and potential damage to their reputations.
Researchers and those using their research must be careful to represent their findings accurately, including the limitations of that research. The data and methods of research that inform regulatory decisions must be communicated honestly and expeditiously to the research community and broader public.
- Researchers and those using their data must be honest about the limits of the research and remaining uncertainties. If others misrepresent research to suggest an outcome not supported by the study, researchers must correct these misstatements as soon as they become aware of them.
- Research must never be dismissed or excluded because it does not provide a complete answer to a larger policy or science question. Research, by its nature, is incomplete, and to dismiss research because it does not provide a definitive answer could result in the exclusion of valuable science from regulatory decision making.
- The data and biomaterials underlying a published study, as well as a comprehensive description of the methods, must be available to other scientists and the public at large upon publication of the study or submission of the results to a federal agency, in compliance with prevailing rules for preserving the privacy of human research subjects. Regulatory agencies should rigorously review and challenge exaggerated claims that underlying data must be kept confidential for business reasons.
Government support of independent research is essential to produce discoveries that benefit the public good. In appropriate circumstances, peer review may play an important role in assisting the government’s decision making regarding the use and funding of science, but peer review must never be used to censor research.
- Legitimate scientific peer review does not encompass processes that enable outside stakeholders to pressure scientists to change their views in light of an anticipated policy outcome.
- Peer review should be conducted by a balanced group of reviewers who specialize in the area and have no present or past conflicts of interest likely to affect their review. Reviewers should disclose the limits of their expertise in assessing the research.
- Entities that select peer reviewers should disclose any financial conflicts of interest and affiliations or perspectives that may influence their choice of reviewers. The selection of reviewers must never be politicized.
- Much research that benefits the public good does not generate private compensation for its production. Generous public funding of research is essential for advancements in scientific knowledge, especially in areas where there are no private benefits to be gained from the discoveries.
- All research produced or used by the government should be subject to basic quality-assurance and quality-control checks, especially if that research is not published or disseminated widely within the scientific community.
- Public research monies should be allocated to researchers who can remain disinterested, with no financial stake in the outcome of the research.