Scientific Integrity in Public Policy
ACS Position Statement
Our nation faces a wide range of complex challenges requiring the timely and efficient formulation of public policy. Accurate and up-to-date scientific and technical information is critically important for developing many public policies. Policy decisions should be informed by people with a variety of skills and perspectives, including the relevant technical expertise.
The American Chemical Society (ACS) strongly supports the use of insightful, comprehensive scientific and engineering input to the development and evaluation of policy options. ACS also encourages the use of scientific integrity policies that help federal, state, and local governments obtain and integrate scientific assessments into policy development and implementation.
Scientific integrity—including the independence of the scientific process and the rigorous application of science-based knowledge—should be upheld throughout all levels of government. Scientists and engineers should provide comprehensive, transparent, unbiased, and understandable technical analyses. Policymakers should consider scientific analyses and relevant technical information in a comprehensive, transparent, and unbiased manner.
As noted in a recent report of the National Academies of Sciences, Engineering, and Medicine,
“The relationship between the research enterprise and the larger society, including policy makers and the public, has become deeper and more complex. Research is implicated in more policy areas with higher stakes, so as science is called upon to inform decision making there is more risk of research being invoked in controversies, misrepresented, or shaped to advance a desired political outcome, contributing to poor decision making and loss of public trust.”
To clarify and strengthen the role of science and the integrity of its use in development of public policy, ACS recommends the following:
Federal, State, and Local Governments
- Government agencies should regularly review and improve their procedures for obtaining and utilizing unbiased scientific and technical input for policy development.
- Government agencies should utilize scientific and technical advisory committees to guide programs. Advisory committees should encompass a diversity of technical expertise and opinion, drawing on recognized, credible experts in the field from all sectors. Committees should have sufficient diversity to reduce or eliminate conflict-of-interest concerns for any single member. Employer, professional, or political affiliations and prior policy positions should not preclude anyone from serving on advisory committees. Program leaders are ultimately responsible for weighing the advice of the committee, making decisions, and documenting rationales for the decisions made.
- Agencies should clearly and transparently identify what scientific information would be needed to inform their key regulatory issues, and develop frameworks to collect, evaluate, and use that information in a consistent and timely manner, while protecting intellectual property rights, confidential company information, and the privacy of personal information.
- Agencies that conduct or fund scientific research should establish and maintain scientific integrity policies that can ensure the objectivity, clarity, and reproducibility of the scientific information, and that provide protection against bias, fabrication, falsification, plagiarism, interference, and censorship.
- Legislative bodies should make use of transparent science, technology, and policy analyses performed by qualified professionals in creating effective legislation.
- Legislative committees should seek direct testimony from diverse technical experts on scientific and policy issues.
Scientific Processes and Procedures
- Scientific discourse should be encouraged; such discourse is purposely designed to question what is known and consider various scientific perspectives and interpretations.
- Government agencies should maintain clear conflict of interest policies. Potential conflicts of interest and bias among researchers and other experts involved in policy development and assessment should be handled transparently and fairly.
- Legislative hearings about the science used to inform the crafting of laws and regulatory decisions should be encouraged, because this open dialog will provide the best basis to identify the nature and certainty of knowledge about technical issues.
- Scientists and their institutions should not be burdened unreasonably by extensive or repetitive requests for information and explanation.
Data Quality, Use, Review and Preservation
- Government policy analysts should ensure that scientific input incorporates and references all relevant, peer-reviewed sources.
- Quantitative scientific input with careful uncertainty and sensitivity analyses should be the norm; a minimal illustration follows this list. Conflicting results should be documented and, to the extent possible, quantitatively assessed, evaluated, and reconciled by experts.
- Cross-agency communication is encouraged and should be as transparent as possible.
- Government agencies should have a policy for archiving, protecting, and providing access to scientific data and scientific databases. Science sits on a foundation of observations, tests, and analyses that are reproducible, repeated, and verifiable. Conclusions are strengthened by additional observations consistent with the hypothesis, and invalidated by contradictory observations. Preservation of data is critically important for strengthening conclusions, as is transparency about how data are both obtained and used.
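As an illustration of the uncertainty and sensitivity analyses recommended above, the following minimal Python sketch propagates input uncertainty through a toy exposure model by Monte Carlo sampling and then ranks each input by its rank correlation with the output. The model, parameter names, distributions, and units are invented for demonstration and are not drawn from any agency guidance.

```python
# Illustrative only: a minimal Monte Carlo uncertainty and sensitivity
# sketch for a hypothetical policy-relevant quantity. The model and all
# parameter values below are invented for demonstration.
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000

def exposure_estimate(emission_rate, dilution_factor, uptake_fraction):
    """Toy model: estimated dose = emissions, diluted, then absorbed."""
    return emission_rate / dilution_factor * uptake_fraction

# Propagate input uncertainty by sampling each parameter's distribution.
emission = rng.normal(loc=100.0, scale=10.0, size=N)            # kg/day
dilution = rng.lognormal(mean=np.log(50.0), sigma=0.3, size=N)  # unitless
uptake = rng.uniform(low=0.05, high=0.15, size=N)               # fraction

dose = exposure_estimate(emission, dilution, uptake)
lo, med, hi = np.percentile(dose, [2.5, 50, 97.5])
print(f"median {med:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")

# Crude sensitivity ranking: rank-correlate each input with the output.
for name, samples in [("emission", emission), ("dilution", dilution),
                      ("uptake", uptake)]:
    ranks_in = np.argsort(np.argsort(samples))   # ranks of the input
    ranks_out = np.argsort(np.argsort(dose))     # ranks of the output
    rho = np.corrcoef(ranks_in, ranks_out)[0, 1]
    print(f"{name}: Spearman rho ~ {rho:+.2f}")
```

Reporting the resulting interval alongside the point estimate, and documenting which inputs dominate the uncertainty, is the kind of quantitative transparency the recommendation calls for.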
Scientific Access and Advice
- Government employed or funded scientists and engineers should be empowered to pursue professional development, present their unclassified research at appropriate technical symposia, and publish in peer-reviewed journals without interference.
- Government scientists should be allowed to discuss their published, peer-reviewed research with the media and the public. When they comment publicly on policy options informed by their research and general technical knowledge, they should clearly state that they are offering their own opinions and not speaking for the government agency.
- When government agencies must prevent their employees, grantees, and/or advisors from commenting publicly on scientific results or policies, restrictions should be transparent and consistently enforced. Appeal processes should be easily available and timely.
Appendix
The content of this appendix is excerpted from The Keystone Center’s Improving the Use of Science in Regulatory Decision-Making: Dealing with Conflict of Interest and Bias in Scientific Advisory Panels, and Improving Systematic Scientific Reviews, A Report from the Research Integrity Roundtable, September 2012, pgs. 5 and 23-26.
TRANSPARENCY AND SELECTION OF SCIENTIFIC REVIEW PANELISTS
Two Central and Paradoxical Pressures
The work of the Roundtable has important implications for persons interested in issues associated with chemicals, energy, land use, natural resources, agriculture, pharmaceuticals, and other areas in which science informs public policy. All Roundtable participants share a common value in preparing this report: they are committed to ensuring the health, safety, and welfare of the public. The related tasks of populating science panels with diverse, highly qualified experts and of vetting an array of scientific studies must both balance transparency against the protection of legitimate personal or corporate interests.
First, for panel formation, a reasonable balance must be established between transparency and privacy. In the realm of qualifications, for example, how much personal information should be revealed to the public by a prospective panelist who may be willing to serve in an advisory capacity, but may not want every aspect of his or her personal life or financial status released to the public?
Second, in dealing with scientific studies, a balance must be struck, when developing and applying objective and transparent criteria for data relevance and reliability, between the desire for complete datasets and the reality that the relevant scientific literature is populated with studies from a wide variety of sources with varying degrees of data availability. In some cases, when proprietary information is involved, an appropriate balance must also be struck between the public’s right to know and the legally based need to protect proprietary formulas, production processes, and related intellectual property.
THE NATIONAL ACADEMIES’ DEFINITIONS OF CONFLICT OF INTEREST AND BIAS[i]
“Conflict of interest refers to: any financial or other interest which conflicts with the service of the individual because it could (1) significantly impair the individual’s objectivity or (2) create an unfair competitive advantage for a person or organization....
‘[C]onflict of interest’ means something more than individual bias. There must be an interest, ordinarily financial, that could be directly affected by the work....The term ‘conflict of interest’ applies not only to the personal financial interests of the individual but also to the interests of others with whom the individual has substantial common financial interests if these interests are relevant to the functions to be performed....[A]n individual should not serve as a member of a committee with respect to an activity in which a critical review and evaluation of the individual's own work, or that of his or her immediate employer, is the central purpose of the activity, because that would constitute a conflict of interest, although such an individual may provide relevant information to the program activity.
[B]ias ordinarily relate[s] to views stated or positions taken that are largely intellectually motivated or that arise from the close identification or association of an individual with a particular point of view or the positions or perspectives of a particular group. Potential sources of bias are not necessarily disqualifying for purposes of committee service. Indeed, it is often necessary, in order to ensure that a committee is fully competent, to appoint members in such a way as to represent a balance of potentially biasing backgrounds or professional or organizational perspectives.... Some potential sources of bias, however, may be so substantial that they preclude committee service (e.g., where one is totally committed to a particular point of view and unwilling, or reasonably perceived to be unwilling, to consider other perspectives or relevant evidence to the contrary).’”
SCIENTIFIC INTEGRITY
1. Credibility Assessment
The systematic review should delineate and document specific criteria for assessing the credibility of scientific studies. The criteria are then used to evaluate the relevant studies for credibility, eliminating those that do not pass a meaningful threshold and evaluating the relative credibility of the remaining studies.
The credibility assessment should rely on externally relevant criteria[ii] to the extent possible, to ensure the integrity and standing of the systematic review process in the eyes of the scientific community, stakeholders, and the public. In contrast to the relevance assessment, where the criteria may appropriately be revisited as new information becomes available, the criteria for credibility should generally remain stable throughout the assessment. Any changes made should be documented.
Some important elements of scientific credibility may include[iii]
- Whether the research objective and design are appropriate;
- Whether the hypothesis or questions (or in more open-ended studies, the approaches) under consideration are clearly stated and testable;
- Whether the study is reproducible, and whether the results have already been replicated;
- Whether the conduct of the study conforms to acceptable standards (e.g., methods used, sample size, time of exposure, Good Laboratory Practices (where applicable), etc.);
- Whether the analysis of data is reasonable, and clearly presented;
- Whether the extrapolations required can be reliably supported by the data; and
- Whether conclusions or applications are supported by the data.
Some important elements of non-scientific credibility may include
- Whether funding sources and other competing interests are disclosed;
- Whether the investigators’ own financial conflicts are disclosed;
- Whether the principal investigator has the freedom to publish, authority to analyze and interpret results, and control over study design.[iv] As the Bipartisan Policy Center (BPC) stated, for published studies, “Agencies and scientific advisory committees should be extremely skeptical of a scientific study unless they are sure that the principal investigator(s) (as opposed to the sponsor or funder) had ultimate control over the design and publication of the study.”[v] However, as Conrad and Becker point out, control of study design is not applicable in cases where the design of the study is determined in advance by explicit regulatory agency direction. For example, the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Toxic Substances Control Act (TSCA) require adherence to test guidelines that prescribe experimental study design elements, and the Organisation for Economic Co-operation and Development imposes similar requirements.[vi]
- Whether the study was reviewed independently (e.g., via peer review, or by an appropriate regulatory agency);
- In areas, such as pharmaceuticals, where a public registry of studies has been created, whether the study is (or key test elements are) posted in a relevant public registry;[vii] and
- Whether the data and methods were publicly released.
A study should not be stricken from consideration a priori because of its funding source, but the funding source is a relevant factor in assessing credibility. The systematic review should include establishing and documenting the funder’s involvement with any given study, and any restrictions placed on the study’s release.
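To make the documentation requirement concrete, the following minimal Python sketch encodes the criteria listed above as a structured checklist with an explicit record of ratings and rationales. The criterion names, the True/False/None rating scheme, and the 0.7 threshold are all assumptions chosen for illustration; a real systematic review would define and justify its own rubric in advance.

```python
# Illustrative only: criterion names and the passing threshold are
# assumptions drawn from the lists above, not an established standard.
from dataclasses import dataclass, field

SCIENTIFIC_CRITERIA = [
    "appropriate_design", "testable_hypothesis", "reproducible",
    "acceptable_conduct", "sound_analysis", "supported_extrapolation",
    "supported_conclusions",
]
NON_SCIENTIFIC_CRITERIA = [
    "funding_disclosed", "coi_disclosed", "pi_independence",
    "independent_review", "registry_posted", "data_released",
]

@dataclass
class StudyAssessment:
    study_id: str
    ratings: dict = field(default_factory=dict)  # criterion -> True/False/None
    notes: dict = field(default_factory=dict)    # criterion -> rationale text

    def score(self):
        """Fraction of evaluated criteria met; unrated (None) criteria are
        excluded from the score but remain visible in the record."""
        rated = [v for v in self.ratings.values() if v is not None]
        return sum(rated) / len(rated) if rated else 0.0

def new_assessment(study_id):
    """Start a record with every criterion explicitly unrated (None)."""
    ratings = {c: None for c in SCIENTIFIC_CRITERIA + NON_SCIENTIFIC_CRITERIA}
    return StudyAssessment(study_id=study_id, ratings=ratings)

def passes_threshold(assessment, threshold=0.7):
    # The 0.7 cutoff is arbitrary; a real review would set and justify
    # its own threshold in advance and document any changes.
    return assessment.score() >= threshold
```

Keeping the per-criterion ratings and rationale notes, rather than only a pass/fail result, preserves the documented record that the review process requires.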
2. Weighing Evidence and Drawing Conclusions
The review should delineate in advance how evidence will be weighed, and then document how the evidence is weighed to reach conclusions. Final documentation should articulate the level of uncertainty.
A structured, systematic and transparent framework should be used to assess the overall evidence. This involves an evaluation of the results of the studies from which scientific conclusions can be drawn, integrating the information and rating the strength of the total body of available evidence. Contradictory and negative evidence is also evaluated, weighed, and factored into the conclusion. The systematic review process is based upon the premise that the best understanding is derived not from any single study alone, but rather from the totality of evidence of the most credible studies.
Some important considerations in weighing evidence include
- An appropriate process for integrating each study type and assessing its credibility, with attention to utility, reliability, reproducibility, and consistency where possible;
- A transparent process for considering the number of the various types of studies and, where relevant, sample sizes; and
- The overall consistency of the total body of evidence.
Most well-accepted science is based on a multitude of studies, preferably confirmed by repetition and/or reproduction. Any one result may be suspect, but confidence rises if that result is independently replicated. Nevertheless, reproducibility is not practical or feasible for all types of studies and varies by field. While laboratory experiments should be repeatable, in other situations, such as ecological studies, replication may not be possible. Reproducibility is an important criterion but not the only criterion for weighing the evidence.
Similarly, confidence in the results of a study is increased when there is consistency of results across independent studies. Likewise, confidence is decreased when results are inconsistent across independent studies.
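As a toy illustration of these consistency considerations, the short Python sketch below checks directional agreement and spread across a set of invented effect estimates. The numbers are hypothetical, and a real weight-of-evidence integration would also account for study quality, design, and sample size.

```python
# Illustrative only: a toy consistency check across independent studies,
# using invented effect estimates (e.g., log relative risks).
import statistics

effects = [0.42, 0.38, 0.55, 0.47, -0.05]  # hypothetical study results

same_direction = sum(1 for e in effects if e > 0) / len(effects)
spread = statistics.stdev(effects)

print(f"{same_direction:.0%} of studies agree in direction")
print(f"between-study spread (SD): {spread:.2f}")
# High directional agreement and low spread increase confidence;
# the one contradictory result is documented and weighed, not discarded.
```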
3. Example: One Framework for Assessing the Totality of Evidence in a Systematic Review for Evaluating Hypotheses of Causality
A systematic review entails looking at the totality of all credible studies, including studies with negative results, taking into account the quality, strengths, and limitations of each study. The conclusions of a systematic review should be based not on any single study alone, but rather on the totality of the evidence; they should not be based on studies of poor or questionable quality. For some fields of science, particularly epidemiology and health risk assessment, the Bradford Hill criteria can be particularly useful here, as they provide a structured framework for analyzing a body of scientific evidence to evaluate hypotheses about causal relationships. Following analysis of the evidence against each of the criteria, the results are integrated to develop a more complete understanding of the extent to which the totality of the evidence does, or does not, support a hypothesis of cause and effect. This does not mean that all the Hill criteria need to be met to indicate causality, but rather that confidence in causality grows stronger as more criteria are met. Note, however, that the results of the evaluations using the criteria are not absolute proof for or against causation.
Aspects to Evaluate in a Systematic Review to Assist in a Determination of Causality
- Strength of association (relative risk, odds ratio)
- Consistency
- Specificity
- Temporal relationship (temporality): unlike the other criteria, this is not merely heuristic, since a cause must necessarily precede its effect
- Biological gradient (dose-response relationship)
- Plausibility (biological plausibility)
- Coherence
- Experiment (reversibility)
- Analogy (reasoning from similar, established cause-and-effect relationships)
“None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question — is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect.”[viii]
“All scientific work is incomplete — whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have or postpone the action that it appears to demand at a given time.”[ix]
4. Example of a Classification System for Weighing Evidence and Drawing Conclusions
The following example of a classification system is from the 2004 U.S. Surgeon General’s report on tobacco and disease, which provided a standardized four-level system for describing the strength of evidence; a sketch of one possible encoding follows the list.
Hierarchy for Classifying Strength of Causal Inferences on the Basis of Available Evidence
- Evidence is sufficient to infer a causal relationship.
- Evidence is suggestive but not sufficient to infer a causal relationship.
- Evidence is inadequate to infer the presence or absence of a causal relationship (evidence that is sparse, of poor quality, or conflicting).
- Evidence is suggestive of no causal relationship.[x]
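The following minimal Python sketch shows one possible way to encode this four-level hierarchy, together with a toy rule that maps a weighed body of evidence (for example, the count of Bradford Hill criteria met and the directional agreement across studies) to a level. The mapping rule and its thresholds are invented placeholders, not part of the Surgeon General’s report.

```python
# Illustrative only: the enum mirrors the four levels above; the
# classification rule and thresholds below are invented placeholders.
from enum import Enum

class CausalInference(Enum):
    SUFFICIENT = "Evidence is sufficient to infer a causal relationship."
    SUGGESTIVE = ("Evidence is suggestive but not sufficient to infer "
                  "a causal relationship.")
    INADEQUATE = ("Evidence is inadequate to infer the presence or absence "
                  "of a causal relationship.")
    SUGGESTIVE_OF_NONE = "Evidence is suggestive of no causal relationship."

def classify(criteria_met, criteria_total, directional_agreement):
    """Toy placeholder rule: more Hill criteria met plus higher agreement
    across independent studies -> stronger causal inference."""
    if criteria_total == 0:
        return CausalInference.INADEQUATE
    met = criteria_met / criteria_total
    if met >= 0.7 and directional_agreement >= 0.8:
        return CausalInference.SUFFICIENT
    if met >= 0.4:
        return CausalInference.SUGGESTIVE
    if directional_agreement <= 0.2:
        return CausalInference.SUGGESTIVE_OF_NONE
    return CausalInference.INADEQUATE

print(classify(criteria_met=6, criteria_total=9,
               directional_agreement=0.9).value)
```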
[i] The Keystone Report, citing The National Academies Policy on Committee Composition and Balance and Conflicts of Interest for Committees Used in the Development of Reports (2003), Pg. 3-5. http://www.nationalacademies.org/coi/bi-coi_form-0.pdf
[ii] Externally relevant criteria include generally accepted scientific practices and principles as described by authoritative sources.
[iii] Conrad, J. and Becker, R., Enhancing Credibility of Chemical Safety Studies: Emerging Consensus on Key Assessment Criteria. Environmental Health Perspectives, Jun. 2011; 119(6). http://ehp.niehs.nih.gov/1002737/
[iv] Conrad, J. and Becker, R. Pg. 760.
[v] Boehlert, S., et al. Science for Policy Project: Improving the Use of Science in Regulatory Policy. The Bipartisan Policy Center. (2009) Pg. 42. http://bipartisanpolicy.org/sites/default/files/BPC%20Science%20Report%20fnl.pdf
[vi] Conrad, J. and Becker, R. Pg. 760.
[vii] This consideration is aspirational for many fields. For example, such registries are not yet extant for the fields of toxicology and epidemiology.
[viii] Bradford Hill, A., “The Environment and Disease: Association or Causation?,” Proceedings of the Royal Society of Medicine, 58 (1965), 295-300. http://www.edwardtufte.com/tufte/hill
[ix] Ibid.
[x] Surgeon General’s Report. “Introduction and Approach to Causal Inference.” (2004) http://www.ncbi.nlm.nih.gov/books/NBK44698/