Peer review


Peer review is the evaluation of work by one or more people with competencies similar to those of the producers of the work. It functions as a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Peer review can be categorized by the type of activity and by the field or profession in which the activity occurs, e.g., medical peer review.

Professional

Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. In academia, peer review is used to inform decisions related to faculty advancement and tenure. Henry Oldenburg was a German-born British philosopher who is seen as the 'father' of modern scientific peer review.
A prototype professional peer-review process was recommended in the Ethics of the Physician written by Ishāq ibn ʻAlī al-Ruhāwī. He stated that a visiting physician had to make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the notes of the physician were examined by a local medical council of other physicians, who would decide whether the treatment had met the required standards of medical care.
Professional peer review is common in the field of health care, where it is usually called clinical peer review. Further, since peer review activity is commonly segmented by clinical discipline, there is also physician peer review, nursing peer review, dentistry peer review, etc. Many other professional fields have some level of peer review process: accounting, law, engineering, aviation, and even forest fire management.
Peer review is used in education to achieve certain learning objectives, particularly as a tool to reach higher-order processes in the affective and cognitive domains as defined by Bloom's taxonomy. This may take a variety of forms, including closely mimicking the scholarly peer review processes used in science and medicine.

Scholarly

Government policy

The European Union has been using peer review in the "Open Method of Co-ordination" of policies in the field of active labour market policy since 1999. In 2004, a program of peer reviews started in the field of social inclusion. Each program sponsors about eight peer review meetings each year, in which a "host country" lays a given policy or initiative open to examination by half a dozen other countries and the relevant European-level NGOs. These meetings usually take place over two days and include visits to local sites where the policy can be seen in operation. The meeting is preceded by the compilation of an expert report on which participating "peer countries" submit comments. The results are published on the web.
The United Nations Economic Commission for Europe, through UNECE Environmental Performance Reviews, uses peer review, referred to as "peer learning", to evaluate progress made by its member countries in improving their environmental policies.
The State of California is the only U.S. state to mandate scientific peer review. In 1997, the Governor of California signed into law Senate Bill 1320, Chapter 295, Statutes of 1997, which mandates that, before any CalEPA Board, Department, or Office adopts a final version of a rule-making, the scientific findings, conclusions, and assumptions on which the proposed rule is based must be submitted for independent external scientific peer review. This requirement is incorporated into the California Health and Safety Code Section 57004.

Medical

Medical peer review may be divided into four categories: 1) clinical peer review; 2) peer evaluation of clinical teaching skills for both physicians and nurses; 3) scientific peer review of journal articles; and 4) a secondary round of peer review for the clinical value of articles concurrently published in medical journals.
Additionally, "medical peer review" has been used by the American Medical Association to refer not only to the process of improving quality and safety in health care organizations, but also to the process of rating clinical behavior or compliance with professional society membership standards. Thus, the terminology has poor standardization and specificity, particularly as a database search term.

Technical

In engineering, technical peer review is a type of engineering review. It is a well-defined process for finding and fixing defects, conducted by a team of peers with assigned roles. Technical peer reviews are carried out by peers representing the areas of the life cycle affected by the material being reviewed, and are held within development phases, between milestone reviews, on completed products or completed portions of products.

Criticism

To an outsider, the anonymous, pre-publication peer review process is opaque. Certain journals are accused of not carrying out stringent peer review in order to more easily expand their customer base, particularly journals in which authors pay a fee before publication. Richard Smith, MD, former editor of the British Medical Journal, has claimed that peer review is "ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant". Several studies have shown that peer review is biased against the provincial and against researchers from low- and middle-income countries. Many journals take months or even years to publish, and the process wastes researchers' time. As for the cost, the Research Information Network estimated the global cost of peer review at £1.9 billion in 2008.
In addition, Australia's Innovative Research Universities group has found that "peer review disadvantages researchers in their early careers, when they rely on competitive grants to cover their salaries, and when unsuccessful funding applications often mark the end of a research idea".

Low-end distinctions in articles understandable to all peers

Ioannidis argues that because the exams and other tests that people pass on their way from "layman" to "expert" focus on answering questions within the allotted time and in accordance with a list of answers, and not on making precise distinctions, there is as much individual variation in the ability to distinguish causation from correlation among "experts" as there is among "laymen". As a result, he argues, scholarly peer review by many "experts" passes only articles that are understandable at a wide range of cognitive precision levels, including very low ones. This biases publication towards articles that infer causation from correlation, while articles that do make the distinction are mislabelled as "incompetent overestimation of one's ability" on the part of the authors, because some of the reviewing "experts" are cognitively unable to tell the distinction apart from alleged rationalization of specific conclusions. Ioannidis argues that this makes peer review a cause of the selective publication of false research findings and of the suppression of rigorous criticism of them, and that post-publication review repeats the same bias by selectively retracting the few rigorous articles that make it through initial pre-publication peer review while letting the low-end ones that confuse correlation and causation remain in print.

Peer review and trust

Researchers have peer reviewed manuscripts prior to publishing them in a variety of ways since the 18th century. The main goal of this practice is to improve the relevance and accuracy of scientific discussions. Even though experts often criticize peer review for a number of reasons, the process is still often considered the "gold standard" of science. Occasionally, however, peer review approves studies that are later found to be wrong, and deceptive or fraudulent results are rarely discovered prior to publication. Thus, there seems to be an element of discord between the ideology behind peer review and its practice. By failing to effectively communicate that peer review is imperfect, the message conveyed to the wider public is that studies published in peer-reviewed journals are "true" and that peer review protects the literature from flawed science. A number of well-established criticisms exist of many elements of peer review. In the following, we describe cases of the wider impact inappropriate peer review can have on public understanding of the scientific literature.
There are multiple examples, across several areas of science, of scientists elevating the importance of peer review for research that was questionable or corrupted. For example, climate change deniers have published studies in the journal Energy and Environment, attempting to undermine the body of research that shows how human activity impacts the Earth's climate. Politicians in the United States who reject the established science of climate change have then cited this journal on several occasions in speeches and reports.
At times, peer review has been exposed as a process that was orchestrated for a preconceived outcome. The New York Times gained access to confidential peer review documents for studies sponsored by the National Football League that were cited as scientific evidence that brain injuries do not cause long-term harm to its players. During the peer review process, the authors of the studies stated that all NFL players were part of a study, a claim that the reporters found to be false by examining the database used for the research. Furthermore, The Times noted that the NFL sought to legitimize the studies' methods and conclusions by citing a "rigorous, confidential peer-review process", despite evidence that some peer reviewers seemed "desperate" to stop their publication. Recent research has also demonstrated that widespread industry funding for published medical research often goes undeclared and that such conflicts of interest are not appropriately addressed by peer review.
Another problem that peer review fails to catch is ghostwriting, a process by which companies draft articles for academics who then publish them in journals, sometimes with little or no changes. These studies can then be used for political, regulatory and marketing purposes. In 2010, the US Senate Finance Committee released a report that found this practice was widespread, that it corrupted the scientific literature and increased prescription rates. Ghostwritten articles have appeared in dozens of journals, involving professors at several universities.
Just as experts in a particular field have a better understanding of the value of papers published in their area, scientists are considered to have a better grasp of the value of published papers than the general public, and to see peer review as a human process, with human failings: "despite its limitations, we need it. It is all we have, and it is hard to imagine how we would get along without it". But these subtleties are lost on the general public, who are often misled into thinking that publication in a peer-reviewed journal is the "gold standard" and who may erroneously equate published research with the truth. Thus, more care must be taken over how peer review, and the results of peer-reviewed research, are communicated to non-specialist audiences, particularly during a time in which a range of technical changes and a deeper appreciation of the complexities of peer review are emerging. This will be needed as the scholarly publishing system confronts wider issues such as retractions and the replication or reproducibility "crisis".

Views of peer review

Peer review is often considered integral to scientific discourse in one form or another. Its gatekeeping role is supposed to be necessary to maintain the quality of the scientific literature and to avoid risks such as unreliable results, an inability to separate signal from noise, and slowed scientific progress.
Shortcomings of peer review have been met with calls for even stronger filtering and more gatekeeping. A common argument in favor of such initiatives is the belief that this filter is needed to maintain the integrity of the scientific literature.
Calls for more oversight have at least two implications that run counter to what is known about true scholarship.
  1. The belief that scholars are incapable of evaluating the quality of work on their own, that they are in need of a gatekeeper to inform them of what is good and what is not.
  2. The belief that scholars need a "guardian" to make sure they are doing good work.
Others argue that authors most of all have a vested interest in the quality of a particular piece of work. Only the authors could have, as Feynman puts it, the "extra type of integrity that is beyond not lying, but bending over backwards to show how you're maybe wrong, that you ought to have when acting as a scientist." If anything, the current peer review process and academic system could penalize, or at least fail to incentivize, such integrity.
Instead, the credibility conferred by the "peer-reviewed" label could diminish what Feynman calls the culture of doubt necessary for science to operate as a self-correcting, truth-seeking process. The effects of this can be seen in the ongoing replication crisis, hoaxes, and widespread outrage over the inefficacy of the current system. It is common to think that more oversight is the answer, yet peer reviewers are not at all lacking in skepticism. The issue is not the skepticism shared by the select few who determine whether an article passes through the filter; it is the validation, and accompanying lack of skepticism, that comes afterwards. Here again, more oversight only adds to the impression that peer review ensures quality, thereby further diminishing the culture of doubt and counteracting the spirit of scientific inquiry.
Quality research, including some of our most fundamental scientific discoveries, dates back centuries, long before peer review took its current form. Whatever peer review existed centuries ago took a different form than it does in modern times, without the influence of large commercial publishing companies or a pervasive culture of publish or perish. Though in its initial conception it was often a laborious and time-consuming task, researchers took peer review on nonetheless, not out of obligation but out of a duty to uphold the integrity of their own scholarship. They managed to do so, for the most part, without the aid of centralised journals, editors, or any formalised or institutionalised process whatsoever. Supporters of modern technology argue that it makes it possible to communicate instantaneously with scholars around the globe, to make such scholarly exchanges easier, and to restore peer review to a purer scholarly form: a discourse in which researchers engage with one another to better clarify, understand, and communicate their insights.
Such modern technology includes posting results to preprint servers, preregistration of studies, open peer review, and other open science practices. In all these initiatives, the role of gatekeeping remains prominent, as if it were a necessary feature of all scholarly communication, but critics argue that a proper, real-world implementation could test and disprove this assumption, demonstrate researchers' desire for more than traditional journals can offer, and show that researchers can be entrusted to perform their own quality control independent of journal-coupled review. Jon Tennant also argues that the outcry over the inefficiencies of traditional journals centers on their inability to provide rigorous enough scrutiny and on the outsourcing of critical thinking to a concealed and poorly understood process. Thus, the assumption that journals and peer review are required to protect scientific integrity seems to undermine the very foundations of scholarly inquiry.
To test the hypothesis that filtering is indeed unnecessary for quality control, many of the traditional publication practices would need to be redesigned, editorial boards repurposed if not disbanded, and authors granted control over the peer review of their own work. Putting authors in charge of their own peer review is seen as serving a dual purpose. On one hand, it removes the conferral of quality within the traditional system, thus eliminating the prestige associated with the simple act of publishing. Perhaps paradoxically, the removal of this barrier might actually result in an increase in the quality of published work, as it eliminates the cachet of publishing for its own sake. On the other hand, readers know that there is no filter, so they must interpret anything they read with a healthy dose of skepticism, thereby naturally restoring the culture of doubt to scientific practice.
In addition to concerns about the quality of work produced by well-meaning researchers, there are concerns that a truly open system would allow the literature to be populated with junk and propaganda by those with a vested interest in certain issues. A counterargument is that the conventional model of peer review diminishes the healthy skepticism that is a hallmark of scientific inquiry, and thus confers credibility upon subversive attempts to infiltrate the literature. Allowing such "junk" to be published could make individual articles less reliable but render the overall literature more robust by fostering a "culture of doubt".
One initiative experimenting in this area is Researchers.One, a non-profit peer review publication platform featuring a novel author-driven peer review process. Other similar examples include the Self-Journal of Science, PRElights, and The Winnower, which do not yet seem to have greatly disrupted the traditional peer review workflow. Supporters conclude that researchers are more than responsible and competent enough to ensure their own quality control; they just need the means and the authority to do so.