Scholarly peer review

For broader coverage of this topic, see Peer review.

Scholarly peer review (also known as refereeing) is the process of subjecting an author's scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal or as a book. Peer review helps the publisher (that is, the editor-in-chief or the editorial board) decide whether the work should be accepted, considered acceptable with revisions, or rejected. Peer review requires a community of experts in a given (and often narrowly defined) field who are qualified and able to perform reasonably impartial review. Impartial review, especially of work in less narrowly defined or interdisciplinary fields, may be difficult to accomplish, and the significance (good or bad) of an idea may never be widely appreciated among its contemporaries. Peer review is generally considered necessary for academic quality and is used in most major scientific journals, but it by no means prevents the publication of all invalid research. Traditionally, peer reviewers have been anonymous, but there is now a significant amount of open peer review, in which the comments are visible to readers, generally with the identities of the peer reviewers disclosed as well.

History

The first recorded instance of editorial pre-publication peer review dates from 1665 and is attributed to Henry Oldenburg, the founding editor of Philosophical Transactions of the Royal Society at the Royal Society of London.[1][2][3]

The first peer-reviewed publication might have been the Medical Essays and Observations published by the Royal Society of Edinburgh in 1731. The present-day peer-review system evolved from this 18th-century process,[4] began to involve external reviewers in the mid-19th century,[5] and did not become commonplace until the mid-20th century.[6]

Peer review became a touchstone of the scientific method, but until the end of the 19th century it was performed only by an editor-in-chief or an editorial committee.[7][8][9]

Editors of scientific journals made publication decisions without seeking outside input from an external panel of reviewers, giving established authors latitude in their journalistic discretion. For example, Albert Einstein's four revolutionary Annus Mirabilis papers in the 1905 issue of Annalen der Physik were peer-reviewed by the journal's editor-in-chief, Max Planck, and its co-editor, Wilhelm Wien, both future Nobel Prize winners and together experts on the topics of these papers. On another occasion, Einstein was severely critical of the external review process, saying that he had not authorized the editor-in-chief to show his manuscript "to specialists before it is printed", and informing him that he would "publish the paper elsewhere".[10] While some medical journals started to systematically appoint external reviewers, it is only since the middle of the 20th century that this practice has spread widely and that external reviewers have been given some visibility within academic journals, including being thanked by authors and editors.[7] A 2003 editorial in Nature stated that "in journals in those days, the burden of proof was generally on the opponents rather than the proponents of new ideas".[11] Nature itself instituted formal peer review only in 1967.[12]

In the 20th century, peer review also became common for science funding allocations. This process appears to have developed independently from that of editorial peer review.[1]:221 Gaudet[13] provides a social science view of the history of peer review, attending carefully to what is under investigation (here, journal peer review) rather than only to superficial or self-evident commonalities among inquisition, censorship, and journal peer review. It builds on historical research by Gould,[14] Biagioli,[15] Spier,[16] and Rip.[17] The first Peer Review Congress met in 1989. Over time, the fraction of papers devoted to peer review has steadily declined, suggesting that, as a field of sociological study, it has been replaced by more systematic studies of bias and errors.[18]

In parallel with 'common experience' definitions based on the study of peer review as a 'pre-constructed process', some social scientists have looked at peer review without considering it as pre-constructed. Hirschauer proposed that journal peer review can be understood as reciprocal accountability of judgements among peers.[19] Gaudet proposed that journal peer review could be understood as a social form of boundary judgement: determining what can be considered scientific (or not) against an overarching knowledge system, following predecessor forms of inquisition and censorship.[13]

Pragmatically, peer review refers to the work done during the screening of submitted manuscripts. This process encourages authors to meet the accepted standards of their discipline and reduces the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views. Publications that have not undergone peer review are likely to be regarded with suspicion by academic scholars and professionals. Non-peer-reviewed work contributes less, or not at all, to a scholar's academic credit as measured by metrics such as the h-index, although this depends heavily on the field.
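
The h-index itself has a simple formal definition: a scholar has index h if h of their publications have at least h citations each. The short Python sketch below computes it from a list of citation counts; the function name and the sample numbers are illustrative only, not taken from any bibliometric service.

    # Minimal sketch (illustrative names and numbers): the h-index is the
    # largest h such that h papers have at least h citations each.
    def h_index(citations):
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
    print(h_index([10, 8, 5, 4, 3]))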

Justification

It is difficult for authors and researchers, whether individually or in a team, to spot every mistake or flaw in a complicated piece of work. This is not necessarily a reflection on those concerned: with a new and perhaps eclectic subject, an opportunity for improvement may be more obvious to someone with special expertise or simply with a fresh eye. Therefore, showing work to others increases the probability that weaknesses will be identified and addressed. For both grant funding and publication in a scholarly journal, it is also normally a requirement that the subject is both novel and substantial.[20][21]

The decision whether or not to publish a scholarly article, or what should be modified before publication, ultimately lies with the publisher (editor-in-chief or the editorial board) to which the manuscript has been submitted. Similarly, the decision whether or not to fund a proposed project rests with an official of the funding agency. These individuals usually refer to the opinion of one or more reviewers in making their decision, primarily for the following reasons:

Reviewers are often anonymous and independent. However, some reviewers may choose to waive their anonymity, and in other limited circumstances, such as the examination of a formal complaint against the referee, or a court order, the reviewer's identity may have to be disclosed. Anonymity may be unilateral or reciprocal (single- or double-blinded reviewing).

Since reviewers are normally selected from experts in the fields discussed in the article, the process of peer review helps to keep some invalid or unsubstantiated claims out of the body of published research and knowledge. Scholars will read published articles outside their limited area of detailed expertise, and then rely, to some degree, on the peer-review process to have provided reliable and credible research that they can build upon for subsequent or related research. Significant scandal ensues when an author is found to have falsified the research included in an article, as other scholars, and the field of study itself, may have relied upon the invalid research.

For US universities, peer reviewing of books before publication is a requirement for full membership of the Association of American University Presses.[22]

Procedure

In the case of proposed publications, the publisher (editor-in-chief or the editorial board, often with the assistance of corresponding editors) sends advance copies of an author's work or ideas to researchers or scholars who are experts in the field (known as "referees" or "reviewers"), nowadays normally by e-mail or through a web-based manuscript processing system. Depending on the field of study and on the specific journal, there are usually one to three referees for a given article.[23]

These referees each return an evaluation of the work to the publisher, noting weaknesses or problems along with suggestions for improvement. Typically, most of the referees' comments are eventually seen by the author, though a referee can also send 'for your eyes only' comments to the publisher; scientific journals observe this convention almost universally. The publisher, usually familiar with the field of the manuscript (although typically not in as much depth as the referees, who are specialists), then weighs the referees' comments, their own opinion of the manuscript, and the scope of the journal or the level and readership of the book before passing a decision back to the author(s), usually with the referees' comments.[23]

Referees' evaluations usually include an explicit recommendation of what to do with the manuscript or proposal, often chosen from options provided by the journal or funding agency. Most recommendations are along the lines of the following: to accept the manuscript or proposal unconditionally; to accept it provided the authors improve it in certain ways; to reject it but encourage revision and invite resubmission; or to reject it outright.

During this process, the role of the referees is advisory. The publisher is typically under no obligation to accept the opinions of the referees,[24] though they most often will. Furthermore, the referees in scientific publication do not act as a group, do not communicate with each other, and typically are not aware of each other's identities or evaluations. Proponents argue that if the reviewers of a paper are unknown to each other, the publisher can more easily verify the objectivity of the reviews. There is usually no requirement that the referees reach consensus; instead, the decision is often made by the publisher based on their best judgement of the arguments. The group dynamics are thus substantially different from those of a jury.

In situations where multiple referees disagree substantially about the quality of a work, there are a number of strategies for reaching a decision. When a publisher receives very positive and very negative reviews for the same manuscript, the publisher will often solicit one or more additional reviews as a tie-breaker. As another strategy, the publisher may invite the authors to reply to a referee's criticisms and permit a compelling rebuttal to break the tie. If the publisher does not feel confident weighing the persuasiveness of a rebuttal, the publisher may solicit a response from the referee who made the original criticism. A publisher may convey communications back and forth between authors and a referee, in effect allowing them to debate a point. Even in these cases, however, publishers do not allow multiple referees to confer with each other, though each reviewer may often see earlier comments submitted by other reviewers. The goal of the process is explicitly not to reach consensus or to persuade anyone to change their opinions, but instead to provide material for an informed editorial decision. Some medical journals, usually following the open access model, have begun posting on the Internet the pre-publication history of each individual article, from the original submission to reviewers' reports, authors' comments, and revised manuscripts.
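
As a rough, illustrative model of the tie-breaking logic described above, and not any publisher's actual workflow, the Python sketch below aggregates independent referee recommendations and flags a split decision for an additional review or an author rebuttal; the recommendation labels and the aggregation rule are assumptions made for the example.

    # Toy model of the editorial tie-breaking described above; the categories
    # and the rule for a "split" are illustrative assumptions, not policy.
    POSITIVE = {"accept", "minor revision"}
    NEGATIVE = {"major revision", "reject"}

    def editorial_action(recommendations):
        # Referees advise independently and do not confer with each other.
        positive = sum(r in POSITIVE for r in recommendations)
        negative = sum(r in NEGATIVE for r in recommendations)
        if positive and negative:
            return "split decision: solicit an extra review or an author rebuttal"
        if negative:
            return "lean toward rejection or major revision"
        return "lean toward acceptance, possibly after minor revision"

    print(editorial_action(["accept", "reject"]))          # split decision
    print(editorial_action(["minor revision", "accept"]))  # lean toward acceptance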

Traditionally, reviewers would often remain anonymous to the authors, but this standard varies both with time and with academic field. In some academic fields, most journals offer the reviewer the option of remaining anonymous or not, or a referee may opt to sign a review, thereby relinquishing anonymity. Published papers sometimes contain, in the acknowledgments section, thanks to anonymous or named referees who helped improve the paper.

In some disciplines there exist refereed venues (such as conferences and workshops). To be admitted to speak, scholars and scientists must submit papers (generally short, often 15 pages or less) in advance. These papers are reviewed by a "program committee" (the equivalent of an editorial board), which generally requests inputs from referees. The hard deadlines set by the conferences tend to limit the options to either accepting or rejecting the paper.

Recruiting referees

At a journal or book publisher, the task of picking reviewers typically falls to an editor.[25] When a manuscript arrives, an editor solicits reviews from scholars or other experts who may or may not have already expressed a willingness to referee for that journal or book division. Granting agencies typically recruit a panel or committee of reviewers in advance of the arrival of applications.[26]

Referees are supposed to inform the editor of any conflicts of interest that might arise. Journals or individual editors may invite a manuscript's authors to name people whom they consider qualified to referee their work. For some journals this is a requirement of submission. Authors are sometimes also given the opportunity to name natural candidates who should be disqualified, in which case they may be asked to provide justification (typically expressed in terms of conflict of interest).

Editors solicit author input in selecting referees because academic writing typically is very specialized. Editors often oversee many specialties and cannot be experts in all of them. But after an editor selects referees from the pool of candidates, the editor typically is obliged not to disclose the referees' identities to the authors, and in scientific journals, to each other. Policies on such matters differ among academic disciplines. One difficulty with some manuscripts is that there may be few scholars who truly qualify as experts, people who have themselves done work similar to that under review. This can frustrate the goals of reviewer anonymity and avoidance of conflicts of interest. Low-prestige or local journals and granting agencies that award little money are especially handicapped with regard to recruiting experts.

A potential hindrance in recruiting referees is that they are usually not paid, largely because doing so would itself create a conflict of interest. Also, reviewing takes time away from their main activities, such as their own research. To the would-be recruiter's advantage, most potential referees are authors themselves, or at least readers, who know that the publication system requires that experts donate their time. Serving as a referee can even be a condition of a grant or of professional association membership.

Referees have the opportunity to prevent work that does not meet the standards of the field from being published, which is a position of some responsibility. Editors are at a special advantage in recruiting a scholar when they have overseen the publication of his or her work, or if the scholar is one who hopes to submit manuscripts to that editor's publishing entity in the future. Granting agencies, similarly, tend to seek referees among their present or former grantees.

Peerage of Science[27] is an independent service and a community where reviewer recruitment happens via Open Engagement: authors submit their manuscript to the service where it is made accessible for any non-affiliated scientist, and 'validated users' choose themselves what they want to review. The motivation to participate as a peer reviewer comes from a reputation system where the quality of the reviewing work is judged and scored by other users, and contributes to user profiles. Peerage of Science does not charge any fees to scientists, and does not pay peer reviewers. Participating publishers however pay to use the service, gaining access to all ongoing processes and the opportunity to make publishing offers to the authors.

With independent peer review services the author usually retains the right to the work throughout the peer review process, and may choose the most appropriate journal to submit the work to.[28][29] Peer review services may also provide advice or recommendations on most suitable journals for the work. Journals may still want to perform an independent peer review, without the potential conflict of interest that financial reimbursement may cause, or the risk that an author has contracted multiple peer review services but only presents the most favorable one.

An alternative or complementary system of performing peer review is for the author to pay for having it performed. An example of such a service provider is Rubriq, which assigns to each work peer reviewers who are financially compensated for their efforts.[30]

Different styles

Anonymous and attributed

For most scholarly publications, the identity of the reviewers is kept anonymised (also called "blind peer review"). The alternative, attributed peer review, involves revealing the identities of the reviewers. Some reviewers choose to waive their right to anonymity, even when the journal's default format is blind peer review.

In anonymous peer review, reviewers are known to the journal editor or conference organiser but their names are not given to the article's author. In some cases, the author's identity can also be anonymised for the review process, with identifying information stripped from the document before review. The system is intended to reduce or eliminate bias.[9]

Others support blind reviewing on the grounds that no research has suggested the methodology to be harmful and that the cost of facilitating such reviews is minimal.[31] Some experts have proposed blind review procedures for reviewing controversial research topics.[32]

In "double-blind" review, which has been fashioned by sociology journals in the 1950s [33] and remains more common in the social sciences and humanities than in the natural sciences, the identity of the authors is concealed from the reviewers, and vice versa, lest the knowledge of authorship or concern about disapprobation from the author bias their review.[34] Critics of the double-blind review process point out that, despite any editorial effort to ensure anonymity, the process often fails to do so, since certain approaches, methods, writing styles, notations, etc., point to a certain group of people in a research stream, and even to a particular person.[35][36]

In many fields of "big science", the publicly available operation schedules of major equipment, such as telescopes or synchrotrons, would make the authors' names obvious to anyone who cared to look them up. Proponents of double-blind review argue that it performs no worse than single-blind review, and that it generates a perception of fairness and equality in academic funding and publishing.[37] Single-blind review is strongly dependent upon the goodwill of the participants, but no more so than double-blind review with easily identified authors.

As an alternative to single-blind and double-blind review, authors and reviewers are encouraged to declare their conflicts of interest when the names of authors, and sometimes reviewers, are known to one another. When conflicts are reported, the conflicting reviewer can be prohibited from reviewing and discussing the manuscript, or their review can instead be interpreted with the reported conflict in mind; the latter option is more often adopted when the conflict of interest is mild, such as a previous professional connection or a distant family relation. The incentive for reviewers to declare their conflicts of interest is a matter of professional ethics and individual integrity. Even when the reviews are not public, they are still a matter of record, and the reviewer's credibility depends upon how they represent themselves among their peers. Some software engineering journals, such as the IEEE Transactions on Software Engineering, use non-blind reviews with reporting to editors of conflicts of interest by both authors and reviewers.

A more rigorous standard of accountability is known as an audit. Because reviewers are not paid, they cannot be expected to put as much time and effort into a review as an audit requires. Therefore, academic journals such as Science, organizations such as the American Geophysical Union, and agencies such as the National Institutes of Health and the National Science Foundation maintain and archive scientific data and methods in the event another researcher wishes to replicate or audit the research after publication.[38][39][40]

The traditional anonymous peer review has been criticized for its lack of accountability, the possibility of abuse by reviewers or by those who manage the peer review process (that is, journal editors),[41] its possible bias, and its inconsistency,[42] alongside other flaws.[43][44] Eugene Koonin, a senior investigator at the National Center for Biotechnology Information, asserts that the system has "well-known ills" and advocates "open peer review".[45]

Open peer review

Starting in the 1990s, several scientific journals (including the high-impact journal Nature in 2006) started experiments with hybrid peer review processes, allowing open peer review in parallel with the traditional model. The initial evidence on the effects of open peer review was mixed. Identifying reviewers to the authors did not negatively affect, and may have positively affected, the quality of reviews, the recommendation regarding publication, the tone of the review, and the time spent on reviewing; however, more of those invited to review declined to do so.[46][47] Informing reviewers that their signed reviews might be posted on the web and made available to the wider public did not have a negative impact on the quality of reviews or on recommendations regarding publication, but it led to more time spent on reviewing and a higher rate of decline among invited reviewers. The results suggest that open peer review is feasible and does not lead to poorer-quality reviews, but that these benefits need to be balanced against the increase in review time and the higher decline rates among invited reviewers.[48]

A number of reputable medical publishers have trialed the open peer review concept. The first open peer review trial was conducted by The Medical Journal of Australia (MJA) in cooperation with the University of Sydney Library, from March 1996 to June 1997. In that study 56 research articles accepted for publication in the MJA were published online together with the peer reviewers' comments; readers could email their comments and the authors could amend their articles further before print publication of the article.[49] The investigators concluded that the process had modest benefits for authors, editors and readers.

Pre- and post-publication

The process of peer review does not end after a paper completes the prepublication peer review process. After a paper is put to press, or published digitally, peer review continues as the publication is read. Readers will often send letters to the editor of a journal, or correspond with the editor via an on-line journal club. In this way, all 'peers' may offer review and critique of published literature. A variation on this theme is open peer commentary, in which journals solicit and publish non-anonymous commentaries on a "target paper" together with the paper and the original authors' reply as a matter of course. The introduction of the "epub ahead of print" practice in many journals has made possible the simultaneous publication of unsolicited letters to the editor together with the original paper in the print issue.

Some journals use postpublication peer review as their formal review method, instead of prepublication review. This was first introduced in 2001 by Atmospheric Chemistry and Physics (ACP).[50] More recently, F1000Research, ScienceOpen and The Winnower were launched as megajournals with postpublication review as their formal review method.[51][52][53] At both ACP and F1000Research, peer reviewers are formally invited, much as at prepublication review journals. Articles that pass peer review at those two journals are included in external scholarly databases.[54] In addition to journals hosting their own articles' reviews, there are also external, independent websites dedicated to post-publication peer review across entire fields, such as PubPeer, Publons, and JournalReview.org. The megajournals F1000Research, ScienceOpen and The Winnower openly publish both the identity of the reviewers and the reviewers' reports alongside the article.

In 2006, a small group of UK academic psychologists launched Philica, an instant online "journal of everything", to redress many of what they saw as the problems of traditional peer review. All submitted articles are published immediately and may be reviewed afterwards. Any researcher who wishes to review an article can do so, and reviews are anonymous. Reviews are displayed at the end of each article and are used to give the reader criticism or guidance about the work, rather than to decide whether it is published or not. This means that reviewers cannot suppress ideas they disagree with. Readers use reviews to guide their reading, and particularly popular or unpopular work is easy to identify.

Result-blind peer review

Studies that report a positive or statistically significant result are far more likely to be published than those that do not. A counter-measure to this positivity bias is to hide or withhold the results from reviewers, making journal acceptance more like the way scientific grant agencies review research proposals. Versions include:

  1. Result-blind peer review or "results blind peer review", first proposed in 1966: reviewers receive an edited version of the submitted paper which omits the results and conclusion sections.[55][56][57][58][59] In a two-stage version, first proposed in 1977, a second round of review or editorial judgment is based on the full version of the paper.[60]
    Conclusion-blind review, proposed by Robin Hanson in 2007, extends this further by asking authors to submit both a positive and a negative version of the paper; only after the journal has accepted the article do the authors reveal which version is real.[61]
  2. Pre-accepted articles, also called "outcome-unbiased journals", "early acceptance", "advance publication review", "registered reports", or "prior to results submission":[62][63][64][65][66][67][68] these extend study pre-registration to the point that journals accept or reject papers based on a version written before the results or conclusions exist (in effect an enlarged study protocol) that instead describes the theoretical justification, experimental design, and planned statistical analysis. Only once the proposed hypothesis and methodology have been accepted by reviewers do the authors collect the data or analyze previously collected data. A limited variant was The Lancet's study protocol review, which from 1997 to 2015 reviewed and published randomized trial protocols with a guarantee that the eventual paper would at least be sent out for peer review rather than rejected immediately.[69][70]

The following journals used result-blind peer review or pre-accepted articles:

Criticism

Various editors have expressed criticism of peer review.[82][83]

Allegations of bias and suppression

The interposition of editors and reviewers between authors and readers may enable the intermediators to act as gatekeepers.[84] Some sociologists of science argue that peer review makes the ability to publish susceptible to control by elites and to personal jealousy.[85][86] The peer review process may suppress dissent against "mainstream" theories[87][88][89] and may be biased against novelty.[90] Reviewers tend to be especially critical of conclusions that contradict their own views,[91][92] and lenient towards those that match them. At the same time, established scientists are more likely than others to be sought out as referees, particularly by high-prestige journals and publishers. As a result, ideas that harmonize with those of the established experts are more likely to see print, and to appear in premier journals, than are iconoclastic or revolutionary ones. This accords with Thomas Kuhn's well-known observations regarding scientific revolutions.[93] A theoretical model has been established whose simulations imply that peer review and over-competitive research funding foster mainstream opinion toward monopoly.[94]

Criticisms of traditional anonymous peer review allege that it lacks accountability, can lead to abuse by reviewers, and may be biased and inconsistent.[44][95][96]

Failures

Peer review fails when a peer-reviewed article contains fundamental errors that undermine at least one of its main conclusions and that could have been identified by more careful reviewers. Many journals have no procedure to deal with peer review failures beyond publishing letters to the editor.[97]

Peer review in scientific journals assumes that the article reviewed has been honestly prepared. The process occasionally detects fraud, but is not designed to do so.[98] When peer review fails and a paper is published with fraudulent or otherwise irreproducible data, the paper may be retracted.

A 1998 experiment on peer review with a fictitious manuscript found that peer reviewers failed to detect some manuscript errors and that the majority of reviewers may fail to notice when a paper's conclusions are unsupported by its results.[99]

Fake

There have been instances where peer review was claimed to be performed but in fact was not; this has been documented in some predatory open access journals (e.g., the Who's Afraid of Peer Review? affair) or in the case of sponsored Elsevier journals.

In November 2014, an article in Nature exposed that some academics were submitting fake contact details for recommended reviewers to journals, so that if the publisher contacted the recommended reviewer, it was in fact the original author reviewing their own work under a false name.[100] The Committee on Publication Ethics issued a statement warning of the fraudulent practice.[101] In March 2015, BioMed Central retracted 43 articles,[102] and in August 2015 Springer retracted 64 papers in 10 journals.[103]

Plagiarism

Reviewers generally lack access to raw data, but do see the full text of the manuscript, and are typically familiar with recent publications in the area. Thus, they are in a better position to detect plagiarism of prose than fraudulent data. A few cases of such textual plagiarism by historians, for instance, have been widely publicized.[104]

On the scientific side, a poll of 3,247 scientists funded by the U.S. National Institutes of Health found that 0.3% admitted faking data and 1.4% admitted plagiarism.[105] Additionally, 4.7% of respondents to the same poll admitted to self-plagiarism or autoplagiarism, in which an author republishes the same material, data, or text without citing their earlier work.[105]

Abuse of inside information by reviewers

A related form of professional misconduct is a reviewer using the not-yet-published information from a manuscript or grant application for personal or professional gain. The frequency with which this happens is unknown, but the United States Office of Research Integrity has sanctioned reviewers who have been caught exploiting knowledge they gained as reviewers. A possible defense for authors against this form of misconduct on the part of reviewers is to pre-publish their work in the form of a preprint or technical report on a public system such as arXiv. The preprint can later be used to establish priority, although preprints violate the stated policies of some journals.

Examples

Further information: SCIgen

Improvement efforts

Efforts to make fundamental improvements have ebbed and flowed since the late 1970s, when Rennie first systematically reviewed articles in thirty medical journals. According to Ana Marušić, "Nothing much has changed in 25 years". Mentorship has not been shown to have a positive effect. Worse, little evidence indicates that peer review, as presently performed, improves the quality of published papers.[18]

An extension of peer review beyond the date of publication is open peer commentary, whereby expert commentaries are solicited on published articles and the authors are encouraged to respond. It was first implemented by the anthropologist Sol Tax,[112] who founded the journal Current Anthropology, published by University of Chicago Press in 1959. The journal Behavioral and Brain Sciences, published by Cambridge University Press, was founded by Stevan Harnad in 1978[113] and modeled on Current Anthropology's open peer commentary feature.[114] Psycoloquy was founded in 1990[115] on the basis of the same feature, but this time implemented online.

In the summer of 2009, Kathleen Fitzpatrick explored open peer review and commentary in her book, Planned Obsolescence. Throughout the 2000s academic journals based solely on the concept of open peer review were launched, such as Philica.

Early era: 1996–2000

In 1996, the Journal of Interactive Media in Education[116] launched using open peer review.[117] Reviewers' names were made public; they were therefore accountable for their reviews, and their contributions were acknowledged. Authors had the right of reply, and other researchers had the chance to comment prior to publication. As of February 2013, the Journal of Interactive Media in Education stopped using open peer review.[118]

In 1997, the Electronic Transactions on Artificial Intelligence was launched as an open access journal by the European Coordinating Committee for Artificial Intelligence. This journal used a two-stage review process. In the first stage, papers that passed a quick screen by the editors were immediately published on the journal's discussion website for on-line public discussion during a period of at least three months, with the contributors' names made public except in exceptional cases. At the end of the discussion period, the authors were invited to submit a revised version of the article, and anonymous referees decided whether the revised manuscript would be accepted to the journal or not, but without any option for the referees to propose further changes. The last issue of this journal appeared in 2001.

In 1999, the open access journal Journal of Medical Internet Research[119] was launched, which from its inception decided to publish the names of the reviewers at the bottom of each published article. Also in 1999, the British Medical Journal[120] moved to an open peer review system, revealing reviewers' identities to the authors but not the readers,[121] and in 2000, the medical journals in the open access BMC series[122] published by BioMed Central, launched using open peer review. As with the BMJ, the reviewers' names are included on the peer review reports. In addition, if the article is published the reports are made available online as part of the 'pre-publication history'.

Several other journals published by the BMJ Group allow optional open peer review,[123] as does PLoS Medicine, published by the Public Library of Science.[124][125] The BMJ's Rapid Responses allows ongoing debate and criticism following publication.[126]

Recent era: 2001–present

Atmospheric Chemistry and Physics (ACP), an open access journal launched in 2001 by the European Geosciences Union, has a two-stage publication process.[50] In the first stage, papers that pass a quick screen by the editors are immediately published on the Atmospheric Chemistry and Physics Discussions (ACPD) website. They are then subject to interactive public discussion alongside formal peer review. Referees' comments (either anonymous or attributed), additional short comments by other members of the scientific community (which must be attributed) and the authors' replies are also published in ACPD. In the second stage, the peer-review process is completed and, if the article is formally accepted by the editors, the final revised papers are published in ACP. The success of this approach is shown by the ranking by Thomson Reuters of ACP as the top journal in the field of Meteorology & Atmospheric Sciences.[127]

In June 2006, Nature launched an experiment in parallel open peer review: some articles that had been submitted to the regular anonymous process were also available online for open, identified public comment. The results were less than encouraging – only 5% of authors agreed to participate in the experiment, and only 54% of those articles received comments.[128][129] The editors have suggested that researchers may have been too busy to take part and were reluctant to make their names public. The knowledge that articles were simultaneously being subjected to anonymous peer review may also have affected the uptake.

In February 2006, the journal Biology Direct was launched by BioMed Central, adding another alternative to the traditional model of peer review. If authors can find three members of the Editorial Board who will each return a report or will themselves solicit an external review, the article will be published. As with Philica, reviewers cannot suppress publication, but in contrast to Philica, no reviews are anonymous and no article is published without being reviewed. Authors have the opportunity to withdraw their article, to revise it in response to the reviews, or to publish it without revision. If the authors proceed with publication of their article despite critical comments, readers can clearly see any negative comments along with the names of the reviewers.[130] In the social sciences, there have been experiments with wiki-style, signed peer reviews, for example in an issue of the Shakespeare Quarterly.[131]

In 2010, the British Medical Journal began publishing signed reviewer's reports alongside accepted papers, after determining that telling reviewers that their signed reviews might be posted publicly did not significantly affect the quality of the reviews.[132]

In 2011, Peerage of Science, an independent peer review service, was launched with several non-traditional approaches to academic peer review. Most prominently, these include the judging and scoring of the accuracy and justifiability of peer reviews, and the concurrent usage of a single peer review round by several participating journals.

Starting in 2013 with the launch of F1000Research, some publishers have combined open peer review with postpublication peer review by using a versioned article system. At F1000Research, articles are published before review, and invited peer review reports (and reviewer names) are published with the article as they come in.[51] Author-revised versions of the article are then linked to the original. A similar postpublication review system with versioned articles is used by ScienceOpen and The Winnower, both launched in 2014.[52][53]

In 2014, Life implemented an open peer review system,[133] under which the peer-review reports and authors' responses are published as an integral part of the final version of each article.

Another form of "open peer review" is community-based pre-publication peer-review, where the review process is open for everybody to join.

References

  1. Rena Steinzor (July 24, 2006). "Rescuing Science from Politics". Cambridge University Press. p. 304. ISBN 0521855209.
  2. Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National Academy of Engineering, and Institute of Medicine. On Being a Scientist: A Guide to Responsible Conduct in Research. National Academies Press, Washington, D.C., 1995, 82 pages. ISBN 0309119707.
  3. The Origin of the Scientific Journal and the Process of Peer Review House of Commons Select Committee Report
  4. Benos, Dale J.; et al. (2007). "The Ups and Downs of Peer Review". Advances in Physiology Education. 31 (2): 145–152. doi:10.1152/advan.00104.2006. PMID 17562902. p. 145 – Scientific peer review has been defined as the evaluation of research findings for competence, significance, and originality by qualified experts. These peers act as sentinels on the road of scientific discovery and publication.
  5. Blow, Nathan S. (January 2015). "Benefits and Burdens of Peer-Review". BioTechniques (editorial). 58 (1). p. 5. doi:10.2144/000114242.
  6. "Benefits and Burdens of Peer-Review". From the Editor. BioTechniques. 58 (1). January 2015. p. 5.
  7. Pontille, David; Torny, Didier (2014). "From Manuscript Evaluation to Article Valuation: The Changing Technologies of Journal Peer Review". Human Studies. 38: 57. doi:10.1007/s10746-014-9335-z.
  8. Csiszar, Alex (2016-04-21). "Peer review: Troubled from the start". Nature. 532 (7599): 306–308. doi:10.1038/532306a.
  9. Spier, Ray (August 2002). "The history of the peer-review process". Trends in Biotechnology. 20 (8): 357–358. doi:10.1016/S0167-7799(02)01985-6.
  10. Kennefick, Daniel (September 2005). "Einstein Versus the Physical Review". Physics Today. 58 (9): 43–48. Bibcode:2005PhT....58i..43K. doi:10.1063/1.2117822.
  11. "Coping with peer rejection". Nature. 425 (6959): 645. October 16, 2003. Bibcode:2003Natur.425..645.. doi:10.1038/425645a. PMID 14562060.
  12. "History of the journal Nature: Timeline". Macmillan Publishers Limited. 2013. Retrieved 12 November 2013.
  13. Joanne Gaudet. Investigating journal peer review as scientific object of study: unabridged version – Part I.
  14. Gould, T.P.H. (2012). Do We Still Need Peer Review?. The Scarecrow Press.
  15. Biagioli, M. (2002). "From book censorship to academic peer review". Emergences. 12 (1): 11–45. doi:10.1080/1045722022000003435.
  16. Spier, R. (2002). "The history of the peer review process". TRENDS in Biotechnology. 20 (8): 357–358. doi:10.1016/S0167-7799(02)01985-6. PMID 12127284.
  17. Rip, A. (1985). "Commentary: Peer Review is Alive and Well in the United States". Science, Technology, and Human Values. 10 (3): 82–86. doi:10.1177/016224398501000310.
  18. Couzin-Frankel, J. (2013). "Secretive and Subjective, Peer Review Proves Resistant to Study". Science. 341 (6152): 1331. doi:10.1126/science.341.6152.1331. PMID 24052283.
  19. Hirschauer, S. (2010). "Editorial judgements: A praxeology of 'voting' in peer review". Social Studies of Science. 40 (1): 71–103. doi:10.1177/0306312709335405.
  20. "Peer Review Panels – Purpose and Process" (PDF). USDA Forest Service. February 6, 2006. Retrieved October 4, 2010.
  21. Sims Gerald K. (1989). "Student Peer Review in the Classroom: A Teaching and Grading Tool" (PDF). Journal of Agronomic Education. 18: 105–108. The review process was double-blind to provide anonymity for both authors and reviewers, but was otherwise handled in a fashion similar to that used by scientific journals
  22. "AAUP Membership Benefits and Eligibility". Association of American University Presses. Retrieved August 3, 2016.
  23. Benos, Dale J.; Kirk, Kevin L.; Hall, John E. (2003-06-01). "How to Review a Paper". Advances in Physiology Education. 27 (2): 47–52. doi:10.1152/advan.00057.2002. ISSN 1043-4046. PMID 12760840.
  24. "Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly work in Medical Journals.". ICMJE. 2014-12-16. Retrieved 2015-06-26.
  25. Lawrence O'Gorman (January 2008). "The (Frustrating) State of Peer Review" (PDF). IAPR Newsletter. 30 (1): 3–5.
  26. Schwartz, Samuel M.; Slater, Donald W.; Heydrick, Fred P.; Woolett, Gillian R. (September 1995). "A Report of the AIBS Peer-Review Process for the US Army's 1994 Breast Cancer Initiative". BioScience. 45 (8): 558–563. doi:10.1093/bioscience/45.8.558. JSTOR 1312702.
  27. "Better peer review - Peerage of Science". peerageofscience.org.
  28. Hames, Irene (2014). "The changing face of peer review". Science Editing. 1 (1): 9–12. doi:10.6087/kcse.2014.1.9. ISSN 2288-8063.
  29. Satyanarayana K (2013). "Journal publishing: the changing landscape". Indian J. Med. Res. 138: 4–7. PMC 3767268. PMID 24056548.
  30. Stemmle, Laura; Collier, Keith (2013). "RUBRIQ: tools, services, and software to improve peer review". Learned Publishing. 26 (4): 265–268. doi:10.1087/20130406. ISSN 0953-1513.
  31. J. Scott Armstrong (1982). "Barriers to Scientific Contributions: The Author's Formula" (PDF). Behavioral and Brain Sciences. 5 (2): 197–199. doi:10.1017/S0140525X00011201.
  32. J. Scott Armstrong (1982). "Research on Scientific Journals: Implications for Editors and Authors" (PDF). Journal of Forecasting. 1: 83–104. doi:10.1002/for.3980010109.
  33. Pontille, David; Torny, Didier (2014). "The Blind Shall See! The Question of Anonymity in Journal Peer Review". Ada (4). doi:10.7264/N3542KVW.
  34. Cressey, Daniel (2014). "Journals weigh up double-blind peer review". Nature News. doi:10.1038/nature.2014.15564. Retrieved 15 November 2014.
  35. "Double-blind peer review?". Nature. (subscription required (help)).
  36. "Editorial: Working double-blind". Nature. 451 (7179): 605–6. February 2008. Bibcode:2008Natur.451R.605.. doi:10.1038/451605b. PMID 18256621.
  37. Mainguy, G; Motamedi, MR; Mietchen, D (September 2005). "Peer review—the newcomers' perspective". PLoS Biol. 3 (9): e326. doi:10.1371/journal.pbio.0030326. PMC 1201308. PMID 16149851.
  38. "Policy on Referencing Data in and Archiving Data for AGU Publications". American Geophysical Union. 2012. Retrieved 2012-09-08. The following policy has been adopted for AGU publications in order to ensure that they can effectively and efficiently perform an expanded role in making the underlying data for articles available to researchers now and in the future.
    • This policy was first adopted by the AGU Publications Committee in November 1993 and then revised March 1994, December 1995, October 1996.
    • See also AGU Data Policy by Bill Cook. April 4, 2012.
  39. "Data Management & Sharing Frequently Asked Questions". National Science Foundation. November 30, 2010. Retrieved 2012-09-08.
  40. Reagan W. Moore, Arcot Rajasekar, Michael Wan (2006-03-13). "Data Grids, Digital Libraries, and Persistent Archives: An Integrated Approach to Sharing, Publishing, and Archiving Data" (PDF). Proceedings of the IEEE. 93: 578–588. doi:10.1109/JPROC.2004.842761. Retrieved 2014-02-02.
  41. Bingham C. Peer review and the ethics of internet publishing. In: Hudson Jones A, McLellan F, editors. Ethical Issues in Biomedical Publication. Baltimore: Johns Hopkins University Press, 2000: pages 85-111.
  42. Peter M. Rothwell, Christopher N. Martyn (2000). "Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone?". Brain. 123 (9): 1964–1969. doi:10.1093/brain/123.9.1964. PMID 10960059.
  43. "The Peer Review Process" (PDF). Retrieved 4 January 2012.
  44. Alison McCook (February 2006). "Is Peer Review Broken?". The Scientist.
  45. Koonin, Eugene (2006). "Reviving a culture of scientific debate". Nature. doi:10.1038/nature05005.
  46. Van Rooyen, S; Godlee, F; Evans, S; Black, N; Smith, R (1999). "Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial". BMJ. 318 (7175): 23–7. doi:10.1136/bmj.318.7175.23. PMC 27670. PMID 9872878.
  47. Walsh, Elizabeth; Rooney, Maeve; Appleby, Louis; Wilkinson, Greg (2000). "Open peer review: a randomised controlled trial". The British Journal of Psychiatry. 176 (1): 47–51. doi:10.1192/bjp.176.1.47. PMID 10789326.
  48. van Rooyen, Susan; Delamothe, Tony; Evans, Stephen J W (16 November 2010). "Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial.". British Medical Journal. 341: c5729. doi:10.1136/bmj.c5729.
  49. Bingham CM, Higgins G, Coleman R, Van Der Weyden M. "The Medical Journal of Australia internet peer review study". The Lancet 1998; 358: 441–445.
  50. Pöschl, Ulrich (2012). "Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation". Frontiers in Computational Neuroscience. 6. doi:10.3389/fncom.2012.00033.
  51. "Publish First, Ask Questions Later". Wired. July 23, 2013. Retrieved 2015-01-13.
  52. "The recipe for our (not so) secret Post-Publication Peer Review sauce!". December 8, 2014. Retrieved 2015-01-13.
  53. "The Winnower: An Interview with Josh Nicholson". December 8, 2014. Retrieved 2015-01-13.
  54. "F1000Research peer-reviewed articles now visible on PubMed and PubMed Central". STM Publishing News. December 12, 2013. Retrieved 2015-01-13.
  55. Experimenter Effects in Behavioral Research, Rosenthal 1966, as cited in Walster & Cleary 1970
  56. "Towards a reduction in publication bias", Newcombe 1987
  57. "Improving what is published: A model in search of an editor", Kupfersmid 1988
  58. "Review of publication bias in studies on publication bias: Here's a proposal for editors that may help reduce publication bias", Glymour & Kawachi 2005
  59. "A Two-Step Manuscript Submission Process Can Reduce Publication Bias", Smulders & Yvo 2013
  60. "Publication prejudices: An experimental study of confirmatory bias in the peer review system", Mahoney 1977: "One possible solution might be to ask referees to evaluate the relevance and methodology of an experiment without seeing either its results or their interpretation. While this might be a dramatic improvement, it raises other evaluative problems. How does one deal with the fact that referees may show very little agreement on these topics? Training them might produce better consensus, but consensus is not necessarily unprejudiced."
  61. "Conclusion-Blind Review", 16 January 2007; "Result Blind Review", 6 November 2010; "Who Wants Unbiased Journals?", 27 April 2012
  62. "A Proposal for a New Editorial Policy in the Social Sciences", Walster & Cleary 1970
  63. 1 2 3 "Peer Review for Journals: Evidence on Quality Control, Fairness, and Innovation", Armstrong 1997
  64. "Quality in Epidemiological Research: Should We Be Submitting Papers Before We Have the Results and Submitting More Hypothesis-Generating Research?", Lawlor 2007
  65. "Academic reforms: A four-part proposal", Brendan Nyhan, 16 April 2012; "More on pre-accepted academic articles", 27 April 2012; "Increasing the credibility of political science research: A proposal for journal reforms", Nyhan 2015 (preprint)
  66. "A Proposal for Increasing Evaluation in CS Research Publication", David Karger, 17 February 2011
  67. "It's the incentive structure, people! Why science reform must come from the granting agencies.", Chris Said, 17 April 2012
  68. "Registered reports: a new publishing initiative at Cortex", Chambers 2013; Cortex 2013 guidelines for reviews
  69. "Read it, understand it, believe it, use it: Principles and proposals for a more credible research publication", Green et al 2013, citing "Protocol Review"
  70. "Protocol review at The Lancet: 1997-2015", Editors 2015
  71. "Publishing Standards for Research in Forecasting (Editorial)", Armstrong et al 1986
  72. "Publication of Research on Controversial Topics: The Early Acceptance Procedure", Armstrong 1996
  73. "An Experiment in Publication: Advance Publication Review", Weiss 1989
  74. "Editorial Policies and Publication Bias: The Importance of Negative Studies", Sridharan & Greenland 2009
  75. "Registered Reports", OSF
  76. "Registered Reports: A step change in scientific publishing; Professor Chris Chambers, Registered Reports Editor of the Elsevier journal Cortex and one of the concept's founders, on how the initiative combats publication bias", Chambers, 13 November 2014
  77. "Instead of 'playing the game' it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond", Chambers et al 2014
  78. Nosek, Brian A; Lakens, Daniel (2014). "Registered Reports: A Method to Increase the Credibility of Published Results". Social Psychology. 45 (3): 137–141. doi:10.1027/1864-9335/a000192. Retrieved 20 November 2016.
  79. "Register your study as a new publication option", Science, 15 December 2015
  80. "Psychology's 'registration revolution': Moves to uphold transparency are not only making psychology more scientific - they are harnessing our knowledge of the mind to strengthen science", Guardian, 20 May 2014
  81. "Can Results-Free Review Reduce Publication Bias? The Results and Implications of a Pilot Study", Findley et al 2016
  82. Rennie, D; Flanagin, A; Smith, R; Smith, J (March 19, 2003). "Fifth International Congress on Peer Review and Biomedical Publication: Call for Research". JAMA. 289 (11): 1438. doi:10.1001/jama.289.11.1438.
  83. Horton, Richard (2000). "Genetically modified food: consternation, confusion, and crack-up". MJA. 172 (4): 148–9. PMID 10772580.
  84. Bradley, James V. (1981). "Pernicious Publication Practices". Bulletin of the Psychonomic Society. 18: 31–34. doi:10.3758/bf03333562.
  85. "British scientists exclude 'maverick' colleagues, says report" (2004) EurekAlert Public release date: August 16, 2004
  86. Higgs, Robert (May 7, 2007). "Peer Review, Publication in Top Journals, Scientific Consensus, and So Forth". Independent Institute. Retrieved April 9, 2012.
  87. Martin, Brian (1997). "Suppression Stories". Fund for Intellectual Dissent. Wollongong: Fund for Intellectual Dissent. ISBN 0-646-30349-X.
  88. See also Juan Miguel Campanario, "Rejecting Nobel class articles and resisting Nobel class discoveries", cited in Nature, October 16, 2003, Vol 425, Issue 6959, p.645
  89. Campanario, Juan Miguel; Martin, Brian (Fall 2004). "Challenging dominant physics paradigms". Journal of Scientific Exploration. 18 (3): 421–38. Bibcode:2008atcr.book...11C.
  90. Boudreau, K. J.; Guinan, E. C.; Lakhani, K. R.; Riedl, C. (8 January 2016). "Looking Across and Looking Beyond the Knowledge Frontier: Intellectual Distance, Novelty, and Resource Allocation in Science". Management Science. 62. doi:10.1287/mnsc.2015.2285.
  91. "Malice's Wonderland: Research Funding and Peer Review". Journal of Neurobiology. 14 (2): 95–112. 1983. doi:10.1002/neu.480140202. PMID 6842193. ... they may strongly resist a rival's hypothesis that challenges their own.
  92. Grimaldo, Francisco; Paolucci, Mario (14 March 2013). "A simulation of disagreement for control of rational cheating in peer review". Advances in Complex Systems. 16: 1350004. doi:10.1142/S0219525913500045.
  93. Petit-Zeman, Sophie (January 16, 2003). "Trial by peers comes up short". The Guardian.
  94. Fang, H. (2011). "Peer review and over-competitive research funding fostering mainstream opinion to monopoly". Scientometrics. 87 (2): 293–301. doi:10.1007/s11192-010-0323-4.
  95. Rothwell, P. M.; Martyn, CN (2000). "Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone?". Brain. 123 (9): 1964–9. doi:10.1093/brain/123.9.1964. PMID 10960059.
  96. "Jisc" (PDF). Jisc.
  97. Afifi, M. "Reviewing the "Letter-to-editor" section in the Bulletin of the World Health Organization, 2000–2004". Bulletin of the World Health Organization.
  98. "Peer review is not currently designed to detect deception, nor does it guarantee the validity of research findings." Lee, K. (2006). "Increasing accountability". Nature. doi:10.1038/nature05007.
  99. Baxt, W. G.; Waeckerle, J. F.; Berlin, J. A.; Callaham, M. L. (September 1998). "Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance". Annals of Emergency Medicine. 32 (3 Pt 1): 310–317. doi:10.1016/S0196-0644(98)70006-X. PMID 9737492.
  100. Ferguson, Cat; Marcus, Adam; Oransky, Ivan (2014). "Publishing: The peer-review scam". Nature. 515 (7528): 480–482. doi:10.1038/515480a. ISSN 0028-0836.
  101. "COPE statement on inappropriate manipulation of peer review processes". publicationethics.org.
  102. "Inappropriate manipulation of peer review". BioMed Central blog.
  103. Callaway, Ewen (2015). "Faked peer reviews prompt 64 retractions". Nature. doi:10.1038/nature.2015.18202. ISSN 1476-4687.
  104. "History News Network - Historians on the Hot Seat". hnn.us.
  105. Weiss, Rick. 2005. Many scientists admit to misconduct: Degrees of deception vary in poll. Washington Post. June 9, 2005. page A03.
  106. Michaels, David (2006). "Politicizing Peer Review: Scientific Perspective". In Wagner, Wendy; Steinzor, Rena. Rescuing Science from Politics: Regulation and the Distortion of Scientific Research. Cambridge University Press. p. 224. ISBN 978-0-521-85520-4.
  107. Soon, Willie; Sallie Baliunas (January 31, 2003). "Proxy climatic and environmental changes of the past 1000 years" (PDF). Climate Research. Inter-Research Science Center. 23: 89–110. doi:10.3354/cr023089.
  108. Tai, M. M. (1994). "A mathematical model for the determination of total area under glucose tolerance and other metabolic curves". Diabetes Care. 17 (2): 152–4. doi:10.2337/diacare.17.2.152. PMID 8137688.
  109. Knapp, Alex (2011). "Apparently, Calculus Was Invented In 1994". Forbes.
  110. Purgathofer, Werner. "Beware of VIDEA!". tuwien.ac.at. Technical University of Vienna. Retrieved April 29, 2014.
  111. Jackson, A. "Peer review – loopholes, hackers and scams". Australian Veterinary Association. Retrieved April 28, 2015.
  112. "Obituary: Sol Tax, Anthropology". Retrieved 2010-10-22.
  113. "Editorial". Behavioral and Brain Sciences. 1. doi:10.1017/S0140525X00059045. Retrieved 2010-10-22.
  114. New Scientist, 20 March 1980, p. 945
  115. Stevan Harnad (1991). "Post-Gutenberg Galaxy: The Fourth Revolution in the Means of Production of Knowledge". Public-Access Computer Systems Review. 2 (1): 39–53. Retrieved 2010-10-22.
  116. "Journal of Interactive Media in Education". Jime.open.ac.uk. Retrieved 4 January 2012.
  117. http://www-jime.open.ac.uk/about.html#lifecycle
  118. "Journal of Interactive Media in Education". open.ac.uk.
  119. "JMIR Home". Jmir.org. Retrieved 4 January 2012.
  120. "bmj.com: BMJ – Helping doctors make better decisions". BMJ. Retrieved 4 January 2012.
  121. "Opening up BMJ peer review – Smith 318 (7175): 4". BMJ. Retrieved 4 January 2012.
  122. "BMC series". Biomedcentral.com. Retrieved 4 January 2012.
  123. Smith, R. (1999). "Opening up BMJ peer review". BMJ, 318, 4–5.
  124. "Public Library of Science". Plos.org. 28 September 2011. Retrieved 4 January 2012.
  125. "PLoS Medicine: A Peer-Reviewed, Open-Access Journal". Journals.plos.org. 27 March 2009. doi:10.1371/journal.pmed.0030442. Retrieved 4 January 2012.
  126. Delamothe, Tony; Smith, Richard, eds. (May 18, 2002). "Twenty thousand conversations". 324 (7347). BMJ: 1171. doi:10.1136/bmj.324.7347.1171. Retrieved 4 January 2012.
  127. http://www.atmospheric-chemistry-and-physics.net/news_acp_jcr2007_attachment.pdf
  128. Nature. "Overview: Nature's trial of open peer review". Nature.com. Retrieved 4 January 2012.
  129. Nature (21 December 2006). "Peer review and fraud: Article". Nature. Retrieved 4 January 2012.
  130. http://www.biology-direct.com/info/about/
  131. Cohen, Patricia (August 23, 2010). "For Scholars, Web Changes Sacred Rite of Peer Review". The New York Times.
  132. van Rooyen, Susan; Delamothe, Tony; Evans, Stephen JW (2010). "Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial". British Medical Journal. 341: c5729. doi:10.1136/bmj.c5729.
  133. "Editorial". Retrieved 2014-06-29.
