Arunn at Unruled Notebook has a piece on what looks like the abuse of anonymous peer review, and another suggesting that peer review be made open. Take a look!
I think I’m a reasonably conscientious reviewer, but I’ve never tried to “[ascertain] the accuracy and validity” of an author’s references in my life. I just assume that either (a) they’re accurate, or (b) if more than a few are inaccurate, the author will get a flea in her ear when the proofing process commences (I’ve been on the receiving end of this). Nor do I imagine that a few iffy references in the bibliography would make me change my mind about the worth of a piece (perhaps if the biblio was obviously hopelessly incompetent, but I suspect that when this happens it is one of a multitude of sins, and bad referencing is likely to be the least of the author’s problems). But am I unique in this – do others scrupulously check the endnotes etc.? I suppose that this is hardly a matter of world historical significance, but I’ve always been fascinated by the details of the reviewing process.
The only time I reviewed a paper, I was interested in locating only a couple of the papers from the reference list, since (a) they seemed relevant to understanding the paper, and (b) I had never read them; otherwise, I too do not feel that ‘ascertaining the validity and accuracy of the references’ is the job of the reviewer.
One thing we talked about was: Who are the most disgruntled people we deal with as editors?
The answer is not the most obvious one: the authors of rejected papers. No, the crankiest people — or perhaps the people who hesitate least about sharing their anger — are those who did a review, recommended rejection or major revisions, and then are angry when they see the paper published, or published without all of their suggested revisions accomplished.
This makes sense. Although some authors of rejected papers do indeed indicate their displeasure in a less than polite way, most do not, perhaps because they hope to publish in that same journal at some later date, or perhaps because they recognize the awesome wisdom of the Editor.
Reviewers, however, have performed a free service for a journal, and may have devoted quite a lot of time to a review. To have that review seemingly ignored can be infuriating. I have felt that way as a reviewer, though I can’t imagine angrily confronting an editor about it.
My editor-colleague has experienced this angry jilted-reviewer situation a lot more than I have, and I was curious about that. He has been an editor longer than I have, and that may be the explanation. Nevertheless, I couldn’t help wondering whether he ignores or disagrees with reviewer recommendations more often than I do. To go against a substantive review and recommendation to reject by a reviewer, I need to be very sure that the reviewer criticisms are unfounded or excessively negative.
It is seldom the case that highly negative reviews are without basis, although it does occur. Some reviewers hate every manuscript they review (but nevertheless provide valuable comments, so editors continue to solicit their reviews); some reviewers are less objective than one ideally hopes a reviewer will be about particular topics; and in some cases a reviewer misunderstands a manuscript (e.g. owing to poor writing by the author or careless reading by the reviewer).
In some cases, one reviewer hates a manuscript but another thinks it is excellent and fascinating. In those situations, at least one of the reviewers is going to be annoyed no matter what the final decision is. That’s not what matters, though. An editor has to take every substantive review seriously, read the ms and reviews carefully, and make a decision, possibly after seeking additional reviews.
Open review (you knew it was coming, didn’t you?) might bring down the number of disgruntled reviewers, since it gives reviewers who recommended rejection a chance to see what made the editor(s) accept the paper in spite of their recommendation.
In this blog, a few times in the recent past, I have discussed the peer reviewing process with specific reference to the training, if any, that a reviewer receives for the process: here and here, for example.
Well I don’t know about “formal” training, but I certainly received some informal training in manuscript review from a postdoctoral mentor. The commenter, Greg Cuppan, has a great point when it comes to grant review.
I am hoping that most readers’ experience with manuscript review is similar to mine, in that during training (certainly as a postdoc) the mentor provides a scaled opportunity for trainees to learn paper reviewing. One approach is simply the journal-club type of approach, in which the trainee(s) and mentor read over the manuscript and then meet to discuss strengths and weaknesses. A second approach might be for the mentor to simply assign the trainee to write a review of a manuscript the mentor has received, and then meet so that the mentor can critique the trainee’s review.
[I should note here that I do not consider the sharing of the manuscript with the trainees to be a violation of confidentiality. The trainees, of course, should consider themselves bound to the same confidentiality expected of the assigned reviewer. I can imagine that this runs afoul of the letter of many editorial policies, not sure of the spirit of such policies at all journals. The one journal editor that I know fairly well is actually a major role model in the approach that I am describing here, fwiw.]
Ideally, the mentor then writes the final review and shares this review with the trainee. The trainee can then gain a practical insight into how the mentor chooses to phrase things, which issues are key, which issues are not worth mentioning, etc. Over time the mentor might include more and more of the trainee’s critique in the review and eventually just tell the editor to pass the review formally to the trainee. It is worth saying that it is obligatory mentor behavior, in my view, for the mentor to note the help or participation of a trainee in the comments to the editor. Something like “I was ably assisted in this review by my postdoctoral fellow, Dr. Smith”. This is important mentoring by way of introducing your trainee to your scientific community, very similar to the way mentors should introduce their trainees to members of the field at scientific meetings.
I am not sure that “formal” training can do any better than this process and indeed it would run the risk of being so general (I am picturing university-wide or department-wide “training” sessions akin to postdoctoral ethics-in-science sessions) as to be useless.
While I haven’t had any experience with post-doctoral ethics-in-science sessions, I am still not sure why we cannot have formal training. Here is how I envisage the training: say I pick a few manuscripts for which I also have access to the reviews they received, as well as to the post-review versions of the manuscripts/papers. With these in hand, one can always go through the process that DrugMonkey describes. And, by carefully choosing the manuscripts and reviews, this process can be used not only to show how to review but also to teach how not to review. By the way, as I noted earlier, PLoS journals, which give open access to reviews, are also ideal for such a course, though the fact that there is no access to the pre-review manuscript does reduce their usefulness a bit.
DrugMonkey, however, does seem to agree that some sort of training for reviewing project proposals is a good idea:
My view is that this is most emphatically not part of the culture of scientific training, in contrast to the above mentioned points about manuscript review. So I agree with Cuppan that some degree of training in review of grant applications would go far to reduce a certain element of randomness in outcome.
I happen to think it would be a GoodThing if the NIH managed to do some degree of training on grant review. To be fair, they do publish a few documents on the review process and make sure to send those to all reviewers (IME, of course). I tend to think that these documents fall short and wish that individual study sections paid more attention to getting everyone on the same page with respect to certain hot button issues. Like how to deal with R21s. How to really evaluate New Investigators. What criteria for “productivity”, “ambitiousness”, “feasibility”, “significance”, “innovation”, etc are really about for a given section. How to accomplish good score-spreading and “no you do not just happen to have an excellent pile” this round. Should we affirm or resist bias for revised applications?…
Here again, access to some sample proposals and the reviews they received might be a very good idea; though I do not know of many such proposals and reviews, here is one, submitted to the NSF, that is available online.
In summary, I think formal training for peer review is possible; and the simple step of making the reviews, as well as the pre- and post-review manuscripts, available under open access would not only be an ideal way of delivering such training, but also a nice way of making the review process more standardised, open, and relatively uniform.
Update: Here is a nice paper by Alan J Smith titled The task of the referee (pdf) which gives detailed instructions; thanks to Siddharth for the pointer.
In his piece, Smith suggests that subjective assessment of grants is acceptable: that is, judging the merit of future work based on prior output even if the proposal is sloppy or contains insufficient detail, and judging the merit of proposed work based on where one was educated.
I don’t envision the free access system as simply the status quo, but free. Papers would be ranked directly in terms of status and popularity rather than ranked through the journals they are published in. Ultimately there wouldn’t be journals, and this would make a big difference, as journals are the current carrier of selective incentives and status rewards. It would be easy to refuse to referee, since you wouldn’t fear being shut out of publication of that journal; I suspect refereeing might die. And if status were attached to the individual paper rather than the journal, who would bother to become an editor? It would be a very different world, and in some ways more like (academic) blogging than its proponents may wish to think.
In other words, the partial monopolization of for-fee journals makes it possible to produce status returns to motivate both editors and referees. Returning to the free setting, refereeing will survive insofar as writing detailed referee comments on other people’s work helps with your own research; it is interesting to ponder in which fields this might hold.
The interesting bit for me here is Tyler’s suggestion about the implicit incentives for reviewing: that people referee papers for fear of not being able to get published in the journal in question. My personal take on it (shared by a number of other people, if this discussion is anything to go by) is a little different. I review not so much because I feel that if I don’t review a paper for journal x the editors of that journal will look unkindly on me in future, but because of a broad sense that I send papers out that others ought to review, and hence there’s a diffuse obligation on me to review other people’s papers in turn. In other words, I think that the motivating factor is general reciprocity rather than specific reciprocity. Not only that: when I have been on search committees where we are considering people who have been in the field for a few years, I usually check their resumes to see whether they have been reviewers for a few journals. This isn’t so much to figure out what the editors think of them (very often, editors are happy with whomever they can get as a reviewer) as because it seems to me the best publicly available proxy for whether the candidate is the kind of person who is likely to take on their share of the unofficial responsibilities that any school or department has.
This isn’t to say that Tyler may not be right when he suggests that an open publication world might not support the kinds of detailed and thoughtful review that we hope for, and sometimes get, in the current system. But I suspect (perhaps wrongly) that the mechanism that would undermine reviewing would be primarily a sociological one rather than an economic one. That is, it would have more to do with the disappearance of the social role of reviewer, and the set of perceived general responsibilities that go with it, than with the opportunities for specific quid pro quo interactions between reviewer and editor that the current review system lends itself to.
Trevino suggests, correctly I think, that journals serve an important function independent of housing articles. Journals signal which papers are worth reading. Sure, you could imagine that other signals might operate in a free access system, but they would likely be beset by additional sorting problems: information cascades, author prestige mattering more, and so on. How would a new scholar, especially one at a lower-ranked school, get discovered in such a system? It seems that the costs of searching for high quality papers, and of breaking into the system as a new scholar, would be much higher in a free access system.
Trevino also addresses the issue of reviewer incentives. She notes that one of the main problems confronting any journal editor is finding good reviewers. A surprising number of scholars simply refuse to review. Like most of us, she thinks reviewing is a professional responsibility. Ethically, you should feel compelled to review papers at journals that you read and to which you submit papers. Unfortunately, a number of people refuse to review (keep in mind that she’s talking about seeking reviewers for AMR, one of the top journals in organizational scholarship; the headaches that editors at lower-ranked journals have are probably much greater).
She notes that there are many good reasons for choosing not to review a paper. You may not read the journal or ever seek to publish in the journal, you may be too closely associated with one of the authors, or the paper may lie outside your area of expertise. These are all good reasons for not reviewing. But some scholars simply don’t review, period. The evidence, then, suggests that, as Tyler thinks, there is at least a minimal reviewer incentive problem. Her solution?
As ethics ombudsperson for the Academy (another Academy role I play), it occurred to me that perhaps we should consider adding a statement of this responsibility to our ethics code and enforce it. It’s fine for someone to “opt out” of reviewing. But, if so, shouldn’t that individual “opt out” of the submission process too? The Academy journals now have a very efficient web-based tracking system, so these individuals are easily identifiable. What do you think? Should we consider enforcing the quid pro quo expectation in some way?
If scholars are going to treat the journal in an instrumental way, refusing to review because they don’t want to bear any of the collective costs of producing a quality journal, shouldn’t the journal have the right to refuse their article submissions as well? Trevino’s suggestion is an interesting one. Notably, she’s taking Tyler’s logic to the extreme. Normative constraints might not be enough; professional responsibilities/ethics may have to be actively enforced.
Open Access Open Review journals that give scholars the option to review papers anonymously, pseudonymously, or under their own names are the best option: reviewers who so wish can get credit for their contributions in such a model, while those who fear a backlash for the kind of opinions they give in their reviews can still review without having to give away their identities.
The editor’s role is to evaluate the manuscript’s suitability for publication — does it conform to the journal guidelines? Is it scientifically valid? Does it cite the existing literature appropriately? Do the observations support the claims made? A few manuscripts may be rejected immediately, because they fail to meet basic criteria of scientific value or readability.
Most manuscripts require the editor to seek out the opinions of additional experts in the process of peer review.
Since the paper that Hawks discusses in the post is published in PLoS, you can also access the peer reviews that the paper received (in their original format), here and here. As you can see, the reviews are very different in their conclusions. Here is the anonymous reviewer:
This paper is unacceptable for a variety of reasons and has so many fatal flaws that there are no reasons to ask for revisions. When I accepted the invitation for review, I anticipated a well-reasoned, well-documented manuscript, but this paper is the opposite. I list only a few of the problems, but it is full of citation errors, unreliable correlations, statistical manipulations and lacks essential documentation.
In short, this paper is so full of errors and misinterpretations that it is completely unacceptable. At best they have discovered on Palau some small individuals, but this is not well documented by them and not especially important since they are also found on Flores and other SE Asian islands. It adds little but confusion to the issue of the taxonomic position of the Liang Bua material.
Here is the second reviewer Robert B Eckhardt:
To begin with, I suspect that many of the comments that will be written by others about “Small-bodied humans from Palau, Micronesia” by Berger, Churchill, De Klerk and Quinn will be devoted to critical comments focusing on what the Palau material described here is not: Very likely it will be said by more than a few paleoanthropologists that the Palau sample is not pertinent to tests of hypotheses about the Liang Bua Cave skeletons from Flores, particularly that of the most complete specimen found there, LB1. I would be surprised, in fact, if the majority of the comments on this paper are not negative. Since the beginning late in 2004 of the controversy over the Flores skeletons, my estimate is that roughly 80% of those who consider themselves to be paleoanthropologists think that “Homo floresiensis” is a valid new species of hominin. Judging from the array of papers and posters presented at the 2007 Annual Meeting of the American Association of Physical Anthropologists, this percentage distribution has remained fairly constant for about three years.
Against this background, from the experience of the last several years, members of our own international research group (for membership in which see Berger, et al., reference 4) often have encountered such illuminating scientific comments on our work from Morwood group collaborators as “rubbish” (too often to bother tabulating), and such fascinating morphological assessments as “Robert Eckhardt is thick as a plank,” (Peter Brown, January 2006 Discover magazine [this characterization has been falsified, however, since in a subsequent scientific meeting at the University of Pennsylvania, 10 February 2006, my wife, Carey, used an anthropometer to demonstrate that I am, in fact, thicker than two short planks]). Just to make sure that everyone in the game understood that what sport aficionados refer to as “trash talk” was officially endorsed, Nature (31 August 2006) “warmly welcomed” [their phrase] our group’s detailed paper in Proceedings of the National Academy of Sciences of the United States (PNAS) [reference 4 in Berger, et al.] under the editorial title “Rude paleoanthropology.” Against this background of experience, I suggest that Berger, et al., as well as readers of this journal, be ready for all sorts of attempts at dismissal of the work at hand and its importance. Forewarned is forearmed.
Rather than what it is not, though, we should begin with what the paper by Berger, et al., is; their own words serve just fine in this regard: “We feel that the most parsimonious, and most reasonable, interpretation of the human fossil assemblage from Palau is that they derive from a small-bodied population of H. sapiens (representing either rapid insular dwarfism or a small-bodied colonizing population), and that the primitive traits that they possess reflect either pliotropic [sic] or epigenetic correlates of developmental programs for small body size.” Much of the rest of their paper describes the geographic and temporal settings, plus some detailed, professionally competent, morphological descriptions of the Palau skeletal material. There is no need to repeat those descriptions here, but they are well worth reading, and re-reading.
Interesting, isn’t it?
In any case, there is more in Hawks’ post about the paper, its contents, the reviews it received, and the associated politics; for example, here is Hawks on the comment that the peer review of the paper was influenced by media reports:
Dalton emphasized the media attention to the find, particularly focusing on the role of the National Geographic Society. NGS produced a documentary about Berger’s work on Palau (he is an NGS grantee).
In this case, National Geographic funded the work and apparently produced a documentary about it. Their production wasn’t disclosed to the journal, and I view it as irrelevant to the scientific evaluation of the manuscript.
Paleoanthropologist Tim White is quoted in Dalton’s story, saying that it appears that the “review process [was] driven by popular media.” Since White was not involved in the review process of this paper, he obviously is just speculating.
I tend to give him the benefit of the doubt, since in this story it appears that Dalton was trying to play up any contrary quotes about the findings. Why else would he run otherwise-uninformed comments of the kind in the story?
I would tend instead to ask these questions: Does the Nature Publishing Group (NPG), in publishing Rex Dalton’s piece, have a vested interest in the credibility of their own journals, in comparison to open access outlets like PLoS? Do NPG journals regularly receive manuscripts and publish them based on the associated media attention? Do they have an interest in pressuring grant agencies, like the NGS, into encouraging submission of manuscripts to NPG journals instead of alternate outlets? Does NPG have a well-established record of running stories questioning the value of open access publications?
A very interesting piece; take a look!
With so many academic and science blogs around, one would tend to think that lots of people would be writing about their paper-writing experiences (and, more importantly, about the experience of getting papers published); however, there aren’t many blog posts I have read that discuss these issues.
Over at orgtheory, Teppo, Brayden and Fabio write, respectively, about finding a home for your not-so-successful papers, about a wiki where you can post your experiences about submitting papers (meant for sociology journals I understand), and, about journal response times and what it tells about your paper.
I love the wiki idea; pretty soon, as with Amazon and Walmart product pages, I guess you will be able to go to a journal page, read about the experiences of the authors who published or did not publish there, and make up your mind about submitting your paper! If people start publishing their papers, as well as the reviews that they received, in such a wiki, it will be of great use. However, I do not know if authors can actually publish the reviewer’s report (since the report, in principle, belongs to the reviewer, and there is no mechanism by which the author(s) can contact the reviewer to get his/her permission to publish it). This is where an open review process would come in handy, I suppose.
Though I have seen and read many books, articles, tutorials, and blog posts about technical writing and presentations, the pieces I have read about peer reviewing are few and far between.
And Doug at Nanoscale views remedies the situation a bit with his post on reviewing; first, he explains why reviewing is good for you:
First, I’d like someone else out there to do me the same courtesy – well written, thorough, timely referee reports almost always improve the quality of scientific papers. Sometimes it’s just a matter of the referee having fresh eyes and a different perspective; a referee can point out that something which seems obvious to you may not be clear to others. Second, reviewing is a way to keep abreast of what’s going on out there in the community. Third, reading articles and proposals and having to write reviews is intellectually stimulating – it gets me to think about new things and areas that aren’t necessarily my primary interest.
More importantly, Doug goes on to give some suggestions about the report writing too:
When writing a report, I try to produce something that’s actually useful to the authors (as well as the editors in the case of journal articles). I briefly summarize the main points of the paper or proposal to indicate that I’ve actually read it and understand the key ideas. For a paper, I emphasize my overall opinion of the work. Then I point out anything that I found unclear or any parts of the argument that don’t seem supported by the data or calculations, with an eye toward what would improve the manuscript. I rarely reject papers out of hand, since I rarely get manuscripts to review that I think are hopeless (though some are submitted to inappropriate journals). On the other hand, it’s pretty rare that I think something is absolutely flawless (though if my comments are minor I don’t ask to see the paper again). I truly don’t understand why some people submit two-sentence referee reports that are dismissive – this doesn’t help anyone. I also don’t understand why a small number of people can be venomous in reviews. Ok, so you didn’t like the work for some reason – why get nasty? Just explain rationally why you don’t think the paper is right. The point of refereeing is not to fire off insults under the shelter of anonymity – that’s what blogs and internet forums are for.
He also has some interesting stuff to say about refereeing project proposals (like grade inflation). A must read post. Have fun!
There are several things that I have learnt from the reviewing process–about choosing journals, writing papers, and the peer review process itself. I wanted to put them down as a last post in this series. However, one disclaimer is due here. Not all of what I say is relevant to the paper that I reviewed; the paper I reviewed is just the starting point for some of these thoughts.
Choosing a journal
There are several factors that go into choosing a journal in which to publish one’s results; for example, Doug at Nanoscale views recently identified a few of them: impact factor, wider audience, topicality, and time-to-publish. One way to choose is to identify the key papers on which you are building your work, and see whether any of the journals in which those key papers were published is suitable for your purposes. However, sometimes this simple mechanism does not work. For example, if you are building your work on Eshelby’s 1959 paper in Proceedings of the Royal Society of London (PRSL), you obviously cannot send your paper to PRSL (except, maybe, in some rare cases). Also, sometimes you might be picking up an important concept from one area and applying it to a problem in another, so the target audience you have in mind is different. In such cases, great care should be exercised in choosing the journal; here again, a journal that you read regularly, and in which you find interesting material, is the one to shoot for.
Writing the paper
More importantly, once you have chosen the journal, you have to tune your paper to the journal’s target audience. While a wrong choice can lead to rejection and hence to delay in publishing, even a right choice can lead to rejection and/or delay if the presentation style is wrong. What makes this process more difficult is the inhomogeneity across different papers within a single journal, and, more often than not, within a single volume. Remember that most scholarly publications do not go through Scientific American-style editing to make sure that some kind of uniformity is maintained across different papers; hence, depending on the combination of authors, editors, and reviewers (not to mention other extraneous factors such as luck, your pedigree, how well known you or your co-authors are in the field, and, worst of all, even your gender), papers differing in style and content make it through the peer review process.
Once a journal is chosen, since most journals give you the option of suggesting potential referees (as well as asking that the paper not be sent to specific ones), there is a temptation to tune your paper with the referees in mind; more often than not, this tendency shows up in the choice of references (not to mention that some referees do demand that all their papers be cited, irrespective of relevance or context). This temptation should be avoided: for one thing, it is not assured that the referees will be chosen from your list; and even if they are, as Mary-Claire van Leunen put it so eloquently in her Handbook,
Scholarly writing is distinguished from all other kinds by its punctilious acknowledgment of sources. (…)
For the reader, citation opens the door to further information and to independent judgment. He can find more about your topic, fill in the background, catch up on what he’s missed. He can also judge for himself the use you’ve put your sources to. (…)
Citation keeps you honest.
From this point of view, referring to irrelevant papers and papers which you have not specifically used is as dishonest as not mentioning your sources.
Peer review process
Though I have seen and read many books, articles, tutorials, and blog posts about technical writing and presentations, the pieces I have read about peer reviewing are few and far between.
The fact that peer reviewing is not open and hence there aren’t many samples available for a closer study also makes it difficult for beginners. From that point of view, making reviews open access, I think, will, to some extent, promote uniformity of the review process, and might even bring about some standardisation in the long run.
In my own case, the tendency to edit was too strong, and I had to fight it every step of the way; I think that, barring noting some of the glaring errors (technical or linguistic), it is not the job of the referee to edit a manuscript, though giving suggestions to the authors is fine.
Finally, I do not know how well known this is: but, there might be some perks to peer reviewing!
By the way, I have always been a vocal advocate not only of open reviews but also of letting the authors know the name of the referee and her/his affiliation. So, when I completed my review, I wondered whether I would want to put my name and affiliation at the end of the report. I realise that the answer to that question is a conditional yes: yes, if the manuscript as well as the review is out in the open for the entire world to see, and no, if not.
By far the most time-consuming and difficult part of the exercise, I found, was writing the referee report; there are several reasons for that. The first is that I generally take more time to write; the second is that this was the first time I was writing a document of this type. Though I had seen some referee reports when I received them for my own papers, I was not sure they were the best samples, nor did I know how much editing was done to them before they were passed on to me. What I would have liked is to have seen some of the reports written by people I respect a lot for their integrity as well as their writing skills; unfortunately, the peer review system being what it is, I was not sure it was proper for me to ask them to show me some of their reports. So, as I mentioned in my earlier post, I asked my mentors if they would be willing to read through mine and give me comments and suggestions; fortunately for me, they were. And that saved the day. The third reason for the difficulty is, of course, the delicacy needed to put forth one’s views in compelling language without looking like a jerk. I can only hope I succeeded!
In any case, my report consisted of two sections: one with critical comments and a recommendation based on them, and another with some minor corrections and quibbles that would help the authors improve the manuscript. While writing the report, I also kept in mind the tips from the editors of the journal, and tried to answer questions of originality, relevance, and presentation.
Though there was an option for me to give confidential remarks to the editor (which would not be passed on to the authors), both my mentors felt that all of my comments were such that they could be passed on to the authors, and so that is what I did.