On this blog, a few times in the recent past, I have discussed the peer reviewing process, with specific reference to the training, if any, that a reviewer receives for the process: here and here, for example.
DrugMonkey, over at his/her blog, shares his/her experience:
Well I don’t know about “formal” training, but I certainly received some informal training in manuscript review from a postdoctoral mentor. The commenter, Greg Cuppan, has a great point when it comes to grant review.
I am hoping that most readers’ experience with manuscript review is similar to mine, in that during training (certainly as a postdoc) the mentor provides a scaled opportunity for trainees to learn paper reviewing. One approach is simply the journal-club type of approach in which the trainee(s) and mentor read over the manuscript and then meet to discuss strengths and weaknesses. A second approach might be for the mentor to simply assign the trainee to write a review of a manuscript the mentor has received, and then meet so that the mentor can critique the trainee’s review.
[I should note here that I do not consider the sharing of the manuscript with the trainees to be a violation of confidentiality. The trainees, of course, should consider themselves bound to the same confidentiality expected of the assigned reviewer. I can imagine that this runs afoul of the letter of many editorial policies, not sure of the spirit of such policies at all journals. The one journal editor that I know fairly well is actually a major role model in the approach that I am describing here, fwiw.]
Ideally, the mentor then writes the final review and shares this review with the trainee. The trainee can then gain a practical insight into how the mentor chooses to phrase things, which issues are key, which issues are not worth mentioning, etc. Over time the mentor might include more and more of the trainee’s critique in the review and eventually just tell the editor to pass the review formally to the trainee. It is worth saying that it is obligatory mentor behavior, in my view, for the mentor to note the help or participation of a trainee in the comments to the editor. Something like “I was ably assisted in this review by my postdoctoral fellow, Dr. Smith”. This is important mentoring by way of introducing your trainee to your scientific community, very similar to the way mentors should introduce their trainees to members of the field at scientific meetings.
I am not sure that “formal” training can do any better than this process and indeed it would run the risk of being so general (I am picturing university-wide or department-wide “training” sessions akin to postdoctoral ethics-in-science sessions) as to be useless.
While I haven’t had any experience with post-doctoral ethics-in-science sessions, I am still not sure why we cannot have formal training. Here is how I envisage the training: say I pick a few manuscripts for which I also have access to the reviews they received, as well as the post-review versions of the manuscripts/papers. With these in hand, one can go through the process that DrugMonkey describes. And, by carefully choosing the manuscripts and reviews, this process can be used not only to show how to review but also to teach how not to review. By the way, as I noted earlier, PLoS journals which give open access to reviews are also ideal for such a course, though the fact that there is no access to the pre-review manuscripts does reduce their usefulness a bit.
DrugMonkey, however, does seem to agree that some sort of training for reviewing project proposals is a good idea:
My view is that this is most emphatically not part of the culture of scientific training, in contrast to the above mentioned points about manuscript review. So I agree with Cuppan that some degree of training in review of grant applications would go far to reduce a certain element of randomness in outcome.
I happen to think it would be a GoodThing if the NIH managed to do some degree of training on grant review. To be fair, they do publish a few documents on the review process and make sure to send those to all reviewers (IME, of course). I tend to think that these documents fall short and wish that individual study sections paid more attention to getting everyone on the same page with respect to certain hot button issues. Like how to deal with R21s. How to really evaluate New Investigators. What criteria for “productivity”, “ambitiousness”, “feasibility”, “significance”, “innovation”, etc are really about for a given section. How to accomplish good score-spreading and “no you do not just happen to have an excellent pile” this round. Should we affirm or resist bias for revised applications?…
Here again, access to some sample proposals and the reviews they received might be a very good idea; though I do not know of many such proposals and reviews, here is one, submitted to the NSF, that is available online.
In summary, I think formal training for peer review is possible; and the simple step of making the reviews, as well as the pre- and post-review manuscripts, available under open access would itself be an ideal way of delivering such training, as well as a nice way of making the review process more standardised, open, and relatively uniform.
Update: Here is a nice paper by Alan J Smith titled The task of the referee (pdf) which gives detailed instructions; thanks to Siddharth for the pointer.
In his paper, Smith suggests that subjective assessment of grants is acceptable: that is, that one may judge the merit of future work based on prior output even if the proposal is sloppy or contains insufficient detail, and judge the merit of proposed work based on where one was educated.
Tags: Peer review
March 30, 2008 at 7:12 pm
I am troubled by Siddharth suggesting the Alan Jay Smith paper, “The task of the referee”, as a good reference point, for several reasons, as mentioned in my comments here: http://brain.brainery.net/mcblog/?p=55.
March 30, 2008 at 8:58 pm
Dear Cuppan,
Thanks for stopping by, and for the pointer; I am hoisting the link to your comments on to the post. I have never had any experience with reviewing project proposals, and I do agree with you that, whether for papers or proposals, the evaluation should be based strictly on the merits of what is on the paper, though I do not know if that is how it actually works.
Guru