title: On Manuscript Review Procedures
subtitle: A Critique and reflection of the process
date: 2021-05-09T08:40:00
tags/categories: collaboration, academia, writing, education

This August, I'll have been in academia for three years. I've reflected on education and collaboration, and on how to minimize friction by writing papers using just Markdown. Of course, much of those three years has been spent hunched behind a glowing screen, fingers cramped and ready to press the correct buttons. That involved parentheses (coding) far less than it involved commas and the word "therefore". As the (in)famous saying goes: publish or perish - and this is especially true for a PhD candidate such as myself. I spent the 11 years before that listening to and chanting demo or die instead, and especially therein lies the rub.

You see, writing involves reading, writing, and re-writing, primarily based on one very important thing: feedback. In the academic world, feedback arrives through what is called a "peer review" process, in which fellow academics, usually anonymously, constructively criticize your work. To me, however, there are quite a few hiccups in this feedback process. This applies not only to academic paper submissions, but also to manuscripts submitted to potential publishers. That said: rant away!

Intro: Code feedback

Programming Digression's Akram Ahmad loves mixing prose with code, and rightfully declares a tight relationship between coding and a (reinforced) feedback loop:

the more code I've written up, the more I've seen thrown into sharp relief a feedback loop at play.

This is where I come from. This is what gets my blood pumping, and this is what my dreams are made of: Red, Green, Refactor. A nice form of systemic thinking that, as a prerequisite, indeed includes a reinforced feedback loop. The point of TDD feedback loops, however, is that they are nimble. You know, fast. Small. Speedy in execution. func TestMyMethod(t *testing.T) {}. Boom, there.
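To make the contrast with manuscript reviews concrete, here's a minimal sketch of that loop in Go. The package and function names are made up for illustration; the point is simply that go test answers red or green in seconds, not months.

```go
// A sketch of red-green-refactor; everything lives in wordcount_test.go for brevity.
package wordcount

import (
	"strings"
	"testing"
)

// CountWords is a deliberately tiny, hypothetical unit under test.
func CountWords(s string) int {
	return len(strings.Fields(s))
}

// TestCountWords is the feedback loop in action: write this first (red),
// implement CountWords until go test passes (green), then refactor.
// The whole cycle takes seconds.
func TestCountWords(t *testing.T) {
	if got := CountWords("publish or perish"); got != 3 {
		t.Errorf("CountWords(%q) = %d, want 3", "publish or perish", got)
	}
}
```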

Another crucial aspect of such code feedback is when it is received: during development, not after. In other words, during the writing of your manuscript, not afterwards. Or even better, during the setup and development of your scientific experiment, not during or after the write-up of the results.

Problems with manuscript reviews

Apart from the very fact that they are manuscript reviews, and not experiment reviews, I've noticed a few other hitches over the years, both when submitting manuscripts and when serving on reviewing committees myself. Let's start with the most obvious problem.

1. Speed of feedback

This one can be split into two parts: (1) the time-frame between submitting the manuscript and receiving the reviewers' feedback, and (2) the time-frame between the experiment setup and the kick-off of the review process itself.

First, depending on the venue you submit to (be it a conference or a journal), these intervals vary a lot. For instance, for our latest journal paper, we had to wait five months. Add another five if they ask for a revision. Colleagues I talk to think that's quite fast. Fast!! By the time something like this ends up in your mailbox, you've long moved on to other experiments, other parts, or even other projects. The mail is usually accompanied by me mumbling "what kind of thing did we do again?". Before I can process the feedback, I have to invest several days into getting re-acquainted with my "old" work. That's not the end of the world, but once that's done, I have to do the same thing again with my new work! Too much context-switching overhead.

Secondly, it is not unusual to receive feedback along the lines of "your methodology is not up to par, you should have used x and y from paper z, please take a better look and fix this". It's not that this is bad feedback, it's that it comes way too late: the experiment has long been done, and most of the time there is simply no way to adopt these new methods (your interviews are over, your students grew up, ...). We usually end up patching things here and there to the best of our ability, but every single time this leaves me with a very bad feeling about it all. The research could have been much better if this feedback had been given during the setup and execution, not as a remark on our summary of it.

As said before in agile and academia, iterative methods in the academic world are still a very, very long way off. To me, this is very worrying, and a potential red flag. Still, the problem isn't confined to academia: I had to wait almost a year for feedback from publishers after submitting my book manuscript. And that feedback was, compared to what I receive from my academic colleagues, complete and utter garbage.

2. Quality and consistency of feedback

One of the perks of the long waiting time should be the amount of high-quality feedback you can expect to receive. Sadly, that is not always the case. Especially at conferences, I've noticed a distinct lack of consistency in peer review feedback. I'm sure I'm part of the problem, though, as I also did my best reviewing papers for the first time at a couple of conferences. There are (very) long guidelines with confusing terminology that, to make things worse, differ from venue to venue. In the end, I end up copying the structure and style of others, which might not be the best approach.

Sometimes, suggested additions to the literature are of questionable use. For example, "the authors might find this interesting: title x by author y in venue z" could be a concealed message proclaiming "I am author y and want you to include my work x in your study". With double-blind reviewing, you never know, and yes, I've read reports of this happening: this survey on open peer review is very interesting. Of the 85% of respondents who experienced single-blind review, only 52% described it as effective (and it was the preferred option for only 25%). In single-blind review, the reviewer stays anonymous but the author does not, potentially introducing a lot of reviewer bias [1]. My solution would be to turn that around.

Outro: Some suggestions

In agile software development, pair programming induces a form of social peer pressure and amplifies the principle of fast feedback. I don't see why pair reviewing wouldn't come with the same benefits. I'd like to propose a few suggestions - ideas stolen from my background as an agile developer - that could potentially help iron out the above shortcomings.

1. Shorten the feedback loop.

Easier said than done, right? Well, actually, a lot of thought has already gone into shortening this loop - the next step is to actually implement it. For example:

  • Provide peer review incentives. The linked article goes into detail on how and why this could shorten review periods, so I'll be brief: I agree. Don't forget that paper reviews are not part of the academic job description, yet they can consume a large portion of your precious time. An incentive might thus involve changing that, instead of just providing a financial bonus.
  • Set up a faculty-wide pre-review group that can review work, or even ideas, before or during implementation. The group can act as a buffer before sending work out to a venue, and can help you get your method straight. It does not prevent reviewers from suggesting that you redo the entire thing, but it does provide early feedback from a "black box" - a third party not directly involved in your research.

Papers like "Closing the feedback loop: Ensuring effective action from student feedback" make me chuckle. Why do we try to close that loop when giving feedback to students, but not to our peers? Of course, nothing prevents you from asking feedback to colleagues of other departments/faculties/universities before struggling to get through the official review process.

2. Review in group.

Leverage the swarming/mob programming/pair programming concepts and employ them in the review process. Oh, right, sorry. I meant ensemble programming; "mob" is considered derogatory nowadays. The linked booklet is very nice, by the way. That brings me to another idea: unify review procedures/instructions for juniors like me who don't know how to review. And please, do not make this a forty-page document of academic prose.

"Review in group" could mean a couple of things:

  • All reviewers who review in parallel and discuss the work.
  • Reviewers and authors who discuss the work on the spot. See Ensemble Programming's chapter 1: immediate feedback.

Complaints about the practicality of organizing these sessions are a non-issue nowadays. Yay for Zoom et al.!

3. Review the review.

This is usually, but not always, the responsibility of the associate editor or the respective program chair. I did, however, participate in a few review procedures at venues where this was apparently not commonplace; instead, a couple of grades were averaged, summarized, and simply sent out. This makes me wonder whether improvements could be made in this area.

The closest resemblance I can think of is code commit reviews - which, on their own, are not great either, because they still happen too late: the code is already written! But those reviews are reviews of things that should already have been peer-reviewed by a buddy pair programmer. They usually uncover a few minor but relevant architectural or style-related issues that both programmers might not have thought of.

A review of my review might also help me grow as a reviewer. I have never received any substantive feedback on my reviews, except for "I agree/disagree with reviewer 1", which by itself isn't very useful.

Depending on the reader, the digestibility of this rant might vary. I'm sure many senior academics will brush off most complaints and say either "get used to it" or "I dispute this and that" - but then they're missing the point. My experience with feedback, its relevance, and its speed, is different. Even after years of trying, I still simply cannot fathom why adjustments haven't been made, especially considering the great body of published work on improving the process!

I'm not sure if writing this has made me feel any better...


  1. There's even a phenomenon called "academic bias" that is closely related to reviewer bias, and of course, in true academic fashion, there are tons of papers published on this subject. Publication is one thing. Policy changes are another. ↩︎