
title: The Toxic Culture of Rejection in Computing Academia
date: 2022-08-31T13:52:00+02:00
categories: education
tags: academia
Edward Lee wrote a piece on the ACM SIGBED (Special Interest Group on Embedded Systems) blog entitled The Toxic Culture of Rejection in Computer Science that struck a chord here. He starts out with a critical reflection on the growing habit of rejecting rather than accepting academic papers in conference proceedings:

> We have come to value as a quality metric for conferences a low acceptance rate. This feeds a culture of shooting each other down rather than growing and nurturing a community. The goal of a PC has become to destroy rather than to develop. Many of our venues are proud of their 10% acceptance rates. Are such low acceptance rates justified?

Most conferences I submit to in computing education also pride themselves on low acceptance rates. Theirs hover between 20% and 30% rather than an even scarier 10%, but the mindset is just as prevalent. The longer I'm in this strange world of academia, the more it looks like an absurd numbers game: check out that H-index of mine, how about the millions in funding raised, hey, that conference has a low acceptance rate, it's gotta be good, did you co-author three papers this year?

I've written before about my mixed feelings regarding the peer review feedback procedure, and this post seems to confirm my thoughts further. Here's another telling excerpt:

> Another problem with the double-blind review process is it creates a position of power with no accountability. Reviewers who will not be identified need not be so sure of their statements. If you have published papers, you certainly have seen criticisms that are arrogant and wrong. But because our papers go to conferences, not journals, your opportunity to respond is limited. Some conferences have a “rebuttal” phase of the review process, but, in my experience, this is a sham and serious dialogue rarely emerges.

Your opportunity to respond is not just "limited": it's non-existent. At conferences, which in computer science carry more weight relative to journals than in other fields, papers are simply accepted or rejected. Sure, you can send an e-mail to the committee after receiving the feedback, but since the reviewers were anonymous, there's no way to interact with them directly. The number of times I've received feedback akin to "thing x is missing", only for that thing to be present but skimmed over, is also telling.

Yet the dark picture Lee paints, which he generalizes to the whole field of CS, doesn't have to be true everywhere. I've been a reviewer for the SIGCSE (Special Interest Group on Computer Science Education) community for years now, and as a Program Committee member, there's always a round of civilized discussion after the double-blind review process. Most reviewers, including myself, edit their reviews to take others' remarks into account, and for most papers we reach a unanimous verdict on whether it's a good or a bad one.

Still, that is where my involvement ends: afterwards, all non-obvious rejects are thrown together and hand-picked by another committee, simply because there are too many submissions and only a limited number of slots available. That inevitably means rejecting quite a few very good to excellent papers, something that shouldn't need to happen. Especially with today's hybrid conferences, which sometimes even allow pre-recorded videos, the "limited (physical) space/time" argument no longer stands.

Some CSEd conferences are timed in such a way that if your paper is rejected at conference #1, you simply send it to conference #2. I've had this happen twice: rejected at #1 for various reasons, none of which were touched on by the reviewers at conference #2, where the paper was accepted. Whom to believe, then? Many researchers simply attend conferences because they have to present: they do their thing and barely listen to what others have to say.

Out of the 9 conference papers I've published so far, 5 were accepted on the first try (3 of them at a lower-tier conference), 3 needed a second try, and 1 a third try (we first took a stab at a journal). In practice, that means we usually manage to get our work published by the second try. To be honest, I doubt that is because we take the feedback from the first attempt into account. It feels more like the variability of the reviewers, or what others sometimes call "sheer luck".

The default tendency to reject is appealing to reviewers for another reason:

> Another problem is the built-in conflict of interest that arises from the combination of the low acceptance rates with the fact that many (if not most) program committee members also have papers in the pool being considered. They have an extra incentive to reject (or to not champion) papers to improve the chances of their own papers in the same pool.

I admit that during my own reviews, I take the comments I've received into account and evaluate other papers the same way. If I got a lot of remarks in the past about a methodology section that didn't describe a certain method well enough, and I have to review a paper that happens to use that same method, you can bet I'll complain if they didn't describe it either. If I had to struggle, so should they! Of course that's not entirely fair, but it's very hard to avoid. Something Edward Lee's article does not mention is the lack of reviewer training: most of the time you're simply thrown into the deep end, sometimes with a barely readable guideline to "help" you.

My boss told me this culture also ties into applying for funding: a huge amount of time is poured into overly long and dry proposal documents, effort that is mostly wasted anyway, since very low acceptance rates mean good projects get rejected simply because there is too little budget. If everyone stopped wasting money on writing and reviewing these absurd documents, a lot of budget would become available that could be put to better use: by distributing it directly to researchers.

I guess that solution wouldn't satisfy decision makers who love over-engineering.