The adage ‘publish or perish’ rings increasingly true. The pressure to amass a sizable list of publications can be daunting, often leading to a focus on quantity over quality. The result is a deluge of papers that, while passing the litmus test of peer review, are glaringly deficient in robust research design and make questionable contributions to the discipline. What’s more distressing is that some of these papers not only fail to advance our collective understanding, they actively promote misleading narratives, which further muddies the waters of our field (I cannot believe how many media comparison studies still get published). This dilemma is compounded by the gatekeeping practices of prestigious journals. As an early-career faculty member, my research is more likely to face a desk rejection based not on the merit of the work, but on an editor’s desire to artificially inflate their journal’s rejection rate. The unspoken game of statistics played behind the scenes can stifle genuine, valuable research while promoting an illusion of exclusivity and prestige.
Adding to this, the excessive technical scrutiny applied during the review process often borders on the absurd. Papers can be sent back because we overlooked pasting the abstract into a fourth different place, as required by convoluted and poorly outlined author guidelines. Rather than focusing on the academic merit of a paper, the process can devolve into a trivial pursuit of format perfection. But that is a topic for another blog article at another time.
As an early-career faculty member, I’ve found myself in the midst of this system far too often. As of this writing, I have 19 peer-reviewed articles to my name, another dozen under review, and a handful of published proceedings and chapters. Simply put, I’ve had my fair share of encounters with the current state of peer review in academia. I don’t say this to boast or to celebrate the fact; if anything, my publication count is emblematic of the very issue. I do, however, want to point out that throughout this journey, I’ve observed a glaring disconnect between the quality of reviews, the expectations placed on authors, and the reality of the ‘publish or perish’ mindset. One thing has become painfully clear: the feedback I receive seldom pertains to the heart of my actual research. The analyses, the rigor of the study, and my findings rarely seem to merit consideration. It’s as if the foundation of the scientific method has been replaced by a checklist that completely misses the point of academic discourse.
Among the countless reviews I’ve received, there has been only one instance where I was asked to conduct additional analyses, and even then the request did not alter the research question or scope; the reviewer was simply curious. Instead, the bulk of the feedback I receive consists of what I would call surface-level revisions. In fact, my most common revision requests revolve around the following:
- Adding more detail: The irony here is that this information is often content I’ve deliberately cut to comply with the seemingly arbitrary and often meaningless word limits of online journals. One can’t help but question the logic of stipulating a stringent word count only to let reviewers ask for thousands of extra words during revision. This problem is so widespread that I now preemptively create a Google Doc to store my surplus content, ready to add it back when the inevitable request for more detail arrives. I sometimes even leave out a section deliberately; Conclusions and Limitations is a go-to, because it gives reviewers a way to feel they made a meaningful request while costing me little work to add back. Journals need to remove word limits or be okay with details being left out (spoiler: it’s the word limit that should go).
- Expanding the literature: It’s an open secret that when a reviewer asks you to cite some work, this often translates to ‘cite my work.’ While the process can sometimes lead to valuable discussion, it frequently feels like a diversion from the main goal of the study. I have occasionally received good references this way, but more often than not it seems less about the research and more about the egos at play. These revisions often have me adding entire sections that are not relevant to my paper (e.g., I once added two pages on brain-computer interfaces and adaptive learning technologies to a paper about analyzing engagement with multimedia). To be clear, I am not against bringing in relevant literature. I am against forcing citations and changes onto a paper to ensure it includes somewhat related work that did not inform the study.
- Apparent non-reading of the paper: Unfortunately, this is far more common than it should be. The reviewer suggests changes that make it glaringly apparent they haven’t fully read or understood the paper. One solution? Delete the relevant section, turn on track changes, and paste it back in. Maybe the reviewer will actually read it now…maybe. Honestly, this is sad, and it is disrespectful to the paper and to the hard work of the author team.
- Minor edits: Adding different keywords, fixing APA style, and the like. These are tasks that can and should be handled during copy editing (better yet, all formatting should be handled in copy editing, including putting the manuscript into APA style and formatting the figures; researchers give their work away for free and provide reviews for free, yet we are asked to do the job of for-profit editors to meet their arbitrary formatting guidelines... but I digress). Yet I’ve been subject to ‘Major Revisions’ for changes that took all of five minutes to implement. The process seems to have lost sight of what a ‘major revision’ actually signifies.
Ask yourself this: when was the last time you read a paper that genuinely pushed boundaries and offered insightful contributions? How often do you find yourself sifting through a stack of papers, many of which add little more than noise to an already crowded space? Equally important, when was the last time you received a review that truly engaged with your research on a deep level? Have you recently given a review that critically examined a paper’s methodology, challenged its hypothesis, and tested its conclusions? We need to remember why we have peer reviews in the first place: to uphold scientific integrity. They’re there to put our research under the microscope, to question our methodologies, to challenge our hypotheses, and to test our conclusions. But when reviews veer away from these fundamental objectives and get mired in trivialities, we need to ask ourselves: What are we even reviewing anymore?
These aren’t rhetorical questions; they’re a call to action. A prompt to reconsider our approach, to ensure we’re not merely playing a numbers game but are genuinely pushing the boundaries of our understanding.