Peer Review is still BROKEN! The NeurIPS 2021 Review Experiment (results are in)
**The Inconsistency of Peer Review**
In the world of academia, peer review is often touted as the gold standard for evaluating the quality and validity of research. However, the NeurIPS 2021 review experiment, in which a slice of submissions was reviewed by two independent committees, shows just how inconsistent the process can be. The results are striking, and we dig into them below.
**A Look at the Numbers**
In the NeurIPS 2021 experiment, a portion of submissions was reviewed independently by two parallel committees, and the two committees frequently disagreed about the very same paper. Of the papers one committee suggested for an oral presentation, only three out of six were confirmed by the other committee. This is not an isolated incident; the earlier NIPS 2014 consistency experiment and similar studies in other areas have turned up the same pattern.
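To build intuition for how a split-committee experiment like this behaves, here is a minimal Monte Carlo sketch. It is not the actual NeurIPS analysis, and the acceptance rate and noise levels are made-up parameters: each paper gets a latent quality, each committee sees it through independent reviewer noise, and each committee accepts its own top quarter.

```python
import numpy as np

# Toy model of a two-committee consistency experiment (not the actual
# NeurIPS analysis): each paper has a latent quality, each committee
# observes it through independent reviewer noise, and accepts the top ~25%.
rng = np.random.default_rng(0)

n_papers = 10_000          # hypothetical pool of duplicated submissions
accept_rate = 0.25         # rough acceptance rate, assumed for illustration
quality = rng.normal(0.0, 1.0, n_papers)   # latent paper quality
noise_a = rng.normal(0.0, 1.0, n_papers)   # committee A's review noise
noise_b = rng.normal(0.0, 1.0, n_papers)   # committee B's review noise

score_a = quality + noise_a
score_b = quality + noise_b

# Each committee independently accepts its top `accept_rate` fraction.
cut_a = np.quantile(score_a, 1 - accept_rate)
cut_b = np.quantile(score_b, 1 - accept_rate)
accept_a = score_a >= cut_a
accept_b = score_b >= cut_b

# Of the papers committee A accepted, how many did committee B also accept?
overlap = accept_b[accept_a].mean()
print(f"P(B accepts | A accepted) ~ {overlap:.2f}")
```

With reviewer noise on the same scale as the spread in paper quality, the second committee confirms only about half of the first committee's accepts, which is the kind of disagreement that makes split-committee experiments so sobering.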
**The Accept/Reject Ratio**
Looking at where papers fall on the accept/reject spectrum, most sit somewhere in the middle, neither exceptionally strong nor exceptionally weak, and for that middle group the two committees' decisions look close to random. Real agreement shows up only at the extremes: a genuinely excellent paper is very likely to be accepted by the second committee as well, and a genuinely poor paper is even more likely to be rejected by both.
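The same toy model illustrates this. The sketch below (again with assumed parameters, not real review data) bins papers by latent quality and checks how often two independent committees reach the same decision in each band; the clearly strong and clearly weak bands come out stable, while papers near the acceptance bar are roughly a coin flip.

```python
import numpy as np

# Toy illustration only (assumed noisy-score model, not real review data):
# committees tend to agree on clearly strong and clearly weak papers, while
# papers sitting near the acceptance cutoff come out close to a coin flip.
rng = np.random.default_rng(1)

n_papers = 200_000
accept_rate = 0.25
noise_std = 0.4            # reviewer noise, chosen purely for illustration
quality = rng.normal(0.0, 1.0, n_papers)   # latent paper quality

def committee_decision(q):
    """One committee's call: latent quality plus independent review noise,
    accept the top `accept_rate` fraction of its own scores."""
    score = q + rng.normal(0.0, noise_std, q.shape)
    return score >= np.quantile(score, 1 - accept_rate)

a = committee_decision(quality)
b = committee_decision(quality)

cut = np.quantile(quality, 1 - accept_rate)   # quality level at the nominal bar
bands = {
    "clearly weak (bottom 10%)": quality <= np.quantile(quality, 0.10),
    "borderline (near the bar)": np.abs(quality - cut) < 0.15,
    "clearly strong (top 10%)":  quality >= np.quantile(quality, 0.90),
}
for name, mask in bands.items():
    agree = (a[mask] == b[mask]).mean()
    print(f"{name:26s} committees agree {agree:.0%} of the time")
```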
**The Role of Randomness**
So why do outcomes feel so arbitrary? Beyond the noise in reviewer judgment itself, the system leans on factors that have little to do with the work: the reputation of the professor submitting the paper, the prestige (impact factor) of the venue, and even social media attention. The last point is particularly striking, because it means the visibility of a paper can hinge on how many likes and shares it collects online.
**The Impact on Ph.D. Students**
For Ph.D. students, this inconsistency can be devastating. With only three to five major conferences to submit to over the course of a year, the stakes of each decision are high, and the randomness of the system means that even excellent papers can go unnoticed. It is not uncommon for Ph.D. students to spend years trying to get their research into top-tier venues, only to be rejected again and again.
**The Solution**
So, what can be done? Some suggest that advisors should grant Ph.D.s based on the quality of a student's work rather than on the number of accepted conference papers. Others propose that universities stop leaning on impact factors and publication counts when granting tenure, and instead focus on metrics such as the quality and originality of a researcher's work.
**The Role of Tenure**
The tenure process plays a significant role here. Tenure is often granted on the basis of reputation and publication metrics such as impact factor, rather than solely on the quality of the underlying research. The result is that professors steer their work toward high-impact-factor conferences over lower-ranked ones, simply because they carry more prestige.
**Grant Agencies and Conference Influence**
Grant agencies reinforce the problem by awarding funding based on the reputation of the researcher and the prestige of the conferences where the work is presented. However reasonable that may sound, it pushes researchers toward high-reputation venues over lower-ranked ones for the same reason: they carry more weight with funders.
**A Solution for Grant Agencies**
One potential fix is for grant agencies to weigh other signals when evaluating research proposals, such as the originality and the actual impact of the work itself, rather than the prestige of the venue it appeared at. This would blunt the influence of conference reputation on who gets funded.
**Conclusion**
In conclusion, while peer review is essential for ensuring the quality and validity of research, its current implementation is inconsistent and at times effectively random. By acknowledging these flaws and changing the incentives built around the system, we can create a fairer, more equitable process that rewards good research and gives Ph.D. students a real chance to succeed.
**The Final Word**
Ultimately, it's up to us as academics and researchers to take responsibility for creating a better system. We must recognize the flaws in our current implementation of peer review and work together to develop new solutions that prioritize quality and originality over reputation and prestige. Only then can we ensure that research is truly valued and rewarded, regardless of conference or impact factor.