Op-Ed: “Publish or Perish” in the Age of AI: Is Research Integrity at Risk?
Part of a new op-ed series featuring IEOR student voices, this piece is a collaboration between Alberto Gennaro (5th Year PhD), Grace He (3rd Year PhD), Ricky Huang (3rd Year PhD), and Jessica Zhao (1st Year PhD).

Artificial intelligence (AI) is rapidly becoming a tool for academic research. It can draft abstracts, summarize papers, suggest citations, and even generate entire manuscripts. Used carefully, these tools can increase productivity and accessibility. Used carelessly—or under pressure—they risk corroding the foundations of scholarly integrity.
That risk became uncomfortably apparent when over 51 papers submitted to NeurIPS, a major AI conference, were found to contain hallucinated references.¹ The episode sparked debate not only about AI-generated text, but also about the incentives that made such submissions possible in the first place. The technical failure points to a deeper flaw in academic culture: its pervasive “publish or perish” mindset.
The Race to Publish
Modern research careers are increasingly shaped by publication counts, citation metrics, and conference acceptances. Hiring, tenure, funding, and prestige often depend less on the quality or reproducibility of work and more on how frequently one appears in top venues. This incentive structure strains integrity. As a result, researchers partition findings into minimal publishable units, rush incomplete work to meet submission deadlines, or overstate findings to clear competitive review bars.
While AI is not the root cause of these pressures, it dramatically amplifies the damage. When speed is rewarded more than care, tools that generate fluent, authoritative-sounding prose make it far easier to produce polished manuscripts quickly. And because less time spent on writing and formatting frees more time for the research itself, papers are now being produced in greater volume, and at a faster pace, than ever before.
The Peer Review Crisis
What makes the NeurIPS incident alarming is not only that AI can hallucinate citations, which many AI users are already wary of, but also that such papers could survive long enough to be reviewed, discussed, and even accepted. That should worry us.
Fabricated citations are not trivial errors. References are the backbone of scholarly verification, allowing researchers and readers to trace ideas, check claims, and build cumulative knowledge. When references are erroneous, the scholarly record is polluted quietly, and the errors cascade through later work. Cross-checking a paper’s citations should therefore be an integral part of the review process.
Most conferences and journals rely on peer review to decide what gets published. Submissions are often handled under double-blind review, meaning authors and reviewers are anonymous to each other. A small set of two to four volunteer reviewers evaluates each work’s novelty, correctness, clarity, and relevance, usually under tight deadlines. Because review is both unpaid and largely invisible under anonymity, there is little direct incentive—professional, reputational, or otherwise—to spend extra hours on painstaking checks. Moreover, the reviewer pool is far outstripped by the thousands of submissions, forcing each volunteer to evaluate many papers quickly. Consequently, careful reference verification is rarely feasible.
Ultimately, the gap between the volume of research produced—now amplified by AI—and the limited capacity for meaningful evaluation through volunteer peer review keeps widening.
AI in Research Needs Governance, Not Denial
The solution is not to ban AI from research. That would be unrealistic and counterproductive. AI tools are already embedded in literature search, data analysis, and writing support—and in many cases they genuinely help researchers, especially those working across languages or disciplines. But the community urgently needs norms and safeguards.
Journals and conferences should require clear disclosure of AI assistance in writing and citation generation. Automated reference-checking tools should become standard, not optional. Review criteria should explicitly reward robustness, transparency, and replication, not just novelty.
Most importantly, academic institutions must confront the incentive structure they enforce. As long as career survival depends on output quantity and venue prestige, researchers—especially early-career scholars—will be pushed toward shortcuts, whether human or machine-assisted.
A Credibility Crisis in Slow Motion
Science depends on trust: trust that citations exist, that experiments were run, that results were not conjured to meet a deadline. AI does not erode that trust on its own. But combined with a hypercompetitive publication economy, it can accelerate a credibility crisis already underway.
The question is not whether AI belongs in research—it already does. The real question is whether academic culture will adapt quickly enough to ensure that intelligence, artificial or human, serves knowledge rather than undermines it.
If we fail to act, fake references will be the least of our problems.
¹ Shmatko, Nazar, et al. “GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers.” GPTZero, 21 January 2026, https://gptzero.me/news/neurips/.