Leveraging Artificial Intelligence in Scholarly Publishing
DOI: https://doi.org/10.71426/jassh.v1.i1.pp1-8
Keywords: Generative AI, Scholarly Publishing, Research Integrity, Epistemic Injustice, Posthuman Authorship, Academic Governance, Bibliometrics
Abstract
The integration of Artificial Intelligence (AI) into scholarly publishing constitutes a structural transformation of historical significance, fundamentally reshaping how knowledge is produced, evaluated, and disseminated. This study presents a systematic analysis of AI adoption within the global research ecosystem, focusing on the critical period from 2021 to late 2025. Using a secondary data analysis framework, the paper examines the dual role of generative AI and large language models (LLMs) as both enablers of unprecedented efficiency and sources of emerging epistemic risk. Drawing on bibliometric evidence, industry reports, and peer-reviewed literature, the analysis reveals a rapid escalation in AI use among researchers, reaching 58% globally in 2025 compared to 37% in 2024. While the literature consistently demonstrates that AI substantially accelerates scholarly workflows—most notably in grant writing, literature synthesis, and preliminary review—it also exposes systemic vulnerabilities, including citation hallucination, opacity in reasoning, and erosion of academic integrity. These risks are compounded by the potential amplification of epistemic injustice, as AI systems trained on dominant linguistic and cultural corpora may marginalize non-Western and non-native English scholarship. The study is guided by two objectives: (i) to evaluate the operational efficacy of AI in streamlining research workflows and (ii) to assess the ethical and institutional implications of emergent “posthuman” authorship. Findings indicate that while AI-assisted tools can reduce grant preparation time by more than 90%, they simultaneously generate non-verifiable citations at rates that threaten the cumulative reliability of the scholarly record. Comparative analysis of detection tools and publisher policies further demonstrates that existing governance mechanisms are fragmented, biased, and insufficient for AI-scale knowledge production. The paper argues that academia is entering a posthuman phase of authorship in which human–machine collaboration destabilizes conventional notions of originality, accountability, and intellectual ownership. Without robust governance frameworks and a redefinition of scholarly integrity, the scientific record risks contamination by machine-generated simulacra of knowledge, undermining trust in research as a public good.
License
Copyright (c) 2025 Fatima Zahra Ouariach, Khouloud EL Meziani, Soufiane Ouariach (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.