Volume 17 - Supplement of the 11th Annual Iranian Congress of Medical Ethics | IJMEHM 2024; 17(S1): 1-4

Kazemi A. Survey of COPE, ICMJE, CIOMS, and WAME Statements/Opinions on the Use of AI in Scientific Publications. IJMEHM 2024; 17(S1): 1-4
URL: http://ijme.tums.ac.ir/article-1-6939-en.html
Biomedical Ethics Fellowship, Philosophy and History Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
Abstract:
Journals have begun publishing articles in which chatbots such as Bard, Bing, and ChatGPT were used, and some have even listed chatbots as co-authors. The legal status of authorship varies by country, but in most jurisdictions an author must be a legal person. Chatbots do not meet the International Committee of Medical Journal Editors (ICMJE) authorship criteria, particularly the requirements to give “final approval of the version to be published” and to be “accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.” No AI tool can “understand” a conflict-of-interest statement or sign it, nor do chatbots have affiliations independent of their developers. Since the authors submitting a manuscript must ensure that everyone listed as an author meets the required criteria, chatbots cannot be considered authors.

Authors should disclose the use of chatbots and describe in detail how they were employed. The extent and type of chatbot use in a publication should be clearly indicated, in line with the ICMJE recommendation to acknowledge writing assistance and to detail the study’s methods and results. When chatbots or other AI tools are used to draft new text, authors must note such use in the acknowledgments. All prompts used to generate text, convert text into tables or illustrations, or draft figures should be specified. If an AI tool was used for analytical work, for reporting results (e.g., generating tables or figures), or for writing computer code, this should be stated explicitly in the paper’s Abstract and Methods sections. For transparency and reproducibility, authors should include the complete prompt used to generate the results, the time and date of the query, and details of the AI tool, including its version.

Authors remain fully responsible for material generated by a chatbot, including its accuracy and the absence of plagiarism. They must also ensure appropriate attribution of all sources, including the original sources of content produced by the chatbot. Authors must confirm that the work reflects their own data and ideas and is free of plagiarism, fabrication, and falsification; otherwise, submitting such material for publication constitutes scientific misconduct. Quoted material must be properly attributed with full citations, and the cited sources must actually support the chatbot’s claims. Because chatbots may omit sources that oppose the viewpoints expressed in their output, it is the author’s responsibility to identify, review, and include such counterviews in their articles. (It is worth noting that biases are not exclusive to AI; human authors are subject to them as well.)

Editors and peer reviewers should disclose any use of chatbots in manuscript evaluation or correspondence. If they employ chatbots in communications with authors or colleagues, they must clarify how the chatbot was used. Editors and reviewers are responsible for any content and citations generated by chatbots. They should also be aware that chatbots may retain the prompts and manuscript content provided to them, which could breach the confidentiality of submitted manuscripts.

Authors must specify the chatbot used and the exact prompts (query statements) employed. They should describe the steps taken to mitigate the risk of plagiarism, to provide balanced perspectives, and to ensure the accuracy of all references. Editors need effective tools to detect content generated or altered by AI. These tools should be accessible to all editors, regardless of financial constraints, to uphold scientific integrity and minimize the risk of misinformation that could adversely affect public health. Many medical journal editors currently rely on manuscript-evaluation approaches that were not designed for AI-related challenges such as manipulated or plagiarized text, fabricated images, and papermill-generated documents. This puts them at a disadvantage in distinguishing legitimate from fabricated content, and the emergence of chatbots exacerbates the problem. Access to advanced tools that allow efficient and accurate evaluation of content is particularly vital for editors of medical journals, where misinformation can have severe consequences, including harm to patients.
 
Full-Text [PDF 412 kb]
Type of Study: Oral Presentation | Subject: Health Ethics Congress (11th) - Oral Presentation
Received: 2025/06/2 | Revised: 2026/01/12 | Accepted: 2024/12/22 | Published: 2024/12/22


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

© 2026, Tehran University of Medical Sciences, CC BY-NC 4.0
