Generative AI

BKM Journal: Policy on the Use of AI by Authors

Based on PLOS ONE and Elsevier AI policies
References:
PLOS ONE. Artificial intelligence (AI) tools in manuscript preparation. https://journals.plos.org/digitalhealth/s/ethical-publishing-practice 

Elsevier. Policy on the use of generative AI and AI-assisted technologies. https://www.elsevier.com/en-gb/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier 

1. Authorship and Responsibility

  • AI cannot be listed as an author. AI tools, including large language models and generative AI, cannot take responsibility for the accuracy, integrity, or originality of the work.
  • Authors are fully responsible for all content in their manuscripts, including text, figures, tables, and any material generated or assisted by AI. Authors must verify the factual correctness and ethical compliance of all AI-assisted content.
  • Human oversight is mandatory. AI-generated text, analysis, or figures must be critically reviewed, edited, and validated by the authors.

2. Transparency and Disclosure

  • Authors must disclose any AI use in manuscript preparation. Disclosure must be specific, clear, and transparent, and should include:
    a) Name(s) of the AI tool(s) used (e.g., ChatGPT, Bard, Grammarly, GPT‑4, MidJourney);
    b) Purpose(s) of AI use (e.g., language refinement, drafting sections, brainstorming, data analysis, figure generation);
    c) Extent of AI contribution and author oversight.
  • Disclosure should appear in one of the following:
    a) Acknowledgments section (for general writing assistance);
    b) Methods section (if AI contributed to data processing, analysis, or figure generation);
    c) Dedicated “Declaration of AI Use” immediately before references.

Example statement:
“Portions of this manuscript were drafted and edited using [AI tool, e.g., ChatGPT‑4]. The authors used this tool for [specific purpose, e.g., improving grammar and clarity in the Introduction]. All AI-generated content was reviewed, edited, and validated by the authors, who take full responsibility for the final content.”

3. Permissible Uses (with Disclosure)
AI may assist authors in:

  • Language and grammar refinement: Improving readability, grammar, spelling, and sentence structure;
  • Drafting support: Generating initial drafts of non-critical sections, fully revised by authors;
  • Brainstorming and idea generation: Outlining or conceptualizing ideas;
  • Data analysis and visualization assistance: Only if fully verified and reproducible by authors; methods must be described in the Methods section;
  • Literature summarization: AI-assisted summaries must be fact-checked and properly cited.

4. Prohibited Uses
AI must not be used to:

  • Fabricate data, results, or references;
  • Conduct core intellectual work (e.g., designing experiments, drawing conclusions);
  • Commit plagiarism by presenting AI-generated content as fully original without attribution;
  • Violate confidentiality, including uploading unpublished manuscripts to public AI platforms;
  • Misrepresent research, methods, or results.

5. Accountability

  • Authors retain full responsibility for all manuscript content, regardless of AI assistance.
  • Use of AI does not reduce the ethical or professional obligations of authors to ensure accuracy, originality, and compliance with publication standards.

6. Consequences of Misuse

Failure to comply with AI policies may result in:
  • Rejection of submitted manuscripts;
  • Retraction of published articles;
  • Banning authors from future submissions;
  • Notification to the authors’ institution or relevant ethics committees.


BKM Journal: Policy on the Use of AI by Editors and Reviewers

Based on PLOS ONE policies on AI in peer review
Reference: PLOS ONE. Artificial intelligence (AI) tools in peer review. https://journals.plos.org/digitalhealth/s/ethical-publishing-practice 

1. Principles
Editorial decisions and peer review must be grounded in independent human judgment. AI tools may not replace scientific assessment, and confidentiality, integrity, and quality must be preserved at all times.

2. Manuscript Confidentiality
Editors and reviewers must treat all submitted manuscripts as strictly confidential. Uploading manuscript content or review material to public AI services (e.g., ChatGPT, Bard) is strictly prohibited, as it violates confidentiality and intellectual property rights.

3. Prohibited AI Use
Reviewers must not use AI to:
a) Evaluate scientific content;
b) Generate review reports or editorial recommendations;
c) Make decisions on acceptance or rejection.
Peer review must reflect direct human expertise and assessment.

4. Permitted AI Use
AI may be used only for language editing, grammar correction, or clarity improvement in review reports, provided that:
a) AI is not used to assess the manuscript’s scientific content;
b) Use of AI is disclosed explicitly in review comments or editorial correspondence.
Example disclosure:
“AI tools were used for language editing only; no AI was used to evaluate the scientific content or inform editorial decisions.”

5. Accountability
Editors and reviewers retain full responsibility for all judgments, recommendations, and decisions. AI use does not reduce professional accountability.

6. Consequences of Misuse
Violating this policy — including uploading manuscripts to AI tools or using AI to assess scientific content without disclosure — is considered a breach of publication ethics and may result in:
a) Revocation of reviewer privileges;
b) Internal review of editorial processes for confidentiality compliance;
c) Additional administrative action consistent with journal policy.