Policy on Generative AI-Assisted Tools

Impact Journal permits responsible use of AI tools to support writing and publication, with full author accountability.

Core principles

  1. Authors are responsible and accountable for the accuracy and integrity of their work.
  2. AI tools must be used responsibly and transparently.
  3. AI tools should supplement (not replace) human scholarly work and judgment.

Authorship rule

AI tools, including Large Language Models (LLMs), cannot be credited as authors, as they cannot take responsibility for the integrity, originality, and validity of scholarly work.

Allowed use

  • Language and clarity editing of the authors’ own original text (e.g., grammar, readability, formatting), with authors remaining fully responsible for the final content.

Not allowed

  • Using generative AI to produce substantial new scholarly content (e.g., core arguments, analyses, or results) without author verification and appropriate referencing.
  • Any use that undermines transparency, originality, or research integrity.

Disclosure

If AI tools were used in any meaningful way beyond basic spelling and grammar checks, authors should disclose:

  • the tool name (and version/model if available), and
  • what was assisted (e.g., language editing, formatting).

The Editor and Publisher reserve the right to request clarification and to determine whether a given use of an AI tool is acceptable.