
Institute guidelines for the use of AI

For the use of AI in theses, the Institute for Social Policy refers to the following guidelines of the publisher Elsevier (as of 6 December 2024):

“Where authors use generative AI and AI-assisted technologies in the writing process, these technologies should only be used to improve readability and language of the work and not to replace key authoring tasks such as producing scientific, pedagogic, or medical insights, drawing scientific conclusions, or providing clinical recommendations. Applying the technology should be done with human oversight and control and all work should be reviewed and edited carefully, because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased. The authors are ultimately responsible and accountable for the contents of the work. 

Authors should disclose in their manuscript the use of generative AI and AI-assisted technologies and a statement will appear in the published work. 

[…]

Authors should not list generative AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans.” 

The complete guidelines from Elsevier can be found here. 

Additions by the Institute:
  1. Key authoring tasks must not be delegated to AI. However, AI may be used to assist with these tasks, for example to support an initial screening of existing research. Any such supporting input from AI must be critically reviewed, reflected upon, and elaborated by the authors, who bear full responsibility for the content of their manuscript.

  2. Like other aids, the use of AI must be explicitly declared in the list of aids attached to the work. Documentation of the interaction with the AI (in particular the prompts used) must be made available upon request.

  3.