HIDA Lecture: LLMs Between Idea Generation and Scientific Responsibility

Wednesday, 22.04.2026 · 10:00 am
online

Speaker: Charles Rathkopf, Forschungszentrum Jülich

Title: Hypotheses from the Black Box: LLMs Between Idea Generation and Scientific Responsibility

Abstract:

Artificial intelligence is increasingly used to automate core steps of scientific research – from data analysis and hypothesis generation to experimental design. Because many of the models involved are opaque, consequential decisions about time, resources, and funding often rely on algorithmic processes whose internal logic cannot be directly examined. In publicly funded science, this raises a fundamental question of responsibility.

This lecture argues that responsible AI use is not primarily a matter of transparency. What matters instead is the demonstrable reliability of scientific workflows – whether an AI-supported research process, when properly interpreted, reliably produces accurate results. Since the quality of complex neural networks cannot be adequately assessed by inspecting their internal mechanisms, reliability must be established at the level of the workflow as a whole, for example through theoretically grounded training data, robust validation studies, and systematic error analysis.

By contrasting two cases – protein structure prediction and neuroimaging-based psychiatric prediction – the lecture shows that the conditions for establishing such reliability vary significantly across domains. What counts as responsible practice is therefore not merely a technical issue, but a context-sensitive question of scientific ethics.

Register now!

Charles Rathkopf 

Dr. Charles Rathkopf is a Research Associate in the “Neuroethics and Ethics of AI” research group at Forschungszentrum Jülich and is affiliated with the University of Bonn. His work brings together philosophy of mind, neuroscience, and AI ethics. He studies how the cognitive capacities of artificial systems can be properly evaluated, the epistemic risks associated with large language models, and the conditions under which AI systems may exhibit deception-like behavior. A further focus of his research concerns how reliability and responsibility can be justified in the scientific use of AI. 
