| From Transparency to Reliability: Using AI Responsibly in Science |
| Lecture (Invited) | FZJ-2026-02284 |
2026
Abstract: Artificial intelligence is increasingly used to automate core steps of scientific research – from data analysis and hypothesis generation to experimental design. Because many of the models involved are opaque, consequential decisions about time, resources, and funding often rely on algorithmic processes whose internal logic cannot be directly examined. In publicly funded science, this raises a fundamental question of responsibility.

This lecture argues that responsible AI use is not primarily a matter of transparency. What matters instead is the demonstrable reliability of scientific workflows – whether an AI-supported research process, when properly interpreted, reliably produces accurate results. Since the quality of complex neural networks cannot be adequately assessed by inspecting their internal mechanisms, reliability must be established at the level of the workflow as a whole, for example through theoretically grounded training data, robust validation studies, and systematic error analysis.

By contrasting two cases – protein structure prediction and neuroimaging-based psychiatric prediction – the lecture shows that the conditions for establishing such reliability vary significantly across domains. What counts as responsible practice is therefore not merely a technical issue, but a context-sensitive question of scientific ethics.