001     1031976
005     20241213210707.0
037 _ _ |a FZJ-2024-05900
100 1 _ |a Rathkopf, Charles
|0 P:(DE-Juel1)176538
|b 0
|e Corresponding author
|u fzj
111 2 _ |a Uppsala Vienna AI Colloquium
|d 2024-10-25 -
|w online event
245 _ _ |a Hallucination, justification, and the role of generative AI in science
260 _ _ |c 2024
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a Other
|2 DataCite
336 7 _ |a INPROCEEDINGS
|2 BibTeX
336 7 _ |a LECTURE_SPEECH
|2 ORCID
336 7 _ |a Talk (non-conference)
|b talk
|m talk
|0 PUB:(DE-HGF)31
|s 1734070645_20223
|2 PUB:(DE-HGF)
|x Invited
336 7 _ |a Other
|2 DINI
520 _ _ |a Generative AI models are now being used to create synthetic climate data to improve the accuracy of climate models, and to construct virtual molecules that can then be synthesized for medical applications. But generative AI models are also notorious for their disposition to “hallucinate.” A recent Nature editorial defines hallucination as a process in which a generative model “makes up incorrect answers” (Jones, 2024). This raises an obvious puzzle: if generative models are prone to fabricating incorrect answers, how can they be used responsibly? In this talk I provide an analysis of the phenomenon of hallucination, giving special attention to diffusion models trained on scientific data (rather than transformers trained on natural language). The goal of the talk is to work out how generative AI can be made compatible with reliabilist epistemology. I draw a distinction between parameter-space and feature-space deviations from the training data, and argue that hallucination is a subset of the latter. This distinction allows us to recognize a class of cases in which the threat of hallucination simply does not arise. Among the remaining cases, I draw a further distinction between deviations that are discoverable by algorithmic means and those that are not. I then argue that if a deviation is discoverable by algorithmic means, reliability is not threatened, and that if it is not so discoverable, the generative model that produced it is relevantly similar to other discovery procedures and can therefore be accommodated within the reliabilist framework.
536 _ _ |a 5255 - Neuroethics and Ethics of Information (POF4-525)
|0 G:(DE-HGF)POF4-5255
|c POF4-525
|f POF IV
|x 0
909 C O |o oai:juser.fz-juelich.de:1031976
|p VDB
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)176538
913 1 _ |a DE-HGF
|b Key Technologies
|l Natural, Artificial and Cognitive Information Processing
|1 G:(DE-HGF)POF4-520
|0 G:(DE-HGF)POF4-525
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Decoding Brain Organization and Dysfunction
|9 G:(DE-HGF)POF4-5255
|x 0
914 1 _ |y 2024
920 1 _ |0 I:(DE-Juel1)INM-7-20090406
|k INM-7
|l Brain & Behaviour
|x 0
980 _ _ |a talk
980 _ _ |a VDB
980 _ _ |a I:(DE-Juel1)INM-7-20090406
980 _ _ |a UNRESTRICTED

