001     1031969
005     20241021205501.0
037 _ _ |a FZJ-2024-05893
100 1 _ |a Rathkopf, Charles
|0 P:(DE-Juel1)176538
|b 0
|u fzj
111 2 _ |a Helmholtz workshop on the ethics of AI in scientific practice
|c Jülich/Düsseldorf
|d 2024-06-11 -
|w Germany
245 _ _ |a Deep learning models in science: some risks and opportunities
260 _ _ |c 2024
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a Other
|2 DataCite
336 7 _ |a INPROCEEDINGS
|2 BibTeX
336 7 _ |a LECTURE_SPEECH
|2 ORCID
336 7 _ |a Talk (non-conference)
|b talk
|m talk
|0 PUB:(DE-HGF)31
|s 1729486553_25924
|2 PUB:(DE-HGF)
|x Other
336 7 _ |a Other
|2 DINI
520 _ _ |a Deep neural networks offer striking improvements in predictive accuracy in many areas of science, and in biological sequence modeling in particular. But that predictive power comes at a steep price: we must give up on interpretability. In this talk I argue, contrary to many voices in AI ethics calling for more interpretable models, that this is a price we should be willing to pay.
536 _ _ |a 5255 - Neuroethics and Ethics of Information (POF4-525)
|0 G:(DE-HGF)POF4-5255
|c POF4-525
|f POF IV
|x 0
909 C O |o oai:juser.fz-juelich.de:1031969
|p VDB
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)176538
913 1 _ |a DE-HGF
|b Key Technologies
|l Natural, Artificial and Cognitive Information Processing
|1 G:(DE-HGF)POF4-520
|0 G:(DE-HGF)POF4-525
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Decoding Brain Organization and Dysfunction
|9 G:(DE-HGF)POF4-5255
|x 0
914 1 _ |y 2024
920 1 _ |0 I:(DE-Juel1)INM-7-20090406
|k INM-7
|l Brain and Behaviour
|x 0
980 _ _ |a talk
980 _ _ |a VDB
980 _ _ |a I:(DE-Juel1)INM-7-20090406
980 _ _ |a UNRESTRICTED

