001031969 001__ 1031969
001031969 005__ 20241021205501.0
001031969 037__ $$aFZJ-2024-05893
001031969 1001_ $$0P:(DE-Juel1)176538$$aRathkopf, Charles$$b0$$ufzj
001031969 1112_ $$aHelmholtz workshop on the ethics of AI in scientific practice$$cJülich/Düsseldorf$$d2024-06-11 - $$wGermany
001031969 245__ $$aDeep learning models in science: some risks and opportunities
001031969 260__ $$c2024
001031969 3367_ $$033$$2EndNote$$aConference Paper
001031969 3367_ $$2DataCite$$aOther
001031969 3367_ $$2BibTeX$$aINPROCEEDINGS
001031969 3367_ $$2ORCID$$aLECTURE_SPEECH
001031969 3367_ $$0PUB:(DE-HGF)31$$2PUB:(DE-HGF)$$aTalk (non-conference)$$btalk$$mtalk$$s1729486553_25924$$xOther
001031969 3367_ $$2DINI$$aOther
001031969 520__ $$aDeep neural networks offer striking improvements in predictive accuracy in many areas of science, and in biological sequence modeling in particular. But that predictive power comes at a steep price: we must give up on interpretability. In this talk, I argue - contrary to many voices in AI ethics calling for more interpretable models - that this is a price we should be willing to pay.
001031969 536__ $$0G:(DE-HGF)POF4-5255$$a5255 - Neuroethics and Ethics of Information (POF4-525)$$cPOF4-525$$fPOF IV$$x0
001031969 909CO $$ooai:juser.fz-juelich.de:1031969$$pVDB
001031969 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)176538$$aForschungszentrum Jülich$$b0$$kFZJ
001031969 9131_ $$0G:(DE-HGF)POF4-525$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5255$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vDecoding Brain Organization and Dysfunction$$x0
001031969 9141_ $$y2024
001031969 9201_ $$0I:(DE-Juel1)INM-7-20090406$$kINM-7$$lGehirn & Verhalten$$x0
001031969 980__ $$atalk
001031969 980__ $$aVDB
001031969 980__ $$aI:(DE-Juel1)INM-7-20090406
001031969 980__ $$aUNRESTRICTED