Talk (non-conference) (Other) FZJ-2024-05893

Deep learning models in science: some risks and opportunities



2024

Helmholtz workshop on the ethics of AI in scientific practice, Jülich/Düsseldorf, Germany, 11 Jun 2024

Abstract: Deep neural networks offer striking improvements in predictive accuracy in many areas of science, and in biological sequence modeling in particular. But that predictive power comes at a steep price: we must give up on interpretability. In this talk, I argue, contrary to many voices in AI ethics calling for more interpretable models, that this is a price we should be willing to pay.


Contributing Institute(s):
  1. Gehirn & Verhalten (INM-7)
Research Program(s):
  1. 5255 - Neuroethics and Ethics of Information (POF4-525)

Appears in the scientific report 2024

The record appears in these collections:
Document types > Presentations > Talks (non-conference)
Institute Collections > INM > INM-7
Workflow collections > Public records
Publications database

 Record created 2024-10-18, last modified 2024-10-21


