Deep learning models in science: some risks and opportunities
Talk (non-conference) (Other) | FZJ-2024-05893
2024
Abstract: Deep neural networks offer striking improvements in predictive accuracy in many areas of science, and in biological sequence modeling in particular. But that predictive power comes at a steep price: we must give up on interpretability. In this talk, I argue, contrary to many voices in AI ethics calling for more interpretable models, that this is a price we should be willing to pay.