001     1023670
005     20250204113808.0
024 7 _ |a 10.1177/20539517241235871
|2 doi
024 7 _ |a 10.34734/FZJ-2024-01746
|2 datacite_doi
024 7 _ |a WOS:001175848600001
|2 WOS
037 _ _ |a FZJ-2024-01746
082 _ _ |a 004
100 1 _ |a Raz, Aviad
|0 P:(DE-HGF)0
|b 0
|e Corresponding author
245 _ _ |a Prediction and explainability in AI: Striking a new balance?
260 _ _ |a München
|c 2024
|b GBI-Genios Deutsche Wirtschaftsdatenbank GmbH
336 7 _ |a article
|2 DRIVER
336 7 _ |a Output Types/Journal article
|2 DataCite
336 7 _ |a Journal Article
|b journal
|m journal
|0 PUB:(DE-HGF)16
|s 1715066673_19357
|2 PUB:(DE-HGF)
336 7 _ |a ARTICLE
|2 BibTeX
336 7 _ |a JOURNAL_ARTICLE
|2 ORCID
336 7 _ |a Journal Article
|0 0
|2 EndNote
520 _ _ |a The debate regarding prediction and explainability in artificial intelligence (AI) centers around the trade-off between achieving high-performance accurate models and the ability to understand and interpret the decision-making process of those models. In recent years, this debate has gained significant attention due to the increasing adoption of AI systems in various domains, including healthcare, finance, and criminal justice. While prediction and explainability are desirable goals in principle, the recent spread of high-accuracy yet opaque machine learning (ML) algorithms has highlighted the trade-off between the two, marking this debate as an inter-disciplinary, inter-professional arena for negotiating expertise. There is no longer an agreement about what should be the “default” balance of prediction and explainability, with various positions reflecting claims for professional jurisdiction. Overall, there appears to be a growing schism between the regulatory and ethics-based call for explainability as a condition for trustworthy AI, and how it is being designed, assimilated, and negotiated. The impetus for writing this commentary comes from recent suggestions that explainability is overrated, including the argument that explainability is not guaranteed in human healthcare experts either. To shed light on this debate, its premises, and its recent twists, we provide an overview of key arguments representing different frames, focusing on AI in healthcare.
536 _ _ |a 5255 - Neuroethics and Ethics of Information (POF4-525)
|0 G:(DE-HGF)POF4-5255
|c POF4-525
|f POF IV
|x 0
588 _ _ |a Dataset connected to DataCite
700 1 _ |a Heinrichs, Bert
|0 P:(DE-Juel1)166268
|b 1
700 1 _ |a Avnoon, Netta
|0 P:(DE-HGF)0
|b 2
700 1 _ |a Eyal, Gil
|0 P:(DE-HGF)0
|b 3
700 1 _ |a Inbar, Yael
|0 P:(DE-HGF)0
|b 4
773 _ _ |a 10.1177/20539517241235871
|g Vol. 11, no. 1, p. 20539517241235871
|0 PERI:(DE-600)2773948-X
|n 1
|p 20539517241235871
|t Big data & society
|v 11
|y 2024
|x 2053-9517
856 4 _ |y OpenAccess
|u https://juser.fz-juelich.de/record/1023670/files/raz-et-al-2024-prediction-and-explainability-in-ai-striking-a-new-balance.pdf
856 4 _ |y OpenAccess
|x icon
|u https://juser.fz-juelich.de/record/1023670/files/raz-et-al-2024-prediction-and-explainability-in-ai-striking-a-new-balance.gif?subformat=icon
856 4 _ |y OpenAccess
|x icon-1440
|u https://juser.fz-juelich.de/record/1023670/files/raz-et-al-2024-prediction-and-explainability-in-ai-striking-a-new-balance.jpg?subformat=icon-1440
856 4 _ |y OpenAccess
|x icon-180
|u https://juser.fz-juelich.de/record/1023670/files/raz-et-al-2024-prediction-and-explainability-in-ai-striking-a-new-balance.jpg?subformat=icon-180
856 4 _ |y OpenAccess
|x icon-640
|u https://juser.fz-juelich.de/record/1023670/files/raz-et-al-2024-prediction-and-explainability-in-ai-striking-a-new-balance.jpg?subformat=icon-640
909 C O |o oai:juser.fz-juelich.de:1023670
|p openaire
|p open_access
|p VDB
|p driver
|p dnbdelivery
910 1 _ |a Department of Sociology & Anthropology, Ben-Gurion University of the Negev, Beer-Sheba, Israel https://orcid.org/0000-0001-6268-0409 aviadraz@bgu.ac.il
|0 I:(DE-HGF)0
|b 0
|6 P:(DE-HGF)0
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 1
|6 P:(DE-Juel1)166268
913 1 _ |a DE-HGF
|b Key Technologies
|l Natural, Artificial and Cognitive Information Processing
|1 G:(DE-HGF)POF4-520
|0 G:(DE-HGF)POF4-525
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Decoding Brain Organization and Dysfunction
|9 G:(DE-HGF)POF4-5255
|x 0
914 1 _ |y 2024
915 _ _ |a DBCoverage
|0 StatID:(DE-HGF)0160
|2 StatID
|b Essential Science Indicators
|d 2023-08-22
915 _ _ |a Creative Commons Attribution-NonCommercial-NoDerivs CC BY-NC-ND 4.0
|0 LIC:(DE-HGF)CCBYNCND4
|2 HGFVOC
915 _ _ |a Fees
|0 StatID:(DE-HGF)0700
|2 StatID
|d 2023-08-22
915 _ _ |a OpenAccess
|0 StatID:(DE-HGF)0510
|2 StatID
915 _ _ |a Article Processing Charges
|0 StatID:(DE-HGF)0561
|2 StatID
|d 2023-08-22
915 _ _ |a Nationallizenz
|0 StatID:(DE-HGF)0420
|2 StatID
|d 2025-01-07
|w ger
915 _ _ |a JCR
|0 StatID:(DE-HGF)0100
|2 StatID
|b BIG DATA SOC : 2022
|d 2025-01-07
915 _ _ |a DBCoverage
|0 StatID:(DE-HGF)0200
|2 StatID
|b SCOPUS
|d 2025-01-07
915 _ _ |a DBCoverage
|0 StatID:(DE-HGF)0300
|2 StatID
|b Medline
|d 2025-01-07
915 _ _ |a DBCoverage
|0 StatID:(DE-HGF)0501
|2 StatID
|b DOAJ Seal
|d 2024-04-04T14:31:58Z
915 _ _ |a DBCoverage
|0 StatID:(DE-HGF)0500
|2 StatID
|b DOAJ
|d 2024-04-04T14:31:58Z
915 _ _ |a Peer Review
|0 StatID:(DE-HGF)0030
|2 StatID
|b DOAJ : Double anonymous peer review
|d 2024-04-04T14:31:58Z
915 _ _ |a DBCoverage
|0 StatID:(DE-HGF)0199
|2 StatID
|b Clarivate Analytics Master Journal List
|d 2025-01-07
915 _ _ |a DBCoverage
|0 StatID:(DE-HGF)1180
|2 StatID
|b Current Contents - Social and Behavioral Sciences
|d 2025-01-07
915 _ _ |a DBCoverage
|0 StatID:(DE-HGF)0130
|2 StatID
|b Social Sciences Citation Index
|d 2025-01-07
915 _ _ |a IF >= 5
|0 StatID:(DE-HGF)9905
|2 StatID
|b BIG DATA SOC : 2022
|d 2025-01-07
920 1 _ |0 I:(DE-Juel1)INM-7-20090406
|k INM-7
|l Gehirn & Verhalten
|x 0
980 _ _ |a journal
980 _ _ |a VDB
980 _ _ |a UNRESTRICTED
980 _ _ |a I:(DE-Juel1)INM-7-20090406
980 1 _ |a FullTexts

