TY - JOUR
AU - Raz, Aviad
AU - Heinrichs, Bert
AU - Avnoon, Netta
AU - Eyal, Gil
AU - Inbar, Yael
TI - Prediction and explainability in AI: Striking a new balance?
JO - Big Data & Society
VL - 11
IS - 1
SN - 2053-9517
CY - München
PB - GBI-Genios Deutsche Wirtschaftsdatenbank GmbH
M1 - FZJ-2024-01746
SP - 20539517241235871
PY - 2024
AB - The debate regarding prediction and explainability in artificial intelligence (AI) centers on the trade-off between achieving high-performance, accurate models and the ability to understand and interpret the decision-making process of those models. In recent years, this debate has gained significant attention due to the increasing adoption of AI systems in various domains, including healthcare, finance, and criminal justice. While prediction and explainability are desirable goals in principle, the recent spread of highly accurate yet opaque machine learning (ML) algorithms has highlighted the trade-off between the two, marking this debate as an interdisciplinary, inter-professional arena for negotiating expertise. There is no longer agreement about what the “default” balance of prediction and explainability should be, with various positions reflecting claims for professional jurisdiction. Overall, there appears to be a growing schism between the regulatory and ethics-based call for explainability as a condition for trustworthy AI, and how it is being designed, assimilated, and negotiated. The impetus for writing this commentary comes from recent suggestions that explainability is overrated, including the argument that explainability is not guaranteed in human healthcare experts either. To shed light on this debate, its premises, and its recent twists, we provide an overview of key arguments representing different frames, focusing on AI in healthcare.
LB - PUB:(DE-HGF)16
UR - <Go to ISI>://WOS:001175848600001
DO - 10.1177/20539517241235871
UR - https://juser.fz-juelich.de/record/1023670
ER -