TY - CONF
AU - Emam, Ahmed
AU - Farag, Mohamed
AU - Kierdorf, Jana
AU - Klingbeil, Lasse
AU - Rascher, Uwe
AU - Roscher, Ribana
A3 - Del Bue, Alessio
A3 - Canton, Cristian
A3 - Pont-Tuset, Jordi
A3 - Tommasi, Tatiana
TI - A Framework for Enhanced Decision Support in Digital Agriculture Using Explainable Machine Learning
VL - 15625
CY - Cham
PB - Springer Nature Switzerland
M1 - FZJ-2026-00388
SN - 978-3-031-91834-6
T3 - Lecture Notes in Computer Science
SP - 31
EP - 45
PY - 2025
AB - Model explainability, which integrates interpretability with domain knowledge, is crucial for assessing the reliability of machine learning frameworks, particularly in enhancing decision support in digital agriculture. Efforts have been made to establish a clear definition of explainability and develop new interpretability techniques. Assessing interpretability is essential to fully harness the potential of explainability. In this paper, we compare Gradient-weighted Class Activation Mapping, an interpretability technique for Convolutional Neural Networks, with Raw Attentions for Vision Transformers. We analyze both methods in an image-based task to classify the harvest-readiness of cauliflower plants. By developing a model-agnostic framework to compare models based on explainability, we pave the way for more reliable digital agriculture systems.
T2 - Computer Vision – ECCV 2024 Workshop
Y2 - 29 Sep 2024 - 4 Oct 2024
M2 - Milan, Italy
LB - PUB:(DE-HGF)8 ; PUB:(DE-HGF)7
DO - 10.1007/978-3-031-91835-3_3
UR - https://juser.fz-juelich.de/record/1050636
ER -