A Framework for Enhanced Decision Support in Digital Agriculture Using Explainable Machine Learning
Contribution to a conference proceedings / contribution to a book (FZJ-2026-00388)
2025
Springer Nature Switzerland
Cham
ISBN: 978-3-031-91834-6 (print), 978-3-031-91835-3 (electronic)
Please use a persistent identifier in citations: doi:10.1007/978-3-031-91835-3_3
Abstract: Model explainability, which integrates interpretability with domain knowledge, is crucial for assessing the reliability of machine learning frameworks, particularly for enhancing decision support in digital agriculture. Prior work has sought to establish a clear definition of explainability and to develop new interpretability techniques, and assessing interpretability is essential to fully harness the potential of explainability. In this paper, we compare Gradient-weighted Class Activation Mapping (Grad-CAM), an interpretability technique for Convolutional Neural Networks, with raw attention maps for Vision Transformers. We analyze both methods on an image-based task: classifying the harvest-readiness of cauliflower plants. By developing a model-agnostic framework for comparing models on the basis of explainability, we pave the way for more reliable digital agriculture systems.
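For context on the first of the two techniques the abstract names, the following is a minimal, hypothetical Grad-CAM sketch in PyTorch. It is not the paper's implementation: the ResNet-18 backbone, the choice of layer4 as the hooked layer, and the random input tensor are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Hypothetical Grad-CAM sketch (after Selvaraju et al.); not the paper's code.
model = resnet18(weights=None)  # placeholder CNN; the paper's backbone is not given here
model.eval()

store = {}

def save_activation(module, inputs, output):
    # Keep the feature maps of the hooked convolutional stage.
    store["act"] = output

def save_gradient(module, grad_input, grad_output):
    # Keep the gradient of the class score w.r.t. those feature maps.
    store["grad"] = grad_output[0]

model.layer4.register_forward_hook(save_activation)       # layer choice is an assumption
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)       # stand-in for a cauliflower image
scores = model(x)
cls = scores.argmax(dim=1).item()     # e.g. the "harvest-ready" class index
scores[0, cls].backward()

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum over channels, and keep only positive evidence for the class.
w = store["grad"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((w * store["act"]).sum(dim=1))        # (1, H', W')
cam = cam / (cam.max() + 1e-8)                     # normalize to [0, 1]
heatmap = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                        mode="bilinear", align_corners=False)
```

The second technique, raw attention for Vision Transformers, instead reads the attention weights of the forward pass directly (typically the CLS-token row of a chosen head or layer) and requires no backward pass; the paper's specific extraction choices are not stated in this record.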