TY  - JOUR
AU  - Dagaev, Nikolay
AU  - Roads, Brett D.
AU  - Luo, Xiaoliang
AU  - Barry, Daniel N.
AU  - Patil, Kaustubh R.
AU  - Love, Bradley C.
TI  - A too-good-to-be-true prior to reduce shortcut reliance
JO  - Pattern Recognition Letters
VL  - 166
SN  - 0167-8655
CY  - Amsterdam [etc.]
PB  - Elsevier
M1  - FZJ-2024-02496
SP  - 164
EP  - 171
PY  - 2023
AB  - Despite their impressive performance in object recognition and other tasks under standard testing conditions, deep networks often fail to generalize to out-of-distribution (o.o.d.) samples. One cause for this shortcoming is that modern architectures tend to rely on "shortcuts", superficial features that correlate with categories without capturing deeper invariants that hold across contexts. Real-world concepts often possess a complex structure that can vary superficially across contexts, which can make the most intuitive and promising solutions in one context not generalize to others. One potential way to improve o.o.d. generalization is to assume simple solutions are unlikely to be valid across contexts and avoid them, which we refer to as the too-good-to-be-true prior. A low-capacity network (LCN) with a shallow architecture should only be able to learn surface relationships, including shortcuts. We find that LCNs can serve as shortcut detectors. Furthermore, an LCN’s predictions can be used in a two-stage approach to encourage a high-capacity network (HCN) to rely on deeper invariant features that should generalize broadly. In particular, items that the LCN can master are downweighted when training the HCN. Using a modified version of the CIFAR-10 dataset in which we introduced shortcuts, we found that the two-stage LCN-HCN approach reduced reliance on shortcuts and facilitated o.o.d. generalization.
LB  - PUB:(DE-HGF)16
C6  - 37915616
UR  - <Go to ISI>://WOS:000935348300001
DO  - 10.1016/j.patrec.2022.12.010
UR  - https://juser.fz-juelich.de/record/1024830
ER  -