%0 Conference Paper
%A Hamdan, Sami
%A Love, Bradley C.
%A Polier, Georg von
%A Weis, Susanne
%A Schwender, Holger
%A Eickhoff, Simon
%A Patil, Kaustubh
%T Confound-Leakage: Confound Removal In Machine Learning Leads To Leakage
%M FZJ-2023-03045
%D 2023
%Z Acknowledgments: This work was partly supported by the Helmholtz-AI project DeGen, the Helmholtz Portfolio Theme ‘Supercomputing and Modeling for the Human Brain’, and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). Poster: Pitfalls of Confound Regression in Machine Learning. The poster title differs from the record title, but this was acceptable for the OHBM organizers.
%X Modern Machine Learning (ML) approaches are now regularly employed for individual-level prediction, e.g. in personalized medicine. Particularly in such critical decision-making, it is of utmost importance not only to achieve high accuracy but also to trust that models rely on actual feature-target relationships [1, 2]. To this end, it is crucial to consider confounding variables, as they can obscure the feature-target relationship. For instance, a researcher might want to identify a biomarker showing high classification accuracy between controls and patients. However, the model might have just learned simpler confounders such as age or sex as a good proxy of the disease [3]. To counteract such unwanted confounding effects, investigators often use linear models to remove confounding variables from each feature separately before employing ML. While this confound regression (CR) approach is popular [4], its pitfalls, especially when paired with non-linear ML models, are not well understood.
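A minimal sketch of the confound regression (CR) step described in the abstract, assuming scikit-learn and NumPy; the variable names and the random data are illustrative assumptions, not the authors' implementation. A linear model removes the confounds from each feature separately, and a non-linear learner is then trained on the residuals, which is the combination the poster flags as prone to leakage.

# Sketch of confound regression (CR): residualize each feature on the
# confounds with a linear model, then fit a non-linear ML model.
# Illustrative only; data and names are assumptions, not the authors' code.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # features
confounds = rng.normal(size=(200, 2))   # e.g. age and sex
y = rng.integers(0, 2, size=200)        # binary target (controls vs. patients)

# Remove the confounds from every feature separately and keep the residuals.
X_residual = np.empty_like(X)
for j in range(X.shape[1]):
    lr = LinearRegression().fit(confounds, X[:, j])
    X_residual[:, j] = X[:, j] - lr.predict(confounds)

# A non-linear learner trained on the confound-removed features; the poster
# argues this pairing can introduce leakage.
clf = RandomForestClassifier(random_state=0).fit(X_residual, y)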
%B Organization for Human Brain Mapping (OHBM)
%C 22 Jul 2023 - 26 Jul 2023, Montreal (Canada)
%F PUB:(DE-HGF)24
%9 Poster
%R 10.34734/FZJ-2023-03045
%U https://juser.fz-juelich.de/record/1010405