001049202 001__ 1049202
001049202 005__ 20251217202227.0
001049202 0247_ $$2datacite_doi$$a10.34734/FZJ-2025-05284
001049202 037__ $$aFZJ-2025-05284
001049202 041__ $$aEnglish
001049202 1001_ $$0P:(DE-Juel1)200005$$aZhao, Xuan$$b0$$ufzj
001049202 1112_ $$aICML 2025 AIW$$cVancouver$$d2025-07-13 - 2025-07-19$$wCanada
001049202 245__ $$aClassifier Reconstruction Through Counterfactual-Aware Wasserstein Prototypes
001049202 260__ $$c2025
001049202 3367_ $$033$$2EndNote$$aConference Paper
001049202 3367_ $$2DataCite$$aOther
001049202 3367_ $$2BibTeX$$aINPROCEEDINGS
001049202 3367_ $$2DRIVER$$aconferenceObject
001049202 3367_ $$2ORCID$$aLECTURE_SPEECH
001049202 3367_ $$0PUB:(DE-HGF)6$$2PUB:(DE-HGF)$$aConference Presentation$$bconf$$mconf$$s1765992431_17930$$xOther
001049202 520__ $$aCounterfactual explanations provide actionable insights by identifying minimal input changes required to achieve a desired model prediction. Beyond their interpretability benefits, counterfactuals can also be leveraged for model reconstruction, where a surrogate model is trained to replicate the behavior of a target model. In this work, we demonstrate that model reconstruction can be significantly improved by recognizing that counterfactuals, which typically lie close to the decision boundary, can serve as informative—though less representative—samples for both classes. This is particularly beneficial in settings with limited access to labeled data. We propose a method that integrates original data samples with counterfactuals to approximate class prototypes using the Wasserstein barycenter, thereby preserving the underlying distributional structure of each class. This approach enhances the quality of the surrogate model and mitigates the issue of decision boundary shift, which commonly arises when counterfactuals are naively treated as ordinary training instances. Empirical results across multiple datasets show that our method improves fidelity between the surrogate and target models, validating its effectiveness.
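Illustrative note (not part of the record): the abstract describes approximating each class prototype as a Wasserstein barycenter of the original class samples and the counterfactuals assigned to that class, then using the prototypes to train a surrogate of the target model. The sketch below is a minimal, hypothetical illustration of that idea using the POT (Python Optimal Transport) library's free-support barycenter; the counterfactual down-weighting factor, prototype support size, and the surrogate-training step are assumptions for illustration and are not taken from the paper.

    import numpy as np
    import ot  # POT: Python Optimal Transport

    def wasserstein_prototype(class_samples, counterfactuals, cf_weight=0.5,
                              n_support=32, seed=0):
        # Approximate a class prototype (a small point support) as the
        # free-support 2-Wasserstein barycenter of two empirical measures:
        # the original class samples and the counterfactuals mapped to this class.
        # cf_weight < 1 down-weights the boundary-near, less representative
        # counterfactuals (illustrative choice, not from the source).
        rng = np.random.default_rng(seed)
        X_cls = np.asarray(class_samples, dtype=float)
        X_cf = np.asarray(counterfactuals, dtype=float)
        measures_locations = [X_cls, X_cf]
        measures_weights = [np.full(len(X_cls), 1.0 / len(X_cls)),
                            np.full(len(X_cf), 1.0 / len(X_cf))]
        # Initialize the barycenter support with a random subset of real samples.
        init_idx = rng.choice(len(X_cls), size=min(n_support, len(X_cls)),
                              replace=False)
        X_init = X_cls[init_idx]
        # Relative mass of the two measures in the barycenter.
        weights = np.array([1.0, cf_weight]) / (1.0 + cf_weight)
        return ot.lp.free_support_barycenter(measures_locations, measures_weights,
                                             X_init, weights=weights)

    # Usage (hypothetical): compute one prototype per class from queried samples
    # plus their counterfactuals, label the prototype points with their class,
    # and fit the surrogate classifier on this prototype set.

This sketch only shows the prototype-construction step; fidelity evaluation between surrogate and target models, as reported in the abstract, is outside its scope.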
001049202 536__ $$0G:(DE-HGF)POF4-5112$$a5112 - Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001049202 7001_ $$0P:(DE-Juel1)199019$$aCao, Zhuo$$b1$$ufzj
001049202 7001_ $$0P:(DE-Juel1)184644$$aBangun, Arya$$b2$$ufzj
001049202 7001_ $$0P:(DE-Juel1)129394$$aScharr, Hanno$$b3$$ufzj
001049202 7001_ $$0P:(DE-Juel1)188313$$aAssent, Ira$$b4$$ufzj
001049202 8564_ $$uhttps://juser.fz-juelich.de/record/1049202/files/150_Classifier_Reconstruction_.pdf$$yOpenAccess
001049202 909CO $$ooai:juser.fz-juelich.de:1049202$$popenaire$$popen_access$$pVDB$$pdriver
001049202 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)200005$$aForschungszentrum Jülich$$b0$$kFZJ
001049202 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)199019$$aForschungszentrum Jülich$$b1$$kFZJ
001049202 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)184644$$aForschungszentrum Jülich$$b2$$kFZJ
001049202 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)129394$$aForschungszentrum Jülich$$b3$$kFZJ
001049202 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)188313$$aForschungszentrum Jülich$$b4$$kFZJ
001049202 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5112$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001049202 9141_ $$y2025
001049202 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001049202 920__ $$lyes
001049202 9201_ $$0I:(DE-Juel1)IAS-8-20210421$$kIAS-8$$lDatenanalyse und Maschinenlernen$$x0
001049202 980__ $$aconf
001049202 980__ $$aVDB
001049202 980__ $$aUNRESTRICTED
001049202 980__ $$aI:(DE-Juel1)IAS-8-20210421
001049202 9801_ $$aFullTexts