Publication:

Enhancing explainability in real-world scenarios: Towards a robust stability measure for local interpretability

Date

2025-05-15
dc.contributor.author: Sepulveda, Eduardo
dc.contributor.author: Vandervorst, Felix
dc.contributor.author: Baesens, Bart
dc.contributor.author: Verdonck, Tim
dc.contributor.imecauthor: Verdonck, Tim
dc.contributor.orcidimec: Verdonck, Tim::0000-0003-1105-2028
dc.date.accessioned: 2025-03-16T17:40:12Z
dc.date.available: 2025-03-16T17:40:12Z
dc.date.issued: 2025-05-15
dc.description.abstract: Machine learning is increasingly focused on improving performance metrics and providing explanations for model decisions. However, this focus often overshadows the importance of maintaining prediction stability for similar instances, an essential factor for user trust in a system's reliability and consistency. This study extends the ranking stability measure by integrating SHAP (SHapley Additive exPlanations) values to enhance prediction interpretability in anomaly detection. Unlike existing stability measures that focus on predictions or treat all features as equally important, our approach systematically evaluates the stability of local interpretability. By introducing a novel weighting mechanism that prioritizes variations in the top-ranked features over lower-ranked ones, our method provides a refined assessment of interpretability stability, making it particularly valuable for high-stakes domains such as fraud detection and risk assessment. Our approach, designed to boost model reliability and trust, addresses the critical need for understanding decision-contributing factors in business applications, offering organizations a comprehensive and robust solution for effectively deploying machine learning. Through extensive comparative evaluations on both synthetic and real-world datasets, our method demonstrates superior performance in ensuring stable and reliable feature-importance rankings compared to prior approaches. Our methodology not only enhances model performance and interpretability but also bridges the gap between complex machine learning models and practical usability, fostering confidence and trust among non-specialists. Our code is publicly available for reproducibility and to encourage further research in this field.
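The abstract describes comparing SHAP-based feature rankings for similar instances, with a weighting mechanism that penalizes disagreement at top ranks more than at lower ranks. The following is a minimal illustrative sketch of that idea only; the geometric `decay` weighting, the function names, and the agreement criterion are assumptions for illustration, not the paper's exact measure:

```python
import numpy as np

def rank_features(attributions):
    # Rank features by absolute attribution magnitude, most important first.
    return np.argsort(-np.abs(attributions))

def weighted_rank_stability(attr_a, attr_b, decay=0.8):
    """Score agreement between two local-attribution rankings in [0, 1],
    weighting top ranks more heavily (geometric decay is an assumed choice)."""
    ranks_a = rank_features(attr_a)
    ranks_b = rank_features(attr_b)
    weights = decay ** np.arange(len(ranks_a))   # rank 0 weighs most
    agreement = (ranks_a == ranks_b).astype(float)
    return float((weights * agreement).sum() / weights.sum())

# Two similar instances with nearly identical attributions ...
a = np.array([0.9, 0.5, 0.1, 0.05])
b = np.array([0.85, 0.55, 0.12, 0.04])
# ... and one where the two top-ranked features swap places.
c = np.array([0.5, 0.9, 0.1, 0.05])

stable = weighted_rank_stability(a, b)    # identical rankings -> 1.0
unstable = weighted_rank_stability(a, c)  # top-rank swap is penalized heavily
```

Because the weights concentrate on the first ranks, swapping the two most important features costs far more than swapping two tail features, matching the abstract's stated priority on top-ranked variations.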
dc.description.wosFundingText: This research received funding from the Agencia Nacional de Investigacion y Desarrollo (ANID), Chile, ANID BECAS/DOCTORADO EXTRANJERO 72220229.
dc.identifier.doi: 10.1016/j.eswa.2025.126922
dc.identifier.issn: 0957-4174
dc.identifier.uri: https://imec-publications.be/handle/20.500.12860/45406
dc.publisher: PERGAMON-ELSEVIER SCIENCE LTD
dc.source.beginpage: 126922
dc.source.journal: EXPERT SYSTEMS WITH APPLICATIONS
dc.source.numberofpages: 10
dc.source.volume: 274
dc.title: Enhancing explainability in real-world scenarios: Towards a robust stability measure for local interpretability
dc.type: Journal article
dspace.entity.type: Publication