Title: Enhancing explainability in real-world scenarios: Towards a robust stability measure for local interpretability
Authors: Sepulveda, Eduardo; Vandervorst, Felix; Baesens, Bart; Verdonck, Tim
Type: Journal article
Journal: Expert Systems with Applications (ISSN 0957-4174), May 2025
DOI: 10.1016/j.eswa.2025.126922
Web of Science ID: WOS:001440912800001
Handle: https://imec-publications.be/handle/20.500.12860/45406
Date deposited: 2025-03-16

Abstract: Machine learning research is increasingly focused on improving performance metrics and providing explanations for model decisions. This focus, however, often overshadows the importance of maintaining prediction stability for similar instances, an essential factor for user trust in a system's reliability and consistency. This study extends the ranking stability measure by integrating SHAP (SHapley Additive exPlanations) values to enhance prediction interpretability in anomaly detection. Unlike existing stability measures that focus only on predictions or treat all features as equally important, our approach systematically evaluates the stability of local interpretability. By introducing a novel weighting mechanism that prioritizes variations in top-ranked features over lower-ranked ones, our method provides a refined assessment of interpretability stability, making it particularly valuable for high-stakes domains such as fraud detection and risk assessment. Designed to boost model reliability and trust, our approach addresses the critical need to understand decision-contributing factors in business applications and offers a comprehensive, robust solution for organizations seeking to use machine learning effectively. In extensive comparative evaluations on both synthetic and real-world datasets, our method delivers more stable and reliable feature-importance rankings than prior approaches. The methodology not only enhances model performance and interpretability but also bridges the gap between complex machine learning models and practical usability, fostering confidence and trust among non-specialists. Our code is publicly available for reproducibility and to encourage further research in this field.
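The abstract describes the weighting idea only at a high level; the authors' released code is the authoritative implementation. As a rough, hypothetical sketch of what a rank-weighted SHAP stability check could look like, the Python snippet below compares the SHAP-based feature ranking of an instance against the rankings of nearby perturbed copies, weighting agreement at top ranks more heavily. The 1/rank weighting, Gaussian perturbation scale, IsolationForest detector, and all helper names are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of a rank-weighted SHAP stability measure.
# Assumptions (not from the paper): 1/rank weights, Gaussian perturbations,
# and an IsolationForest anomaly detector.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
model = IsolationForest(random_state=0).fit(X)
explainer = shap.TreeExplainer(model)  # SHAP supports sklearn IsolationForest

def shap_ranking(x):
    """Rank features by |SHAP value|; position 0 holds the most important feature."""
    phi = explainer.shap_values(x.reshape(1, -1))[0]
    return np.argsort(-np.abs(phi))

def weighted_rank_agreement(x, eps=0.05, n_neighbors=20):
    """Average rank-weighted agreement between the SHAP ranking of x and the
    rankings of perturbed neighbors; 1.0 means perfectly stable rankings."""
    base = shap_ranking(x)
    weights = 1.0 / (np.arange(len(base)) + 1)  # assumed scheme: top ranks weigh most
    scores = []
    for _ in range(n_neighbors):
        neighbor = x + rng.normal(scale=eps, size=x.shape)
        agree = (base == shap_ranking(neighbor)).astype(float)  # 1 where a rank holds the same feature
        scores.append(np.sum(weights * agree) / np.sum(weights))
    return float(np.mean(scores))

print(f"stability ~ {weighted_rank_agreement(X[0]):.3f}")
```

Under this assumed scheme, a swap between the two top-ranked features costs far more than a swap deep in the ranking, which mirrors the abstract's stated goal of prioritizing variations in top-ranked features over lower-ranked ones.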