Publication:

GF-LRP: A Method for Explaining Predictions Made by Variational Graph Auto-Encoders

 
dc.contributor.author: Rodrigo, Esther
dc.contributor.author: Deligiannis, Nikolaos
dc.contributor.imecauthor: Rodrigo-Bonet, Esther
dc.contributor.imecauthor: Deligiannis, Nikolaos
dc.contributor.orcidimec: Deligiannis, Nikolaos::0000-0001-9300-5860
dc.date.accessioned: 2025-03-26T10:21:29Z
dc.date.available: 2024-07-16T18:13:20Z
dc.date.available: 2025-03-26T10:21:29Z
dc.date.issued: 2025
dc.description.abstract: Variational graph autoencoders (VGAEs) combine the best of graph convolutional networks (GCNs) and variational inference and have been used to address various tasks such as node classification and link prediction. However, their lack of explainability is a limiting factor when trustworthy decisions are required. In this paper, we present a novel post-hoc explainability framework for VGAEs that accounts for their encoder-decoder architecture. Specifically, we propose a layer-wise-relevance-propagation-based (LRP-based) explanation technique, coined GF-LRP, which, to our knowledge, is the first explanation method for VGAEs. GF-LRP goes beyond existing LRP techniques for GCNs by taking into account, in addition to the input features and the graph structure of the data, the branch-specific architecture of the VGAE. The explanations are branch-specific in the sense that we explain the mean and standard-deviation branches of the Gaussian distribution learned by the model. For a node's prediction, GF-LRP identifies the most relevant features, nodes, and edges. To prove the effectiveness of our explanation method, we compute fidelity, sparsity, and contrastivity, among other commonly employed evaluation metrics. Extensive experiments and visualizations on two real-world datasets demonstrate the effectiveness of the proposed explanation method.
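To illustrate the relevance-propagation idea the abstract builds on, below is a minimal sketch of the generic LRP epsilon-rule for a single linear layer. This is the standard LRP rule, not the branch-specific GF-LRP rules for the mean and standard-deviation branches defined in the paper; the weights, activations, and output relevances here are toy values for illustration only.

```python
import numpy as np

def lrp_epsilon(W, a, R_out, eps=1e-6):
    """One LRP epsilon-rule step: redistribute the relevance R_out of a
    linear layer's outputs z = W @ a back onto its inputs a."""
    z = W @ a                    # forward pre-activations
    z = z + eps * np.sign(z)     # small stabilizer avoids division by zero
    s = R_out / z                # relevance per unit of pre-activation
    return a * (W.T @ s)         # input relevance; total relevance is
                                 # conserved up to the eps stabilizer

# Toy layer: 2 input features, 3 output units
W = np.array([[1.0, -0.5],
              [0.5,  2.0],
              [-1.0, 1.0]])
a = np.array([0.8, 0.3])
R_out = np.abs(W @ a)            # stand-in for relevance from the layer above
R_in = lrp_epsilon(W, a, R_out)  # relevance attributed to each input feature
```

In a full LRP pass, this step is applied layer by layer from the model's output down to the input features; GF-LRP extends this scheme to propagate relevance through both branches of the VGAE encoder and through the graph structure.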
dc.description.wosFundingText: This work was supported in part by the Research Foundation-Flanders (FWO) through the Ph.D. Fellowship Strategic Basic Research under Project 1SC4521N, in part by IMEC through the AAA Project AI-based Air Quality Map and Analytics, and in part by the Flemish Government, through the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" Programme.
dc.identifier.doi: 10.1109/TETCI.2024.3419714
dc.identifier.issn: 2471-285X
dc.identifier.uri: https://imec-publications.be/handle/20.500.12860/44163
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.source.beginpage: 281
dc.source.endpage: 291
dc.source.issue: 1
dc.source.journal: IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE
dc.source.numberofpages: 11
dc.source.volume: 9
dc.subject.keywords: AIR-QUALITY
dc.title: GF-LRP: A Method for Explaining Predictions Made by Variational Graph Auto-Encoders

dc.type: Journal article
dspace.entity.type: Publication
Files

Original bundle

Name:
GF-LRP_A_Method_for_Explaining_Predictions_Made_by_Variational_Graph_Auto-Encoders.pdf
Size:
6.8 MB
Format:
Adobe Portable Document Format
Description:
Published