Ramachandra, Sandeep; Degraeve, Vic; Vandewiele, Gilles; Steenwinckel, Bram; Van Hoecke, Sofie; Ongenae, Femke
2025-06-11; 2025-05-30; 2025-06-11; 2025
ISSN: 0218-1940
WOS:001494414700001
https://imec-publications.be/handle/20.500.12860/45735

The inception of the Relational Graph Convolutional Network (R-GCN) marked a milestone in the Semantic Web domain as a widely cited method that generalizes end-to-end hierarchical representation learning to Knowledge Graphs (KGs). R-GCNs generate representations for nodes of interest by repeatedly aggregating parametrized, relation-specific transformations of their neighbors. However, in this work, it is posited that the R-GCN's main contribution lies in this "message passing" paradigm rather than in the learned weights. To support this claim, the "Random Relational Graph Convolutional Network" (RR-GCN) is introduced, which leaves all parameters untrained and thus constructs node embeddings by aggregating randomly transformed random representations of neighbors. Additionally, ways to reintroduce learnable parameters into RR-GCN without completely losing the advantages of random transformations are explored. It is empirically shown that RR-GCNs can compete with fully trained R-GCNs on node classification.

RR-GCN: Exploring Untrained Random Embeddings for Relational Graphs
Journal article
DOI: 10.1142/S0218194025500184
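The abstract describes message passing with untrained, randomly initialized relation-specific transformations. A minimal sketch of that idea, using only NumPy on a toy set of triples; the toy graph, the mean aggregation, and all variable names are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy knowledge graph: (subject, relation, object) triples.
triples = [(0, 0, 1), (1, 0, 2), (2, 1, 0), (3, 1, 1)]
num_nodes, num_relations, dim = 4, 2, 8

# RR-GCN idea: node features and relation-specific weights are random and never trained.
H = rng.standard_normal((num_nodes, dim))           # random initial node embeddings
W = rng.standard_normal((num_relations, dim, dim))  # random relation-specific transforms
W_self = rng.standard_normal((dim, dim))            # random self-loop transform

def rrgcn_layer(H):
    """One untrained message-passing step: mean-aggregate randomly
    transformed neighbor embeddings per incoming edge, plus a self-loop."""
    out = H @ W_self
    msgs = np.zeros_like(H)
    counts = np.zeros(num_nodes)
    for s, r, o in triples:
        msgs[o] += H[s] @ W[r]   # message from subject to object via relation r
        counts[o] += 1
    out += msgs / np.maximum(counts, 1)[:, None]
    return np.tanh(out)          # nonlinearity between layers

# Stack two layers; the resulting embeddings could feed a simple
# downstream classifier for node classification.
Z = rrgcn_layer(rrgcn_layer(H))
print(Z.shape)  # (4, 8)
```

Because nothing here is trained, the embeddings depend only on the graph structure and the random seed; the paper's claim is that such embeddings already capture much of what a fully trained R-GCN learns.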