Publication:

Online Prediction of User Enjoyment in Human-Robot Dialogue with LLMs

 
cris.virtual.department#PLACEHOLDER_PARENT_METADATA_VALUE#
cris.virtual.department#PLACEHOLDER_PARENT_METADATA_VALUE#
cris.virtual.orcid: 0000-0002-1790-9531
cris.virtual.orcid: 0000-0001-5207-7745
dc.contributor.author: Janssens, Ruben
dc.contributor.author: Pereira, Andre
dc.contributor.author: Skantze, Gabriel
dc.contributor.author: Irfan, Bahar
dc.contributor.author: Belpaeme, Tony
dc.date.accessioned: 2026-03-24T13:42:07Z
dc.date.available: 2026-03-24T13:42:07Z
dc.date.createdwos: 2025-10-29
dc.date.issued: 2025
dc.description.abstract: Large Language Models (LLMs) allow social robots to engage in unconstrained open-domain dialogue, but they often make mistakes in real-world interactions, requiring adaptation of LLMs to specific conversational contexts. However, LLM adaptation techniques require a feedback signal, ideally for multiple alternative utterances. At the same time, human-robot dialogue data is scarce, and research often relies on external annotators. A tool for automatic prediction of user enjoyment in human-robot dialogue is therefore needed. We investigate the possibility of predicting user enjoyment turn-by-turn using an LLM, giving it a proposed robot utterance within the dialogue context, but without access to the user's response. We compare this performance to the system's enjoyment ratings when user responses are available and to assessments by expert human annotators, in addition to self-reported user perceptions. We evaluate the proposed LLM predictor on a human-robot interaction (HRI) dataset containing conversation transcripts of 25 older adults' 7-minute dialogues with a companion robot. Our results show that an LLM is capable of predicting user enjoyment with no loss of performance despite the lack of user response, even achieving performance similar to that of expert human annotators. Furthermore, the system surpasses expert annotators in its correlation with users' self-reported perceptions of the conversation. This work presents a tool to remove the reliance on external annotators for enjoyment evaluation and paves the way toward real-time adaptation in human-robot dialogue.
dc.description.wosFundingText: This research received funding from the Flemish Government (AI Research Program), the Research Foundation Flanders (FWO, grant V449824N), and KTH Digital Futures (Sweden).
dc.identifier.doi: 10.1109/HRI61500.2025.10973944
dc.identifier.isbn: 979-8-3503-7894-8
dc.identifier.issn: 2167-2121
dc.identifier.uri: https://imec-publications.be/handle/20.500.12860/58931
dc.language.iso: eng
dc.provenance.editstepuser: greet.vanhoof@imec.be
dc.publisher: IEEE
dc.source.beginpage: 1363
dc.source.conference: 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
dc.source.conferencedate: 2025-03-04
dc.source.conferencelocation: Melbourne
dc.source.endpage: 1367
dc.source.journal: 2025 20th ACM/IEEE International Conference on Human-Robot Interaction, HRI
dc.source.numberofpages: 5
dc.title: Online Prediction of User Enjoyment in Human-Robot Dialogue with LLMs
dc.type: Proceedings paper
dspace.entity.type: Publication