Authors: Abbo, Giulio Antonio; Belpaeme, Tony
Date available: 2026-03-24
Date issued: 2025
ISBN: 979-8-3503-7894-8
ISSN: 2167-2121
URI: https://imec-publications.be/handle/20.500.12860/58926
Abstract: In the rapidly evolving landscape of human-robot interaction, the integration of vision capabilities into conversational agents is a crucial advancement. This paper presents a ready-to-use implementation of a dialogue manager that leverages recent progress in Large Language Models (e.g., GPT-4o mini) to augment traditional text-based prompts with real-time visual input. LLMs are used to interpret both textual prompts and visual stimuli, creating a more contextually aware conversational agent. The system's prompt engineering, which combines the dialogue history with summarisations of the images, ensures a balance between context preservation and computational efficiency. Six interactions with a Furhat robot powered by this system are reported, illustrating and discussing the results obtained. The system can be customised and is available as a stand-alone application, a Furhat robot implementation, and a ROS2 package.
Language: English
Title: I Was Blind but Now I See: Implementing Vision-Enabled Dialogue in Social Robots
Type: Proceedings paper
DOI: 10.1109/HRI61500.2025.10973830
Web of Science ID: WOS:001492540600131
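The abstract describes summarising camera images into text so that the dialogue history stays compact rather than carrying raw images through every turn. Below is a minimal sketch of that idea, assuming the OpenAI Python SDK; the prompt text, the summarise_frame helper, and the frame.jpg file are illustrative assumptions, not the authors' released implementation.

    # Sketch (not the paper's code): summarise a camera frame with GPT-4o mini
    # and keep only the text summary in the dialogue history, preserving visual
    # context at a fraction of the prompt size.
    import base64
    from openai import OpenAI  # assumes the OpenAI Python SDK is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarise_frame(jpeg_bytes: bytes) -> str:
        """Ask the model for a one-sentence summary of the current camera view."""
        b64 = base64.b64encode(jpeg_bytes).decode("ascii")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe what you see in one sentence."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    # The summary, not the image, is appended to the running dialogue, so later
    # turns can refer to what the robot saw without re-sending pixels.
    dialogue = [{"role": "system", "content": "You are a social robot with vision."}]
    with open("frame.jpg", "rb") as f:  # hypothetical captured camera frame
        dialogue.append({"role": "system",
                         "content": f"[Camera] {summarise_frame(f.read())}"})

This trade-off mirrors the balance the abstract names: the textual summary preserves conversational context while keeping the per-turn token cost roughly constant.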