Publication:

Event-based video reconstruction via attention-based recurrent network

dc.contributor.author: Ma, Wenwen
dc.contributor.author: Ma, Shanxing
dc.contributor.author: Meiresone, Pieter
dc.contributor.author: Allebosch, Gianni
dc.contributor.author: Philips, Wilfried
dc.contributor.author: Aelterman, Jan
dc.contributor.imecauthor: Ma, Wenwen
dc.contributor.imecauthor: Ma, Shanxing
dc.contributor.imecauthor: Meiresone, Pieter
dc.contributor.imecauthor: Allebosch, Gianni
dc.contributor.imecauthor: Philips, Wilfried
dc.contributor.imecauthor: Aelterman, Jan
dc.contributor.orcidimec: Ma, Shanxing::0000-0001-5650-0168
dc.contributor.orcidimec: Meiresone, Pieter::0000-0002-1227-9032
dc.contributor.orcidimec: Allebosch, Gianni::0000-0003-2502-3746
dc.contributor.orcidimec: Philips, Wilfried::0000-0003-4456-4353
dc.contributor.orcidimec: Aelterman, Jan::0000-0002-5543-2631
dc.date.accessioned: 2025-03-11T18:30:58Z
dc.date.available: 2025-03-11T18:30:58Z
dc.date.issued: 2025
dc.description.abstract: Event cameras are novel sensors that capture brightness changes in the form of asynchronous events rather than intensity frames, offering unique advantages such as high dynamic range, high temporal resolution, and no motion blur. However, the sparse, asynchronous nature of event data poses significant challenges for visual perception, limiting compatibility with conventional computer vision algorithms that rely on dense, continuous frames. Event-based video reconstruction has emerged as a promising solution, though existing methods still face challenges in capturing fine-grained details and enhancing contrast. This paper presents a novel approach to video reconstruction from asynchronous event streams, leveraging the unique properties of event data to produce high-quality video. Our method integrates channel and pixel attention mechanisms to focus on essential features and incorporates deformable convolutions and adaptive mix-up operations to provide flexible receptive fields and dynamic fusion across down-sampling and up-sampling layers. Experimental results on multiple real-world event datasets demonstrate that our approach outperforms comparable methods trained on the same dataset, achieving superior video quality from pure event data. We also demonstrate the capability of our method for high dynamic range reconstruction and color video reconstruction using an event camera equipped with a Bayer filter.
dc.description.wosFundingText: This work was supported by the China Scholarship Council (grant 202206280043) and received partial funding from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme.
dc.identifier.doi: 10.1016/j.neucom.2025.129776
dc.identifier.issn: 0925-2312
dc.identifier.uri: https://imec-publications.be/handle/20.500.12860/45382
dc.publisher: ELSEVIER
dc.source.beginpage: 129776
dc.source.journal: NEUROCOMPUTING
dc.source.numberofpages: 12
dc.source.volume: 632
dc.subject.keywords: CAMERA DATASET
dc.subject.keywords: VISION
dc.title: Event-based video reconstruction via attention-based recurrent network
dc.type: Journal article
dspace.entity.type: Publication
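
The abstract names channel and pixel attention, deformable convolutions, and adaptive mix-up fusion between down-sampling and up-sampling layers, but this record does not describe the architecture itself. Below is a minimal PyTorch sketch of plausible versions of those generic building blocks, assuming a squeeze-and-excitation-style channel gate, a 1x1-convolution pixel gate, torchvision's DeformConv2d with input-predicted offsets, and a learnable scalar blend for the mix-up. All module names and hyperparameters here are hypothetical illustrations, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate over feature channels (assumed form)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Reweight each channel by a gate computed from a global descriptor.
        return x * self.fc(self.pool(x))


class PixelAttention(nn.Module):
    """Per-pixel (spatial) gate produced by a 1x1 convolution (assumed form)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        # Emphasize spatial locations carrying useful event information.
        return x * self.gate(x)


class DeformableBlock(nn.Module):
    """3x3 deformable convolution with offsets predicted from the input,
    giving the flexible receptive field the abstract refers to."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Two offset values (dx, dy) per kernel tap.
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, x):
        return self.deform(x, self.offset(x))


class AdaptiveMixUp(nn.Module):
    """Learnable convex blend of an encoder skip feature and the matching
    decoder feature -- one plausible reading of 'adaptive mix-up'."""
    def __init__(self):
        super().__init__()
        self.logit = nn.Parameter(torch.zeros(1))

    def forward(self, skip, up):
        w = torch.sigmoid(self.logit)
        return w * skip + (1.0 - w) * up


# Toy check: the blocks preserve feature shape, so they could be dropped into
# a recurrent U-Net-style encoder/decoder between down- and up-sampling stages.
x = torch.randn(1, 32, 64, 64)
stage = nn.Sequential(ChannelAttention(32), PixelAttention(32), DeformableBlock(32))
print(stage(x).shape)  # torch.Size([1, 32, 64, 64])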