2025 IEEE 36TH INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC SYSTEMS, ARCHITECTURES AND PROCESSORS, ASAP
Abstract
In recent years, Spiking Neural Networks (SNNs) have been increasingly deployed on Field-Programmable Gate Arrays (FPGAs) to enable low-energy AI inference. SNNs aim for more biomimetic processing than Artificial Neural Networks (ANNs), supporting event-driven computation and using arithmetic cheaper than multiply-accumulate operations. However, deploying SNNs large enough to achieve acceptable accuracy requires extensive use of configurable logic blocks (CLBs), incurring additional programmable routing and critical-path delay. This study addresses these issues by exploring soft-logic architectures that reduce resource utilization for SNNs. We propose two architectures that map logic primitives more efficiently for SNNs, reducing CLB usage by 13.49% and 20.13% compared to the Intel Stratix 10 baseline. Implemented in an advanced technology node, these architectures achieve average reductions of 8.30% and 7.30% in CLB area and reduce critical-path delay by 2.72% and 3.42%, respectively, enabling larger SNNs with faster inference within the same programmable fabric.