This paper introduces Stable Spike, a novel dual consistency optimization method for Spiking Neural Networks (SNNs) that uses bitwise AND operations to extract a stable spike skeleton from multi-timestep spike maps. By forcing unstable spike maps to converge to this skeleton and injecting amplitude-aware noise, the method enhances cross-timestep consistency and generalization. Experiments demonstrate significant improvements in neuromorphic object recognition, particularly under ultra-low latency conditions, with accuracy gains of up to 8.33%.
SNNs get a serious boost in accuracy and efficiency thanks to "Stable Spike," which uses bitwise AND operations to distill consistent, meaningful signals from noisy spike trains.
Although the temporal spike dynamics of spiking neural networks (SNNs) enable low-power capture of temporal patterns, they also incur inherent inconsistencies that severely compromise representation. In this paper, we perform dual consistency optimization via Stable Spike to mitigate this problem, thereby improving the recognition performance of SNNs. With the hardware-friendly "AND" bit operation, we efficiently decouple the stable spike skeleton from the multi-timestep spike maps, capturing critical semantics while reducing inconsistencies caused by variable noise spikes. Enforcing convergence of the unstable spike maps to the stable spike skeleton significantly improves the inherent consistency across timesteps. Furthermore, we inject amplitude-aware spike noise into the stable spike skeleton to diversify the representations while preserving consistent semantics. The SNN is encouraged to produce perturbation-consistent predictions, thereby contributing to generalization. Extensive experiments across multiple architectures and datasets validate the effectiveness and versatility of our method. In particular, our method significantly advances neuromorphic object recognition under ultra-low latency, improving accuracy by up to 8.33\%. This helps unlock the full low-power and high-speed potential of SNNs.