$$\mathcal{C}(V(t)_{l})=\frac{1}{TN}\sum_{i=1}^{n}\max\left(0,\ \delta-\left|V(t)_{i}-V_{\text{th}}\right|\right), \tag{12}$$

where $\mathcal{C}(V(t)_{l})$ computes the average penalty incurred when the membrane potentials $V(t)_{i}$ of neurons in layer $l$ approach the firing threshold $V_{\text{th}}$. $N$ and $T$ represent the number of time steps and the total number of layers in the SNN, respectively. The hyperparameter $\delta$ establishes a margin around $V_{\text{th}}$ within which proximate potentials incur proportional penalties. Subsequently, we integrate this constraint with the target loss function, defining the overall loss within the framework of Lagrangian constraints (Kim & Jeong, 2021; Yoo & Jeong, 2023), which can be expressed as:

$$\mathcal{L}(\mathbf{x},\lambda)=\mathcal{L}oss(\mathbf{x})+\lambda\sum_{l}\mathcal{C}(V(t)_{l}). \tag{13}$$

Here, $\mathcal{L}oss(\mathbf{x})$ is the original loss function, $\mathcal{C}(V(t)_{l})$ is the penalty term on the membrane potentials of layer $l$, and $\lambda$ is a dynamically adjusted parameter that controls the weight of the constraint. We find that a fixed $\lambda$ hinders both network convergence and constraint satisfaction. Specifically, a larger $\lambda$ leads to significant performance degradation and poor convergence during the initial training phase, while a smaller $\lambda$ fails to enforce the constraint effectively. Therefore, to achieve an optimal balance between gradient sparsity and task performance, we propose a dynamic schedule for $\lambda$.
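As a concrete illustration, the layer-wise penalty of Eq. (12) and the constrained objective of Eq. (13) can be sketched as follows. This is a minimal NumPy sketch: the function names, per-layer array shapes, and the linear warm-up schedule for $\lambda$ are illustrative assumptions, not the authors' implementation (the paper's dynamic $\lambda$ is defined separately).

```python
import numpy as np

def threshold_penalty(V_l, v_th, delta, T, N):
    """Eq. (12): hinge penalty on potentials within delta of the firing
    threshold v_th, normalized by 1/(T*N)."""
    return np.maximum(0.0, delta - np.abs(V_l - v_th)).sum() / (T * N)

def constrained_loss(task_loss, layer_potentials, lam, v_th=1.0, delta=0.1, T=4):
    """Eq. (13): original task loss plus lambda-weighted sum of layer penalties."""
    N = len(layer_potentials)  # total number of layers
    penalty = sum(threshold_penalty(V_l, v_th, delta, T, N)
                  for V_l in layer_potentials)
    return task_loss + lam * penalty

def dynamic_lambda(epoch, total_epochs, lam_max=1.0):
    """Hypothetical linear warm-up: lambda starts small so early training
    converges, then grows so the constraint is enforced later on."""
    return lam_max * epoch / total_epochs

# Usage: membrane potentials of two layers at one time step
layers = [np.array([0.95, 0.4]), np.array([1.02, 0.1])]
loss = constrained_loss(task_loss=0.7, layer_potentials=layers,
                        lam=dynamic_lambda(epoch=5, total_epochs=10))
```

Only potentials inside the $\delta$-margin (here 0.95 and 1.02, with $V_{\text{th}}=1.0$, $\delta=0.1$) contribute to the penalty; potentials far from the threshold are left unconstrained.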
Spiking neural networks can be secured against adversarial attacks by moving neuron membrane potentials away from their firing thresholds and by introducing noise.