This paper introduces a novel method for detecting and eliminating backdoor triggers in neural networks by analyzing active paths within the network. The approach identifies specific input patterns that activate malicious behavior implanted in the model. Experiments demonstrate the effectiveness of the method in the context of intrusion detection systems, where backdoors can severely compromise security.
Uncover hidden backdoors in your neural networks by tracing the active paths that malicious triggers exploit.
A machine learning backdoor causes a model to behave as expected on normal inputs but, when the input contains a specific $\textit{trigger}$, to behave as the attacker desires. Detecting such triggers has proven extremely difficult. In this paper, we present a novel, explainable approach to detecting and eliminating backdoor triggers based on the active paths found in neural networks. We provide promising experimental evidence for our approach by injecting backdoors into a machine learning model used for intrusion detection.
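The abstract does not define "active paths" precisely, but one common reading is the set of neurons that fire (have positive ReLU activation) in each layer for a given input. The sketch below illustrates that idea on a toy feedforward network with hypothetical random weights; a trigger that flips the model's behavior typically activates a different set of neurons, which is what path comparison would exploit. This is an illustrative assumption, not the paper's exact method.

```python
import numpy as np

def active_path(weights, biases, x):
    """Return, per layer, the indices of neurons that fire
    (positive activation under ReLU) for input x.
    Simplified notion of an 'active path' -- the paper's exact
    definition may differ."""
    path = []
    a = x
    for W, b in zip(weights, biases):
        a = np.maximum(W @ a + b, 0.0)  # ReLU activation
        path.append(np.flatnonzero(a > 0))
    return path

# Toy 2-layer network (hypothetical weights, illustration only)
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [np.zeros(4), np.zeros(2)]

clean = np.array([0.1, 0.2, 0.3])
triggered = clean.copy()
triggered[0] += 5.0  # stand-in for a backdoor trigger pattern

p_clean = active_path(weights, biases, clean)
p_trig = active_path(weights, biases, triggered)
print("clean path:    ", [p.tolist() for p in p_clean])
print("triggered path:", [p.tolist() for p in p_trig])
```

Comparing the two printed paths shows which neurons the perturbed input recruits; a detection scheme in this spirit would flag inputs whose paths diverge sharply from those of normal traffic.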