This paper investigates methods to create less detectable over-the-air adversarial attacks on the Wav2Vec speech recognition model. The research focuses on reducing the human audibility of adversarial perturbations while maintaining attack effectiveness. The study explores various techniques to minimize detectability, analyzing their impact on the success rate of malicious transcription alterations.
Stealthier over-the-air adversarial attacks on speech recognition are possible, but require careful balancing of audibility and effectiveness.
Automatic speech recognition systems based on neural networks are vulnerable to adversarial attacks that alter transcriptions in a malicious way. Recent works in this field have focused on making attacks work in over-the-air scenarios; however, such attacks are typically detectable by human hearing, limiting their potential applications. In the present work we explore different approaches to making over-the-air attacks less detectable, as well as the impact these approaches have on the attacks' effectiveness.
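The core tension the abstract describes, reducing the audibility of the perturbation while keeping the attack effective, is typically expressed as a joint objective: an attack loss that drives the malicious transcription plus a weighted penalty on how perceptible the perturbation is. The sketch below illustrates that trade-off on a toy, fully self-contained stand-in (a linear scorer in place of Wav2Vec, an L2 energy term in place of a psychoacoustic audibility model); every name and constant here is an illustrative assumption, not the paper's actual method.

```python
import numpy as np

# Toy stand-in for a differentiable recognizer: a linear scorer.
# (Hypothetical; the paper attacks Wav2Vec, a deep network.)
rng = np.random.default_rng(0)
n = 256
w = rng.standard_normal(n) / np.sqrt(n)   # "model" weights
x = rng.standard_normal(n)                # clean "audio" frame

c = 0.1        # audibility/effectiveness trade-off weight (assumed)
lr = 0.05      # gradient-descent step size
delta = np.zeros(n)                       # adversarial perturbation

for _ in range(500):
    # Combined objective: hinge loss pushing the score past a target
    # margin of 1, plus c * ||delta||^2 as a crude audibility proxy.
    score = w @ (x + delta)
    g_attack = -w if score < 1.0 else np.zeros(n)  # hinge gradient
    g_audib = 2.0 * c * delta                      # penalty gradient
    delta -= lr * (g_attack + g_audib)

print("adversarial score:", w @ (x + delta))
print("perturbation energy:", np.linalg.norm(delta))
```

Raising `c` shrinks the perturbation (quieter, harder to hear) at the cost of a weaker push toward the target transcription, which is exactly the balance the abstract says must be tuned; real attacks replace the L2 term with perceptually informed measures such as frequency masking thresholds.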