You can now get Transformer-level ASR accuracy with 6x smaller models and dramatically lower latency by using sliding-window self-attention, opening up new possibilities for interactive speech interfaces on edge devices.
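The core idea can be illustrated with a minimal sketch (not from the post itself): each query position attends only to keys within a fixed window on either side, instead of the full sequence. This dense numpy version applies the window as a mask for clarity; production implementations compute only the banded scores, which is where the O(T·w) compute and latency savings over full O(T²) attention come from.

```python
import numpy as np

def sliding_window_attention(q, k, v, window):
    """Self-attention where each position attends only to keys
    within `window` steps on either side (local sliding-window mask)."""
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)          # (T, T) raw attention logits
    idx = np.arange(T)
    # Block attention to positions farther than `window` away.
    mask = np.abs(idx[:, None] - idx[None, :]) > window
    scores[mask] = -np.inf                 # exp(-inf) -> 0 after softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
T, d = 8, 4
q = rng.standard_normal((T, d))
out = sliding_window_attention(q, q, q, window=2)
print(out.shape)  # → (8, 4)
```

With a window wide enough to cover the whole sequence, this reduces to ordinary full self-attention, so the window size is a direct knob trading context for compute.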