You can reliably decode frustration from facial muscle activity, even when people aren't speaking aloud.
Wearable sensors and speech AI can now unobtrusively reveal the hidden communication dynamics driving hospital caregiver workload and stress.
Speech tokenizers, despite being crucial for multimodal LLMs, primarily capture phonetic information, creating a semantic mismatch with text-derived semantics that hinders performance.
Control the accent of your TTS system in multiple languages without ever training on accented data: combining phonological rules with multilingual TTS enables accent-specific speech synthesis.
A new multimodal dataset links brain activity, muscle activation, and articulation during speech, opening the door to understanding the causal chain of speech production.