A server-driven adaptive sampling approach slashes power consumption in wireless iBCIs by 40 mW while *improving* decoding accuracy.
On-device LLM inference gets a massive speed and energy boost by adaptively streaming only the most expensive parts of the KV cache from the cloud.
Achieve near-perfect speech recognition at a remarkably low 200 bits per second by using reinforcement learning to directly optimize a neural codec for intelligibility.