University of Science and Technology of China
LLMs get a reasoning boost by treating information extraction not as a one-off task but as a dynamic cache that persists and filters information across multiple reasoning steps.
Achieve personalized generation with cloud-scale reasoning while preserving user privacy, thanks to a novel asymmetric collaboration framework that is also twice as fast.
Forget complex model architectures for cross-domain recommendation: Taesar shows that cleverly transforming your data can unlock better performance with standard models.