University of Science and Technology of China
LLMs get a reasoning boost by treating extracted information not as a one-off output, but as a dynamic cache that persists and is filtered across multiple reasoning steps.
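A minimal sketch of that idea, assuming a simple key-value store: extracted facts are written once, re-scored against the current step's query, and low-relevance entries are filtered out instead of being re-extracted. The names here (ReasoningCache, the lexical-overlap relevance) are illustrative assumptions, not the paper's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningCache:
    """Illustrative cache of extracted facts reused across reasoning steps."""
    min_relevance: float = 0.2
    facts: dict = field(default_factory=dict)  # fact text -> extraction score

    def add(self, fact: str, score: float) -> None:
        # Persist an extracted fact; keep the highest score seen so far.
        self.facts[fact] = max(score, self.facts.get(fact, 0.0))

    def filter_for(self, query: str) -> list[str]:
        # Re-rank cached facts against the current step's query and
        # drop anything below the relevance threshold.
        scored = [(f, s * self._overlap(f, query)) for f, s in self.facts.items()]
        return [f for f, s in sorted(scored, key=lambda x: -x[1]) if s >= self.min_relevance]

    @staticmethod
    def _overlap(fact: str, query: str) -> float:
        # Toy lexical-overlap relevance; a real system would use embeddings.
        f, q = set(fact.lower().split()), set(query.lower().split())
        return len(f & q) / max(len(q), 1)
```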
Achieve personalized generation with cloud-scale reasoning while preserving user privacy, thanks to a novel asymmetric collaboration framework that's also 2x faster.
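One way to read "asymmetric collaboration" is the sketch below: a small on-device step strips private details, the cloud model does the heavy reasoning over the sanitized request, and the device re-injects personal context into the final answer. The function names and placeholder-based redaction are assumptions for illustration, not the paper's protocol.

```python
import re

def redact_on_device(prompt: str, private_terms: list[str]) -> tuple[str, dict]:
    """Replace private details with placeholders before anything leaves the device."""
    mapping, sanitized = {}, prompt
    for i, term in enumerate(private_terms):
        placeholder = f"<PRIV_{i}>"
        mapping[placeholder] = term
        sanitized = re.sub(re.escape(term), placeholder, sanitized)
    return sanitized, mapping

def cloud_reason(sanitized_prompt: str) -> str:
    """Stand-in for a large cloud model reasoning over the sanitized request."""
    return f"[cloud draft for: {sanitized_prompt}]"

def personalize_on_device(draft: str, mapping: dict) -> str:
    """Re-insert the private details locally, so the cloud never sees them."""
    for placeholder, term in mapping.items():
        draft = draft.replace(placeholder, term)
    return draft

# Example round trip: only the placeholder-bearing prompt goes to the cloud.
sanitized, mapping = redact_on_device(
    "Plan a birthday dinner for Alice near 42 Elm Street", ["Alice", "42 Elm Street"]
)
print(personalize_on_device(cloud_reason(sanitized), mapping))
```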
CoT reasoning can hurt recommender performance by drowning out important ID signals – unless you compress reasoning chains and use bias-subtracted contrastive decoding to realign the inference subspace.
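The decoding trick can be illustrated as follows, assuming per-item logits from the recommender: logits obtained without any reasoning chain serve as a bias estimate and are subtracted from the reasoning-conditioned logits before ranking, so the ID signal is not drowned out. The alpha weight and function name are illustrative assumptions.

```python
import numpy as np

def bias_subtracted_scores(logits_with_cot: np.ndarray,
                           logits_no_cot: np.ndarray,
                           alpha: float = 1.0) -> np.ndarray:
    """Contrastive decoding: subtract a bias estimate (no-reasoning logits)
    from the reasoning-conditioned logits before ranking candidate items."""
    return logits_with_cot - alpha * logits_no_cot

# Toy example over 5 candidate item IDs.
with_cot = np.array([2.1, 1.9, 0.3, 0.2, 0.1])  # reasoning inflates a popular item
no_cot   = np.array([1.8, 0.2, 0.1, 0.1, 0.1])  # bias estimate without the chain
ranked = np.argsort(-bias_subtracted_scores(with_cot, no_cot))
print(ranked)  # item 1 now outranks the bias-inflated item 0
```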
Platform-centric digital services may be prioritizing engagement over user well-being, but LLMs and on-device intelligence now make truly user-centric agents a feasible alternative.