University of Science and Technology of China
LLMs get a reasoning boost by treating extracted information not as a one-off output, but as a dynamic cache that persists and filters information across multiple steps.
Asymmetric encoders, trained with a novel two-stage approach, can outperform symmetric LLM-based models on Chinese medical text retrieval while maintaining low latency.