State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China
Even the strongest LLM judges can be fooled by seemingly high-quality reasoning chains, exposing a critical vulnerability in using LLMs to evaluate other LLMs.
Neural retrievers' preference for LLM-generated text is not an inherent flaw but a bias learned from artifacts in their training data, suggesting they can be debiased without architectural changes.
Multimodal embeddings get a serious upgrade with CoCoA, a new pre-training method that forces models to compress all input information into a single token used for reconstruction, yielding substantial quality gains.