RLVR models exhibit "Early Correctness Coherence" under noisy supervision, suggesting a surprising opportunity for self-correction via dynamic label refinement.
Data laundering can't hide forever: this new technique lets rights owners detect misuse of their content in LLMs, even when the data has been heavily transformed.
LVLMs can now better judge their own vision-based answers, thanks to a new method that focuses on how much they actually "see" in the image.
Forget tweaking prompts: understanding how retrieved context warps an LLM's hidden states is the key to unlocking better RAG performance.