Traditional text embedding benchmarks fail to capture the nuances of long-horizon memory retrieval. This new benchmark shows that bigger models don't always win, and that performance on standard tasks doesn't guarantee success in complex, context-dependent memory scenarios.
LLMs' vulnerability to adversarial prefixes stems not just from a lack of safety training data but from a deeper problem of "semantic representation decay," one that a causal approach can fix.