PRISM-Δ steers LLMs by decomposing cross-covariance differences to find discriminative directions, achieving performance gains of up to 10.6% in prompt highlighting while halving fluency costs.
Multimodal embeddings get a serious upgrade with CoCoA, a new pre-training method that forces models to compress all input information into a single token for reconstruction, yielding substantial quality gains.