LLMs' harmful outputs stem from a surprisingly compact, unified set of weights, suggesting a fundamental and addressable structure underlying even emergent misalignment.
LLM performance isn't just about size but about how efficiently a model compresses information during training, offering a new lens for understanding and predicting model capabilities.