Max Planck Institute for Software Systems
You can cut RoPE's memory cost 10x without sacrificing convergence, just by applying the rotary embedding to a sliver (10%) of the hidden dimensions.
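A minimal sketch of what "partial RoPE" could look like, assuming the claim means rotating only the first ~10% of each vector's dimensions and passing the rest through untouched (the function name, fraction handling, and base value are illustrative, not the paper's implementation):

```python
import math

def partial_rope(x, pos, rope_frac=0.1, base=10000.0):
    """Rotate only the first rope_frac of dimensions of x at position pos.

    x: a flat list of floats (one token's hidden vector).
    Dimensions beyond the rotated sliver are returned unchanged,
    so no position-dependent state is needed for them.
    """
    d = len(x)
    d_rope = int(d * rope_frac)
    d_rope -= d_rope % 2  # rotary pairs need an even dimension count
    out = list(x)
    for i in range(0, d_rope, 2):
        # Standard RoPE frequency schedule, but over the sliver only.
        theta = pos * (base ** (-i / max(d_rope, 1)))
        c, s = math.cos(theta), math.sin(theta)
        out[i] = x[i] * c - x[i + 1] * s
        out[i + 1] = x[i] * s + x[i + 1] * c
    return out
```

With `rope_frac=0.1` on a 20-dim vector, only dims 0–1 are rotated; the other 18 dims carry no positional signal, which is where the memory saving would come from (no cached sin/cos tables for them).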
LLMs exhibit surprisingly strong and predictable biases towards specific information sources, even overriding content relevance and explicit instructions.