Turns out, teaching LLMs to *think* like reverse engineers beats simply throwing more parameters at the problem of binary deobfuscation.
LLMs struggle to build complete software from scratch, with even the best models failing more than half the time on a new CLI tool generation benchmark.
Run code LLMs 10x faster and with 6x less memory on your laptop without sacrificing accuracy, thanks to a new quantization and compilation approach.