Search papers, labs, and topics across Lattice.
End-to-end prompt optimization is often a waste of time and money, succeeding only at coaxing models into specific output formats they are already capable of producing.
Expert-written rules for coding agents are often useless or even harmful, with random constraints working just as well and negative constraints outperforming positive directives.
Online reinforcement fine-tuning can measurably improve the safety and efficiency of diffusion-based multi-agent driving planners, even when starting from strong pre-trained models.
Decentralized agent societies can self-govern and align with human intent without top-down rules, thanks to a blockchain-based "Separation of Powers" architecture that enforces accountability.