Turns out, you can bootstrap better formal specification synthesis by training on the iterative refinement trajectories of a traceable specification generator, leading to substantial gains in both specification accuracy and general reasoning.
Securing AI agents demands a new security paradigm, as their integration of LLMs with traditional systems introduces vulnerabilities beyond those of standard software.
Autonomous AI agents that can independently sustain and extend their operation are closer than we think, but raise thorny security and governance questions we need to address now.
Human-written solutions can actually *hurt* model performance on math problems, exposing a critical gap between strategy usage and executability — a gap that Selective Strategy Retrieval (SSR) effectively bridges.
Stop struggling with ad-hoc codebases: dLLM offers a unified, open-source framework to reproduce, fine-tune, and build diffusion language models, even from BERT-style encoders.
Now you can audit black-box LLM APIs for cheating (model substitution, overbilling) with less than 1% overhead, using verifiable computation.
LLMs can now autonomously design and build better-performing agents with OpenSage, an agent development kit that lets them self-generate an agent's topology, toolset, and memory structures.