You can detect prompt injection attacks in screenshot-based web agents with an 8x speedup and no extra memory overhead by looking for telltale visual "smoothness" and reversed text polarity.
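A minimal sketch of the idea, assuming the detector scores a candidate screenshot region on two cheap signals: how "smooth" (low gradient energy, i.e. synthetic-looking) the patch is, and whether its text polarity disagrees with the surrounding page. The function names, thresholds, and gradient-based smoothness measure below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def smoothness(gray: np.ndarray) -> float:
    """Mean absolute intensity gradient; flatter, more synthetic patches score lower."""
    gy, gx = np.gradient(gray.astype(np.float32))
    return float(np.mean(np.abs(gx) + np.abs(gy)))

def polarity(gray: np.ndarray) -> int:
    """+1 for dark text on a light background, -1 for the reversed (light-on-dark) case."""
    return 1 if np.median(gray) > 127 else -1

def looks_injected(region: np.ndarray, page: np.ndarray,
                   smooth_thresh: float = 2.0) -> bool:
    """Heuristic flag: an unusually smooth patch whose polarity is reversed vs. the page."""
    return smoothness(region) < smooth_thresh and polarity(region) != polarity(page)
```

Because both checks are simple per-pixel statistics on the screenshot the agent already has, they add essentially no memory and avoid any extra model forward passes, which is where the claimed speedup would come from.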
LLMs can be made more accurate *and* more trustworthy with a clever post-training method that selectively amplifies only the reasoning steps that progressively build confidence in the correct answer.
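A minimal sketch of one way such selective amplification could look, assuming the method weights each reasoning step by how much it raises the model's probability of the correct answer and then upweights those steps in a post-training loss. The confidence-delta weighting and function names below are assumptions for illustration, not the paper's actual objective.

```python
import torch

def step_weights(confidences: torch.Tensor) -> torch.Tensor:
    """confidences[t] = P(correct answer | steps 0..t), with confidences[0] the pre-reasoning baseline.
    Only steps that increase confidence receive (normalized) positive weight."""
    gains = torch.relu(confidences[1:] - confidences[:-1])   # per-step confidence improvement
    return gains / (gains.sum() + 1e-8)

def weighted_step_loss(step_nlls: torch.Tensor, confidences: torch.Tensor) -> torch.Tensor:
    """Post-training objective: each step's negative log-likelihood scaled by its confidence gain."""
    return (step_weights(confidences) * step_nlls).sum()
```

The effect is that gradient signal concentrates on the steps that actually move the model toward the right answer, while steps that add no confidence (or erode it) are ignored rather than reinforced.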