Rutgers University
Stop benchmarking agent components in isolation; AgentSelect lets you train models to recommend *entire* agent configurations based on narrative queries.
LLMs respond to increasingly difficult out-of-distribution inputs by activating sparser representations in their last hidden states, revealing a quantifiable relationship between task difficulty and neural activity.
LLMs can mimic your style, but your friends can still tell it's not really you, especially when it comes to your opinions.
VLMs get a 24% performance boost and run 56% faster on robot manipulation tasks by explicitly modeling action advantages and exploring multiple future paths, instead of relying on noisy foresight predictions.
Ditch the rigid safety codes: case-augmented reasoning unlocks safer, more helpful LLMs that are also more robust to attacks.