Search papers, labs, and topics across Lattice.
Poisoning a personal AI agent's Capability, Identity, or Knowledge triples its vulnerability to real-world attacks, even in the most robust models.
Forget prompting monolithic models: ImageEdit-R1 uses reinforcement learning to orchestrate a team of specialized agents, outperforming even closed-source diffusion models on complex image-editing tasks.
Skip expensive supervised fine-tuning: this RL-only method teaches LLMs to use tools by demonstrating tool use in-context, then gradually removing the demonstrations until the model can use tools zero-shot.
RLHF struggles with long contexts because the reward signal for *finding* the right information vanishes; it can be revived by directly rewarding the model for selecting relevant context.