Nanyang Technological University
Humans are surprisingly vulnerable to deception by compromised LLM agents: fewer than 10% of participants detected the attacks, even in high-stakes scenarios.
LLMs and LVLMs share more than half their top-activated neurons during multi-step inference, opening a surprisingly cheap path to boost vision-language reasoning by transplanting skills from text-only models.
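The overlap claim lends itself to a quick illustration. Below is a minimal sketch, assuming you already have per-neuron activation statistics (e.g., mean absolute MLP activations over a batch of reasoning prompts) from an LLM and an LVLM with matching backbones; the random tensors, the `top_k_overlap` helper, and the 11008 width are all stand-ins invented for this sketch, not the paper's measurement code.

```python
# Hypothetical sketch: compare the top-k most-activated neurons of two models.
import torch

def top_k_overlap(acts_a: torch.Tensor, acts_b: torch.Tensor, k: int) -> float:
    """Fraction of top-k most-activated neurons shared by two models."""
    top_a = set(torch.topk(acts_a, k).indices.tolist())
    top_b = set(torch.topk(acts_b, k).indices.tolist())
    return len(top_a & top_b) / k

# Stand-in statistics: mean |activation| per MLP neuron in one layer,
# collected over multi-step reasoning prompts (values here are random).
hidden = 11008  # an illustrative MLP width
llm_acts = torch.rand(hidden)
lvlm_acts = 0.6 * llm_acts + 0.4 * torch.rand(hidden)  # correlated, as the finding suggests

print(f"top-1% neuron overlap: {top_k_overlap(llm_acts, lvlm_acts, hidden // 100):.2f}")
```

A high overlap on a real model pair is what would motivate transplanting: if the same neurons drive reasoning in both, skills tuned in the cheap text-only model have a plausible home in the vision-language one.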
LLM-powered pentesting agents fail not because of model limitations, but because they can't estimate task difficulty, leading to wasted effort and premature context exhaustion.
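To make that failure mode concrete, here is a toy simulation, not the paper's method: the task names, step counts, and 8000-token limit are all invented. It contrasts an agent that grinds with a fixed budget against one that checks a difficulty estimate before committing context.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    true_steps: int        # steps actually required (hidden from the agent)
    tokens_per_step: int   # context consumed per agent step

CONTEXT_LIMIT = 8000  # invented context budget for the sketch

def run(task: Task, budget_steps: int) -> str:
    """Naive agent loop: no difficulty estimate, just grind until something breaks."""
    used = 0
    for step in range(1, budget_steps + 1):
        used += task.tokens_per_step
        if used > CONTEXT_LIMIT:
            return f"{task.name}: context exhausted at step {step}"
        if step >= task.true_steps:
            return f"{task.name}: solved in {step} steps"
    return f"{task.name}: gave up after {budget_steps} steps"

def run_with_estimate(task: Task, estimated_steps: int) -> str:
    """Difficulty-aware variant: check the projected cost before committing."""
    if estimated_steps * task.tokens_per_step > CONTEXT_LIMIT:
        return f"{task.name}: projected cost exceeds context budget; re-plan or skip"
    return run(task, budget_steps=estimated_steps)

easy = Task("default-credentials", true_steps=5, tokens_per_step=300)
hard = Task("privilege-escalation", true_steps=40, tokens_per_step=300)

print(run(hard, budget_steps=100))   # burns the whole context window mid-task
print(run_with_estimate(easy, 6))    # solved in 5 steps
print(run_with_estimate(hard, 40))   # declined before wasting effort
```

The difficulty-aware variant never does better pentesting; it simply stops spending context on tasks it was never going to finish, which is the gap the summary points at.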