Nanyang Technological University
Automatically exploiting web application vulnerabilities is now significantly more feasible, with AutoEG achieving over 80% success where previous methods struggled to reach 33%.
Humans are surprisingly vulnerable to deception by compromised LLM agents, with fewer than 10% detecting attacks even in high-stakes scenarios.
LLMs and LVLMs share more than half their top-activated neurons during multi-step inference, opening a surprisingly cheap path to boost vision-language reasoning by transplanting skills from text-only models.
LLM-powered pentesting agents fail not because of model limitations, but because they cannot estimate task difficulty, leading to wasted effort and premature context exhaustion.