Huazhong University of Science and Technology
R2IF achieves up to 34.62% higher function-calling accuracy, bridging the gap between reasoning and decision-making in LLMs.
RAG systems are far more vulnerable to subtle, word-level poisoning attacks than previously thought; such attacks achieve 90% success rates even against black-box models.
Merging seemingly safe LLMs can create dangerously misaligned models, as shown by a new "TrojanMerge" attack that exploits latent vulnerabilities.