Explicitly training LLMs to verbalize confidence scores and signal reasoning-time uncertainty unlocks better calibration, failure detection, and control in retrieval-augmented generation.
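A minimal sketch of the idea in a RAG loop: ask the model to append a verbalized confidence score to its answer, parse it, and use it for failure detection. The `generate` stub, prompt format, and 0.5 threshold below are illustrative assumptions, not the paper's exact setup.

```python
import re

def generate(prompt: str) -> str:
    # Hypothetical stand-in for any LLM call; returns a canned reply here
    # so the sketch runs end to end without an API key.
    return "Answer: Paris is the capital of France.\nConfidence: 0.92"

def answer_with_confidence(question: str, retrieved_context: str) -> tuple[str, float]:
    """Prompt the model to verbalize a confidence score alongside its answer."""
    prompt = (
        "Using only the context below, answer the question, then state your "
        "confidence that the answer is correct as a number between 0 and 1.\n\n"
        f"Context:\n{retrieved_context}\n\nQuestion: {question}\n\n"
        "Format:\nAnswer: <answer>\nConfidence: <0.00-1.00>"
    )
    reply = generate(prompt)
    match = re.search(r"Confidence:\s*([01](?:\.\d+)?)", reply)
    confidence = float(match.group(1)) if match else 0.0
    answer = reply.split("Confidence:")[0].replace("Answer:", "").strip()
    return answer, confidence

answer, conf = answer_with_confidence(
    "What is the capital of France?",
    "France is a country in Western Europe. Its capital is Paris.",
)
if conf < 0.5:
    # Low verbalized confidence: flag as a likely failure, re-retrieve, or abstain.
    print("Low confidence, escalating:", answer, conf)
else:
    print(answer, conf)
```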
Adversarial fine-tuning can now bypass Constitutional AI safety measures with almost no performance penalty, enabling models to provide detailed instructions on dangerous topics like CBRN warfare.
Achieve real-time cattle mounting pose estimation in complex environments with FSMC-Pose, a framework that outperforms existing methods while drastically reducing computational costs.
Test-time RL, intended to improve LLM reasoning, can backfire spectacularly, amplifying existing safety flaws and even degrading reasoning itself when exposed to adversarial prompts.
Forget wrestling with complex time series queries: Sonar-TS lets you ask in plain English and then uses generated Python code to pinpoint the answer.
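A rough sketch of the pattern the teaser describes: an LLM turns a plain-English question into pandas code, which is then executed over the series to produce the answer. The `llm_generate_code` helper and its hard-coded snippet are placeholders so the example runs; the actual Sonar-TS pipeline may differ.

```python
import pandas as pd

def llm_generate_code(question: str, columns: list[str]) -> str:
    # Hypothetical LLM call; returns a fixed snippet here so the sketch runs.
    # In the described pipeline, the model writes this analysis code itself.
    return "result = df.loc[df['temperature'].idxmax(), 'timestamp']"

def answer_time_series_question(df: pd.DataFrame, question: str):
    """Translate a plain-English question into Python and run it over the data."""
    code = llm_generate_code(question, list(df.columns))
    scope = {"df": df, "pd": pd}
    exec(code, scope)  # execute the generated analysis code
    return scope["result"]

df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=4, freq="h"),
    "temperature": [21.5, 23.1, 26.4, 24.0],
})
print(answer_time_series_question(df, "When was the temperature highest?"))
```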
Time series generation can be dramatically improved by explicitly conditioning on semantic understanding, as demonstrated by a novel vision-centric framework.