Stop wasting bandwidth on irrelevant tokens: Fed-FSTQ uses Fisher information to selectively quantize and transmit only the most important tokens, slashing communication costs in federated LLM fine-tuning by up to 46x.
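The core selection idea can be sketched generically: score each token by an empirical Fisher approximation (squared gradient magnitude of the loss with respect to that token's embedding) and transmit only the top-scoring fraction. This is a minimal illustration, not Fed-FSTQ's actual algorithm; the function name, the keep ratio, and the squared-gradient approximation are all assumptions for the sketch.

```python
import numpy as np

def select_tokens_by_fisher(grads, keep_ratio=0.1):
    """Rank tokens by an empirical Fisher score (sum of squared
    gradient entries per token embedding) and keep only the top
    fraction for transmission.

    grads: (num_tokens, dim) array of per-token gradients.
    Returns indices of the kept tokens, highest score first.
    """
    # Empirical Fisher score per token: squared gradient magnitude.
    fisher = np.sum(grads ** 2, axis=1)
    k = max(1, int(len(fisher) * keep_ratio))
    # Indices of the k highest-scoring tokens, descending by score.
    return np.argsort(fisher)[::-1][:k]

# Toy example: 100 tokens with 8-dim gradients; three tokens dominate.
rng = np.random.default_rng(0)
grads = rng.normal(scale=0.01, size=(100, 8))
grads[[3, 42, 77]] *= 100  # make three tokens clearly important
kept = select_tokens_by_fisher(grads, keep_ratio=0.03)
print(sorted(kept.tolist()))  # the three dominant token indices
```

Transmitting 3% of tokens (optionally at lower precision) instead of all 100 is where the bandwidth savings in such schemes come from; the real method would also handle quantization and federated aggregation.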
By injecting basic physics, this method achieves up to 9% accuracy gains in human activity recognition, proving that inductive biases still matter for real-world sensor data.
A novel framework keeps automated freight negotiations competitive and aligned with market pricing dynamics, achieving high agreement rates without sacrificing decision transparency.
LLM judges can be subtly manipulated by framing the consequences of their decisions, leading to biased evaluations even when the content being judged remains constant.
Forget hand-crafted templates: DUET learns to generate user and item profiles jointly, boosting recommendation accuracy by better aligning textual representations.
Stop obsessing over state prediction accuracy in text-based world models: aligning them with *behavior* yields better long-term planning and evaluation.
Achieve robust, high-fidelity personalization with a reduced token budget by dynamically evolving memory and self-learning with context distillation.