Search papers, labs, and topics across Lattice.
Even the best LLMs struggle to effectively discover, refine, and reuse skills over a lifetime of experience, suggesting current benchmarks significantly overestimate real-world agentic capabilities.
LLM datasets aren't independent islands: tracing their lineage reveals hidden redundancy, benchmark contamination, and opportunities for more diverse training data.
RL fine-tuning of hybrid autoregressive-diffusion models can be made significantly more stable and effective by averaging gradients across multiple diffusion trajectories and filtering autoregressive tokens for consistency.
Current LLM efficiency metrics fail to capture the true cost of tool use as measured by wall-clock latency, but a new hardware-aware metric closes the gap.
Feel what the robot feels: a new glove lets human operators experience high-resolution tactile feedback during dexterous teleoperation, dramatically improving performance in contact-rich tasks.
Unlock 5x faster autoregressive image generation by using a single entropy signal to simultaneously optimize draft prediction and enable single-step diffusion decoding.
Ditch the manual feature engineering: KMLP's hybrid KAN-gMLP architecture automatically learns complex feature transformations and interactions, outperforming GBDTs on web-scale tabular data.
Achieve 90%+ accuracy in reusing enterprise workflows by breaking down platform-specific DSLs into standardized, modular components that can be intelligently reassembled.