Forget simple scaling laws: the compute-optimal number of parallel rollouts in LLM RL plateaus, revealing distinct mechanisms for easy vs. hard problems.
Achieve scalable multi-task robot learning without catastrophic forgetting by isolating task-specific knowledge in LoRA experts routed by a dynamic inference engine.
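A minimal sketch of the idea behind this teaser, assuming a PyTorch-style setup: a frozen base linear layer plus a bank of task-specific LoRA adapters combined by a learned router. The class names (`LoRAExpert`, `MoLoRALinear`) and the soft gating are illustrative choices, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class LoRAExpert(nn.Module):
    """One low-rank adapter: output delta = (alpha/r) * x @ A^T @ B^T."""
    def __init__(self, d_in, d_out, rank=8, alpha=16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero-init: adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return (x @ self.A.T) @ self.B.T * self.scale


class MoLoRALinear(nn.Module):
    """Frozen base linear layer plus a routed bank of LoRA experts.

    Task-specific knowledge lives in the experts; the shared base weights
    never update, so learning a new task cannot overwrite old ones.
    """
    def __init__(self, d_in, d_out, num_experts=4, rank=8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # shared knowledge stays frozen
        self.base.bias.requires_grad_(False)
        self.experts = nn.ModuleList(
            LoRAExpert(d_in, d_out, rank) for _ in range(num_experts)
        )
        self.router = nn.Linear(d_in, num_experts)  # scores experts from the input

    def forward(self, x):
        # Soft routing over experts (a hard top-1 variant is also common).
        weights = torch.softmax(self.router(x.mean(dim=1)), dim=-1)   # (batch, E)
        deltas = torch.stack([e(x) for e in self.experts], dim=1)     # (batch, E, seq, d_out)
        return self.base(x) + (deltas * weights[:, :, None, None]).sum(dim=1)


if __name__ == "__main__":
    layer = MoLoRALinear(d_in=64, d_out=64, num_experts=4)
    tokens = torch.randn(2, 10, 64)   # (batch, seq, dim)
    print(layer(tokens).shape)        # torch.Size([2, 10, 64])
```

Freezing the base and training only the routed adapters is what gives the forgetting isolation: each task's gradients touch only its own expert (and the router), leaving the others untouched.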
CNNs' superior generalization isn't just a matter of expressivity: locality and weight sharing reshape the implicit regularization of gradient descent, letting CNNs bypass the curse of dimensionality on hard distributions where fully connected networks fail.