10 papers from Meta AI (FAIR) on Training Efficiency & Optimization
Ditch the data augmentation and decoders: R2-Dreamer's Barlow Twins-inspired objective delivers faster, more versatile MBRL, especially when spotting the small stuff matters.
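The teaser doesn't give R2-Dreamer's exact loss, but the Barlow Twins objective it is described as inspired by is standard: push the cross-correlation matrix of two embedding views toward the identity. A minimal numpy sketch of that baseline objective (not the paper's own code):

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins redundancy-reduction loss.

    z_a, z_b: (batch, dim) embeddings of two views of the same batch.
    Pulls the cross-correlation matrix toward identity: diagonal
    terms -> 1 (invariance), off-diagonal -> 0 (decorrelation).
    """
    n, _ = z_a.shape
    # Normalize each embedding dimension to zero mean, unit variance.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    c = z_a.T @ z_b / n                          # (dim, dim) cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(256, 8))
# Identical views are perfectly correlated dimension-wise, so the
# diagonal term vanishes and only a small off-diagonal residue remains.
print(barlow_twins_loss(z, z))
```

No pixel reconstruction decoder appears anywhere in this objective, which is the point of the "ditch the decoders" claim.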
Pixel-space diffusion models get a serious boost: V-Co reveals a simple recipe for visual co-denoising that outperforms existing methods on ImageNet-256 with fewer training epochs.
Straightening latent space trajectories with a simple curvature regularizer dramatically improves the stability and success of gradient-based planning in world models.
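The summary doesn't specify the regularizer's exact form; the simplest curvature penalty on a discrete trajectory is the squared second finite difference, which is zero exactly when consecutive latents lie on an evenly spaced straight line. A hedged sketch of that assumed form:

```python
import numpy as np

def curvature_penalty(z):
    """Sum of squared second differences along a latent trajectory.

    z: (T, dim) sequence of latent states z_1..z_T. The discrete
    "acceleration" z[t+1] - 2*z[t] + z[t-1] vanishes iff three
    consecutive states are collinear and evenly spaced, so minimizing
    this term straightens the trajectory.
    """
    accel = z[2:] - 2 * z[1:-1] + z[:-2]        # (T-2, dim)
    return float((accel ** 2).sum())

# A straight-line trajectory incurs ~zero penalty...
t = np.linspace(0.0, 1.0, 10)[:, None]
straight = t * np.array([[1.0, 2.0]])
print(curvature_penalty(straight))              # ~0 up to float error

# ...while bending one point out of line does not.
bent = straight.copy()
bent[5] += 0.5
print(curvature_penalty(bent) > 0.0)
```

Straighter trajectories give gradient-based planners smoother loss landscapes to descend, which is the claimed stability win.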
Forget imbalanced LoRA usage: ReMix leverages reinforcement learning to route effectively among LoRAs, boosting performance in parameter-efficient fine-tuning.
SSL models can be backdoored with nearly undetectable triggers that still achieve high attack success rates, even against common defenses.
Instruction-following in large reasoning models gets a serious upgrade with RAIN-Merging, a gradient-free technique that merges in instruction-tuned capabilities without wrecking the model's ability to think step-by-step.
LLMs can achieve the same accuracy with 16x less data by constraining their hidden-state trajectories to follow geodesics on a semantic manifold.
Randomly throwing distortions at your watermarking model during training? Meta-FC shows that meta-learning them is the better way, boosting robustness by up to 4.71% against combined distortions.
A deterministic client selection method leveraging gradient updates can boost federated learning accuracy by nearly 50% in heterogeneous environments.
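The blurb doesn't say which gradient statistic drives the selection; a common deterministic stand-in is ranking clients by the L2 norm of their latest pseudo-gradient and taking the top-k. A hypothetical sketch under that assumption (the helper name `select_clients` is mine, not the paper's):

```python
import numpy as np

def select_clients(updates, k):
    """Deterministically pick the k clients whose local update
    (pseudo-gradient) has the largest L2 norm, a simple proxy for
    informativeness. Ties broken by client id, so repeated runs on
    the same updates always pick the same clients.

    updates: dict client_id -> flat update vector (np.ndarray).
    Returns a sorted list of k client ids.
    """
    norms = {cid: float(np.linalg.norm(u)) for cid, u in updates.items()}
    ranked = sorted(norms, key=lambda cid: (-norms[cid], cid))
    return sorted(ranked[:k])

rng = np.random.default_rng(1)
# Heterogeneous clients: larger-scale updates for higher client ids.
updates = {i: rng.normal(scale=1.0 + i, size=100) for i in range(5)}
print(select_clients(updates, 2))
```

Determinism matters in heterogeneous settings because uniform random sampling can repeatedly miss the few clients whose data shifts the global model most.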
Achieve up to 39.6% FLOP reduction in LLM inference without retraining or architectural changes using QuickSilver's dynamic token-level optimizations.
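QuickSilver's mechanism isn't spelled out here; one retraining-free, token-level trick in this family is halting tokens whose hidden state barely changes between layers and skipping their remaining compute. A speculative sketch that only estimates the savings of such a halting rule from recorded activations (the function and threshold are illustrative, not the paper's):

```python
import numpy as np

def halted_flop_fraction(hidden_states, tau=0.05):
    """Estimate the fraction of per-token layer computations a
    token-halting rule would skip: a token halts once its relative
    hidden-state change between consecutive layers falls below tau.
    (QuickSilver's actual criterion may differ; this is an offline
    estimate from full-run activations.)

    hidden_states: (layers+1, tokens, dim) activations after each layer.
    """
    n_layers, n_tokens = hidden_states.shape[0] - 1, hidden_states.shape[1]
    halted = np.zeros(n_tokens, dtype=bool)
    skipped = 0
    for l in range(1, n_layers + 1):
        skipped += int(halted.sum())             # halted tokens skip this layer
        delta = np.linalg.norm(hidden_states[l] - hidden_states[l - 1], axis=-1)
        scale = np.linalg.norm(hidden_states[l - 1], axis=-1) + 1e-8
        halted |= (delta / scale) < tau          # converged tokens halt from here on

    return skipped / (n_layers * n_tokens)

rng = np.random.default_rng(2)
# Toy run: tokens 0-2 stop changing after layer 2; tokens 3-5 keep moving.
states = [rng.normal(size=(6, 16))]
for l in range(8):
    nxt = states[-1].copy()
    moving = slice(3, 6) if l >= 2 else slice(0, 6)
    nxt[moving] += rng.normal(scale=0.5, size=nxt[moving].shape)
    states.append(nxt)
print(halted_flop_fraction(np.stack(states)))
```

In the toy run, the three converged tokens skip the last five of eight layers, so 15 of 48 token-layer computations drop out with no retraining and no architecture change.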