Forget contrastive learning: LLM2Vec-Gen learns text embeddings by representing the *response* an LLM would generate, unlocking safety and reasoning abilities for embedding tasks.
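A minimal sketch of that idea, under assumptions not confirmed by the paper: "gpt2" stands in for the actual model, and mean-pooling over the generated tokens is my guess at the pooling. The point is the mechanism, embedding a text via the hidden states of the response the model would generate for it.

```python
# Sketch: embed a text by the hidden states of the *generated response*,
# not the input tokens. Model and pooling are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generative_embed(text: str, max_new_tokens: int = 32) -> torch.Tensor:
    prompt = tok(text, return_tensors="pt")
    n_prompt = prompt["input_ids"].shape[1]
    with torch.no_grad():
        # Let the model produce its hypothetical response.
        out = model.generate(**prompt, max_new_tokens=max_new_tokens,
                             do_sample=False, pad_token_id=tok.eos_token_id)
        # Re-encode prompt + response to recover hidden states.
        hidden = model(out, output_hidden_states=True).hidden_states[-1]
    # Mean-pool over the generated (response) positions only.
    return hidden[0, n_prompt:].mean(dim=0)

print(generative_embed("Is this review positive?").shape)  # torch.Size([768])
```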
Democratized LLM pre-training is now a reality: Covenant-72B proves you can train a competitive 72B model with untrusted peers over the internet, opening the door to broader participation and reduced costs.
Takeuchi's Information Criterion (TIC) accurately predicts DNN generalization gaps, but only when models operate near the Neural Tangent Kernel (NTK) regime.
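For reference, TIC in its textbook form (my notation; the paper's estimator for deep networks may differ): AIC's parameter-count penalty becomes a trace term, which collapses back to the parameter count when the model is well-specified.

```latex
% Textbook TIC (my summary, not taken from the paper):
\mathrm{TIC} = -2\,\ell(\hat\theta) + 2\,\operatorname{tr}\bigl(\hat{J}\hat{I}^{-1}\bigr),
\qquad
\hat{I} = -\frac{1}{n}\sum_{i=1}^{n}\nabla^2_\theta \log p(x_i \mid \hat\theta),
\qquad
\hat{J} = \frac{1}{n}\sum_{i=1}^{n}\nabla_\theta \log p(x_i \mid \hat\theta)\,
          \nabla_\theta \log p(x_i \mid \hat\theta)^{\top}.
```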
Forget full fine-tuning: this dynamic routing strategy lets you adapt dense retrieval to new domains while using just 2% of the parameters.
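A hedged sketch of what a routed, parameter-efficient adaptation layer can look like (the class, sizes, and gating below are illustrative assumptions, not the paper's architecture): the retriever backbone stays frozen, and only small bottleneck adapters plus a tiny routing head are trained.

```python
# Sketch: frozen dense retriever + small per-domain adapters + a lightweight
# router that mixes them per input. All names and sizes are assumptions.
import torch
import torch.nn as nn

class RoutedAdapters(nn.Module):
    def __init__(self, dim: int = 768, n_adapters: int = 4, bottleneck: int = 16):
        super().__init__()
        self.adapters = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(),
                          nn.Linear(bottleneck, dim))
            for _ in range(n_adapters)
        )
        self.router = nn.Linear(dim, n_adapters)  # tiny routing head

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (batch, dim)
        gates = self.router(h).softmax(dim=-1)            # (batch, n_adapters)
        outs = torch.stack([a(h) for a in self.adapters], dim=1)
        return h + (gates.unsqueeze(-1) * outs).sum(dim=1)  # residual update
```

With a typical 768-dim encoder, four such adapters plus the router add roughly 100K trainable parameters, on the order of 0.1% of a BERT-base backbone, comfortably inside the 2% budget the teaser cites.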
Forget retraining from scratch: port fine-tuning updates between LLM versions and get up to a 47% performance boost on tasks like instruction following, even surpassing fully fine-tuned models.
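One plausible mechanism is a weight-delta transplant in the spirit of task vectors (my illustration; the paper's actual porting method may be more involved): replay what fine-tuning changed in the old version on top of the new base weights, which requires the two versions to share an architecture and parameter names.

```python
# Sketch of delta porting (an assumed mechanism, not confirmed as the paper's):
# apply the old fine-tuning delta (old_ft - old_base) on top of the new base.
import torch

def port_finetune(old_base: dict, old_ft: dict, new_base: dict) -> dict:
    ported = {}
    for name, w_new in new_base.items():
        delta = old_ft[name] - old_base[name]  # what fine-tuning changed
        ported[name] = w_new + delta           # replay that change on the new version
    return ported
```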
Ditch the greedy heuristics: GFlowNets can learn to sample decision trees from the Bayesian posterior, outperforming standard methods and scaling consistently with ensemble size.
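For orientation, here is the standard GFlowNet training objective, the trajectory-balance loss, as a generic sketch (tree-building specifics such as the state encoding and split-action space are omitted; this is not the paper's code):

```python
# Trajectory-balance (TB) loss for a GFlowNet. A trajectory here would be a
# sequence of node-split actions that builds one decision tree; its terminal
# reward is the tree's (unnormalized) Bayesian posterior probability.
import torch

log_Z = torch.zeros(1, requires_grad=True)  # learned log partition function

def tb_loss(log_pf: torch.Tensor, log_pb: torch.Tensor,
            log_reward: torch.Tensor) -> torch.Tensor:
    """log_pf/log_pb: per-step forward/backward log-probs along one trajectory;
    log_reward: log posterior (up to a constant) of the finished tree."""
    return (log_Z + log_pf.sum() - log_reward - log_pb.sum()) ** 2
```

Minimizing this loss over sampled trajectories drives the sampler toward drawing trees with probability proportional to their posterior, which is what lets ensembles keep improving as they grow.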
Self-supervised learning beats supervised learning for ECG interpretation when labeled data is scarce, unlocking more robust and generalizable AI-driven cardiac diagnostics.
Forget Bayesian bells and whistles: in-context learning shines brightest with simple point estimators, outperforming complex posterior approximations in most scenarios.