Search papers, labs, and topics across Lattice.
8 papers from Google DeepMind on Architecture Design (Transformers, SSMs, MoE)
Refining generative models with discriminator guidance provably improves generalization, offering a theoretical justification for discriminator-guided sampling in score-based diffusion models.
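To make the mechanism concrete, here is a minimal sketch of discriminator guidance in the score-based setting, assuming the standard formulation where the gradient of the discriminator's log-density-ratio is added to the learned score at sampling time; the toy networks and dimensions are hypothetical, not the paper's:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for a trained score network and discriminator (2-D toy data).
score_model = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 2))
discriminator = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 1))  # outputs a logit

def guided_score(x: torch.Tensor) -> torch.Tensor:
    """s_guided(x) = s_theta(x) + grad_x log(d(x) / (1 - d(x))).
    For a sigmoid discriminator, log(d / (1 - d)) is the pre-sigmoid logit,
    so the correction is the gradient of the logit w.r.t. the input."""
    x = x.detach().requires_grad_(True)
    logit_sum = discriminator(x).sum()  # sum over the batch; per-sample grads stay independent
    (grad,) = torch.autograd.grad(logit_sum, x)
    return score_model(x).detach() + grad

# One Langevin-style sampling step using the guided score.
x = torch.randn(16, 2)
step = 0.01
x = x + step * guided_score(x) + (2 * step) ** 0.5 * torch.randn_like(x)
```

The correction term nudges samples toward regions the discriminator judges closer to the data distribution, which is the quantity the paper's generalization analysis concerns.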
Mixture-of-Experts models might be hiding more of their reasoning than we thought, as revealed by a newly quantified "opaque serial depth" metric.
Ditch the slow sampling dance of diffusion models: Variational Flow Maps let you condition image generation in a single pass by learning the right initial noise.
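A loose sketch of the single-pass idea, under the assumption that the method amortizes the choice of initial noise with a condition encoder; `flow_map`, `cond_to_noise`, and all dimensions are hypothetical stand-ins rather than the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: a pretrained one-step flow map (noise -> image) kept frozen,
# and a small encoder that learns which initial noise yields an image matching a condition.
flow_map = nn.Sequential(nn.Linear(64, 256), nn.SiLU(), nn.Linear(256, 64))
cond_to_noise = nn.Sequential(nn.Linear(10, 256), nn.SiLU(), nn.Linear(256, 64))

for p in flow_map.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(cond_to_noise.parameters(), lr=1e-3)

def training_step(cond: torch.Tensor, target: torch.Tensor) -> float:
    """Train the encoder so flow_map(cond_to_noise(c)) matches the target paired
    with condition c; generation then needs only a single forward pass."""
    z = cond_to_noise(cond)   # learned initial noise for this condition
    x = flow_map(z)           # one deterministic pass, no iterative sampling loop
    loss = F.mse_loss(x, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

loss = training_step(torch.randn(8, 10), torch.randn(8, 64))

# Inference: a single forward pass per sample.
with torch.no_grad():
    sample = flow_map(cond_to_noise(torch.randn(1, 10)))
```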
DINOv2's impressive unimodal performance doesn't translate to cross-modal understanding, but a simple training tweak can align embeddings across RGB, depth, and segmentation without sacrificing feature quality.
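One plausible shape for such a training tweak, sketched under assumptions: a cosine alignment term pulls paired cross-modal embeddings together while an anchoring term keeps the RGB branch close to the frozen original features. The `alignment_loss` function and its weighting are hypothetical, not the paper's objective:

```python
import torch
import torch.nn.functional as F

def alignment_loss(z_rgb, z_depth, z_seg, z_frozen, w_preserve=1.0):
    """Pull embeddings of the same scene across modalities together (cosine distance),
    while anchoring the RGB branch to the frozen original features so the
    unimodal feature quality is not sacrificed."""
    z_rgb, z_depth, z_seg = (F.normalize(z, dim=-1) for z in (z_rgb, z_depth, z_seg))
    align = (1 - (z_rgb * z_depth).sum(-1)).mean() \
          + (1 - (z_rgb * z_seg).sum(-1)).mean()
    preserve = F.mse_loss(z_rgb, F.normalize(z_frozen, dim=-1))
    return align + w_preserve * preserve

# Toy usage with random stand-in embeddings (batch of 8, dim 384).
z = {k: torch.randn(8, 384) for k in ("rgb", "depth", "seg", "frozen")}
loss = alignment_loss(z["rgb"], z["depth"], z["seg"], z["frozen"])
```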
Physics-informed neural operators can now generalize to unseen physical regimes and extrapolate in time by explicitly encoding the underlying operator structure and decomposing PDEs into interpretable components.
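As an illustration of the decomposition idea only, not the paper's operator parameterization: a PDE residual written as a sum of interpretable terms (advection and diffusion) with learned coefficients that can be recombined in new regimes. All names and the finite-difference setup are assumptions:

```python
import torch
import torch.nn as nn

class DecomposedPDEResidual(nn.Module):
    """Hypothetical sketch: express u_t as a sum of interpretable operator
    components with learned coefficients, so each physical term is exposed
    rather than absorbed into a black-box operator."""
    def __init__(self):
        super().__init__()
        self.advect = nn.Parameter(torch.tensor(1.0))   # advection speed c
        self.diffuse = nn.Parameter(torch.tensor(0.1))  # diffusivity nu

    def forward(self, u: torch.Tensor, dx: float, dt: float) -> torch.Tensor:
        # Finite-difference estimates on a 1-D space-time grid u[t, x].
        u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
        u_x = (u[:-1, 2:] - u[:-1, :-2]) / (2 * dx)
        u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2
        # Residual of u_t + c * u_x - nu * u_xx = 0; training drives it toward zero.
        return u_t + self.advect * u_x - self.diffuse * u_xx

model = DecomposedPDEResidual()
u = torch.randn(32, 64)  # toy space-time field, 32 time steps x 64 grid points
loss = model(u, dx=0.1, dt=0.01).pow(2).mean()
```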
AlphaFold didn't just solve protein structure prediction; it unlocked a new era of biological discovery, making nearly the entire proteome structurally accessible.
AlphaFold3 doesn't just predict single protein structures; it tackles the messy reality of biomolecular interactions, from protein-protein binding to protein-nucleic acid complexes, opening new doors for drug discovery and genomic research.
AlphaFold's ability to predict protein structures is revolutionizing structural biology, opening research avenues that were previously intractable.