Fundamental AI research lab pursuing artificial general intelligence to benefit humanity. Known for AlphaGo, AlphaFold, and Gemini.
Training LLMs on objectives that pit the final output against the reasoning process can significantly degrade the monitorability of Chain-of-Thought, making oversight more difficult.
Refining generative models with discriminator guidance provably improves generalization, offering a theoretical justification for techniques like score-based diffusion.
Forget finetuning: DynaEdit unlocks complex video edits like action modification and object insertion, all without training, using clever manipulation of pretrained text-to-video models.
Forget fine-tuning: surprisingly, single neuron activations in VLMs can be directly probed to create classifiers that outperform the full model, with 5x speedups.
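The paper's actual probing procedure isn't described here, so the following is only a toy sketch of the general idea: treat one neuron's activation as a scalar score and pick a threshold that separates two classes. The activations are synthetic (no specific VLM or layer is assumed from the source).

```python
# Hypothetical sketch: a single neuron's activation as a binary classifier.
# Real usage would read `acts` from a VLM hidden layer; here the data is
# synthetic, with class 1 activating the neuron more strongly on average.
import numpy as np

rng = np.random.default_rng(0)
acts = np.concatenate([rng.normal(0.0, 1.0, 500),   # class 0 activations
                       rng.normal(2.0, 1.0, 500)])  # class 1 activations
labels = np.concatenate([np.zeros(500), np.ones(500)])

# Sweep candidate thresholds and keep the one with the best accuracy.
thresholds = np.linspace(acts.min(), acts.max(), 200)
accs = [((acts > t) == labels).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]
print(f"best threshold {best_t:.2f}, accuracy {max(accs):.2f}")
```

Because the "classifier" is just a comparison against one scalar, inference cost is negligible next to a full forward pass, which is the intuition behind the claimed speedups.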
Forget black-box policies: CSRO uses LLMs to generate human-readable code policies in multi-agent RL, achieving performance competitive with traditional methods.
LLMs get *more* honest when they have time to reason, defying human tendencies and revealing surprising insights about their internal representational geometry.
Mixture-of-Experts models might be hiding more of their reasoning than we thought, thanks to a newly quantified "opaque serial depth" metric.
LLM-powered diagnostic AI is ready for prime time: a real-world clinical trial shows it's safe, patients love it, and doctors find it useful.
Ditch the slow sampling dance of diffusion models: Variational Flow Maps let you condition image generation in a single pass by learning the right initial noise.
Achieve significantly better code generation and mathematical problem solving from diffusion language models with a simple, training-free sampling tweak that encourages diversity.
LLMs can drastically accelerate robot planning in cluttered environments by injecting common-sense priors about object locations and co-occurrences, slashing planning time by up to 72% in real-world experiments.
Cracking DNNs is now easier than ever: Kraken extracts parameters from GPU Tensor Cores via near-field EM attacks and even sniffs LLM weights from a meter away.
LLMs are becoming "epistemic agents" that shape our knowledge environment, so we need a new framework for evaluating and governing them based on trustworthiness, not just performance.
Frontier models are surprisingly good at taking actions at extremely low, calibrated probabilities, raising concerns about their ability to evade pre-deployment safety evaluations designed to catch malicious behavior.
DINOv2's impressive unimodal performance doesn't translate to cross-modal understanding, but a simple training tweak can align embeddings across RGB, depth, and segmentation without sacrificing feature quality.
Forget slow prefix trees: STATIC unlocks massive speedups (up to 1033x) for constrained LLM decoding on GPUs/TPUs by vectorizing trie traversals into sparse matrix operations.
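STATIC's actual implementation isn't shown in the source; the sketch below only illustrates the underlying trick of encoding a prefix trie's transitions as sparse matrices, so that advancing the decoder's state under a token becomes one sparse matrix-vector product instead of pointer chasing. The toy trie, node numbering, and helper are all hypothetical.

```python
# Hedged sketch (not STATIC itself): trie traversal as sparse mat-vec.
import numpy as np
from scipy.sparse import csr_matrix

# Toy trie accepting token sequences [0, 1] and [0, 2, 3] over a 4-token vocab.
# Trie nodes: 0 = root, 1 = "0", 2 = "0 1", 3 = "0 2", 4 = "0 2 3".
num_nodes = 5
edges = {0: [(0, 1)], 1: [(1, 2)], 2: [(1, 3)], 3: [(3, 4)]}  # token -> (src, dst)

def transition_matrix(edge_list):
    """One sparse 0/1 matrix per token: column src, row dst."""
    src = [s for s, _ in edge_list]
    dst = [d for _, d in edge_list]
    return csr_matrix((np.ones(len(edge_list)), (dst, src)),
                      shape=(num_nodes, num_nodes))

T = {tok: transition_matrix(e) for tok, e in edges.items()}
empty = csr_matrix((num_nodes, num_nodes))  # tokens with no trie edge

state = np.zeros(num_nodes)
state[0] = 1.0                    # start at the root
for tok in (0, 2, 3):             # decode the sequence "0 2 3"
    state = T.get(tok, empty) @ state
print("reached accepting node 4:", bool(state[4]))
```

On a GPU/TPU the same idea batches across many decoding states at once, replacing irregular trie walks with dense, hardware-friendly sparse linear algebra.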
Unlock asymptotically normal and semiparametrically efficient estimators in adaptive data collection by using a novel target-specific condition called "directional stability," which is weaker than previous target-agnostic conditions.
Existing deforestation monitoring maps misclassify smallholder agroforestry as "forest," risking unfair penalties under regulations like the EUDR.
LLMs can be taught to proactively seek and effectively use conversational feedback, generalizing across tasks and improving their ability to handle ambiguity.
LLMs can autonomously discover novel MARL algorithms that outperform hand-designed baselines, revealing untapped potential in automated algorithm design.
Forget scaling laws: teaching LLMs to learn from feedback lets smaller models rival giants and generalize to new tasks.
Language models organize concepts like months and years into surprisingly clean geometric structures because of hidden symmetries in language statistics, even when those statistics are heavily perturbed.
Robots can now learn long-horizon tasks far more effectively by distilling complex histories into a few key visual moments, outperforming standard imitation learning by 70% on real-world tasks.
Boost macrocycle generation rates from 1% to 99% by guiding diffusion models with persistent homology, opening new avenues for drug discovery.
Forget scaling laws: the secret to AGI might be teaching AI to argue with itself through high-quality conversational scaffolds.
Forget rigid heuristics: this adaptive AI delegation framework dynamically adjusts task allocation, authority transfer, and trust-building, promising more robust agentic systems.
People prefer AI advisors, but AI delegates that autonomously negotiate on their behalf actually lead to higher individual gains and improve overall group welfare in multi-party bargaining games.
A redesigned AlphaFold Protein Structure Database offers improved usability and expanded structural coverage, making high-accuracy protein structure predictions even more accessible.
Reasoning-based safety guardrails, once thought to be a strong defense against jailbreaks, crumble with just a few strategically placed tokens.
AlphaFold didn't just solve protein structure prediction; it unlocked a new era of biological discovery, making nearly the entire genome structurally accessible.
Ditch the high-fidelity simulator: IRL-VLA uses a lightweight reward world model trained with inverse reinforcement learning to enable efficient and effective closed-loop RL training for autonomous driving.
DPO's success isn't just clever engineering: it's deeply rooted in human choice theory, unlocking a surprisingly flexible framework for preference optimization and justifying many DPO extensions.
Ditch reward models: Nash Mirror Prox achieves fast, stable convergence to a Nash equilibrium directly from human preferences, sidestepping the limitations of traditional RLHF.
LVLMs struggle to navigate cultural nuances, with even the best models achieving only 62% awareness and 38% compliance on a new benchmark spanning 16 countries.
AlphaFold3 doesn't just predict single protein structures; it tackles the messy reality of biomolecular interactions, from protein-protein binding to protein-nucleic acid complexes, opening new doors for drug discovery and genomic research.