15 papers from MIT CSAIL on Scientific Discovery & Drug Design
Hyperpolarizing the nuclear spin bath surrounding a molecular qubit can significantly extend its coherence time, offering a new knob for quantum control.
Rényi divergence may be the missing key to understanding thermal equilibrium in quantum systems, revealing a novel constraint on wavefunction ensembles.
Heuristic maritime routes lead to extreme fuel waste in nearly 5% of voyages, but this RL approach cuts that risk by almost 10x.
Neural networks can accurately predict polymer free energies, even when traditional methods like the Bennett acceptance ratio (BAR) fail due to poor phase-space overlap.
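The phase-space-overlap failure mode is easy to reproduce in a toy setting. The sketch below is illustrative only (not the paper's polymer systems) and uses one-sided free-energy perturbation, a simpler cousin of BAR, between two unit-width harmonic wells: the true free-energy difference is exactly 0, yet the estimate degrades badly once the wells stop overlapping.

```python
import numpy as np

def fep_estimate(rng, d, n=5000):
    """One-sided free-energy perturbation (Zwanzig) estimate of dF
    between two unit-width harmonic wells centered at 0 and d.
    The exact answer is 0 (identical well shapes)."""
    x = rng.standard_normal(n)          # samples from state 0
    dU = 0.5 * ((x - d) ** 2 - x ** 2)  # U1 - U0 on those samples
    return -np.log(np.mean(np.exp(-dU)))

rng = np.random.default_rng(0)
close = fep_estimate(rng, d=1.0)  # wells overlap well -> near 0
far = fep_estimate(rng, d=8.0)    # almost no overlap -> large bias
print(close, far)
```

With good overlap the estimate lands near the true value of 0; with the wells far apart, no sampled configuration from state 0 is typical of state 1, and the estimator is dominated by rare tails it never sees.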
Watch out, human scientists: autonomous agents are now coordinating distributed scientific discovery through emergent artifact exchange, potentially accelerating research across domains.
Lattice QCD calculations just got a whole lot faster: normalizing flows slash variance by up to 60x in key observables.
LLMs struggle to reliably predict numerical materials properties, even after fine-tuning, and their performance fluctuates wildly over time, casting doubt on their use in high-stakes scientific applications.
Nightly hospital planning is now possible on a laptop: this work distills slow, complex agent-based epidemic models into fast, trustworthy surrogate models using neural ODEs, achieving a 10,000x speedup.
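A minimal sketch of the surrogate idea, with invented dimensions and untrained (random) weights: a tiny MLP defines the vector field dx/dt, and a cheap Euler rollout stands in for the slow agent-based simulation. In the actual work the network would be fit so its rollouts match the agent-based model's trajectories; `f_theta` here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MLP vector field f_theta: R^3 -> R^3. The
# weights below are random stand-ins; a trained surrogate would
# learn them from agent-based model trajectories.
W1 = 0.1 * rng.standard_normal((16, 3))
W2 = 0.1 * rng.standard_normal((3, 16))

def f_theta(x):
    return W2 @ np.tanh(W1 @ x)

def euler_rollout(x0, dt=0.1, steps=200):
    """Integrate dx/dt = f_theta(x) with forward Euler."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f_theta(xs[-1]))
    return np.stack(xs)

traj = euler_rollout(np.array([0.99, 0.01, 0.0]))  # e.g. (S, I, R)
print(traj.shape)
```

The speedup comes from replacing per-agent stochastic updates with a handful of matrix multiplies per time step; a fixed-step integrator like this runs comfortably on a laptop.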
E(3)-equivariant networks just got a whole lot faster: a new algorithm cuts the complexity of Clebsch-Gordan Tensor Products from $O(L^6)$ to $O(L^4\log^2 L)$ without sacrificing completeness.
By aligning a generative flow network with physics-based stability proxies via reinforcement learning, PackFlow drastically improves the efficiency of molecular crystal structure prediction, offering a practical route to circumvent the costly relax-and-rank bottleneck.
Randomly initialized encoders can match state-of-the-art pre-trained models on many ECG representation learning tasks, suggesting current benchmarks are misleading.
Ditch the geometry-to-property map: this work uses the external potential as the primary input for machine learning models, unlocking a scalable and equivariant approach to predicting electronic structure.
Ditch the equivariant constraints: canonicalization lets you train simpler, faster diffusion models that actually *outperform* equivariant architectures for symmetric generative tasks like 3D molecule design.
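Canonicalization can be sketched in a few lines: map every input to a canonical pose before a plain, non-equivariant model sees it, so rotated copies of the same molecule collapse to identical inputs. The PCA-frame convention below is one common choice, not necessarily the paper's; it handles rotations only, and degenerate eigenvalues or reflections would need extra conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

def canonicalize(pts):
    """Center a 3-D point cloud, rotate it into its PCA frame, and
    fix each axis's sign so the point with the largest |coordinate|
    is positive. Rotated copies then map to the same array."""
    x = pts - pts.mean(axis=0)
    _, vecs = np.linalg.eigh(x.T @ x)
    y = x @ vecs[:, ::-1]              # largest-variance axis first
    for j in range(3):
        if y[np.argmax(np.abs(y[:, j])), j] < 0:
            y[:, j] = -y[:, j]
    return y

pts = rng.standard_normal((12, 3)) * np.array([3.0, 2.0, 1.0])
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                 # make Q a proper rotation
rotated = pts @ Q.T
print(np.allclose(canonicalize(pts), canonicalize(rotated), atol=1e-6))
```

Because symmetry is handled once in preprocessing, the downstream diffusion model needs no equivariant layers, which is where the speed and simplicity gains come from.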
Injecting spatial transcriptomics data into existing pathology foundation models unlocks significant performance gains across a range of downstream tasks, including molecular status prediction and gene-to-image retrieval.
Self-supervised learning beats supervised learning for ECG interpretation when labeled data is scarce, unlocking more robust and generalizable AI-driven cardiac diagnostics.