AI for scientific research, protein structure prediction, drug discovery, materials science, and climate modeling.
Forget static retrieval: FlowPIE's flow-guided literature exploration and evolutionary idea generation unlock more novel, feasible, and diverse scientific ideas.
Automating detector design with AI can dramatically speed up scientific discovery by intelligently exploring complex parameter spaces.
Multimodal deep learning models for cancer prognosis may not be synergizing information across modalities as much as we think; better performance seems to come from simply adding complementary signals.
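The distinction above, additive complementary signals versus genuine cross-modal synergy, can be illustrated with a minimal late-fusion sketch. All names and numbers here are hypothetical placeholders, not the study's models or data:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-modality risk logits for 4 patients, 2 classes (low/high risk).
logits_histology = np.array([[2.0, -1.0], [0.5, 0.5], [-1.0, 2.0], [1.0, 0.0]])
logits_genomics  = np.array([[1.0,  0.0], [0.2, 0.8], [-0.5, 1.5], [0.0, 1.5]])

# "Additive" fusion: no cross-modal interaction terms, just summed evidence.
fused = softmax(logits_histology + logits_genomics)
preds = fused.argmax(axis=1)
```

Summing logits before the softmax is equivalent to multiplying per-modality likelihood ratios (naive-Bayes style), which is exactly the kind of interaction-free combination the finding suggests drives most of the observed gains.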
Physiological synchrony in medical teams doesn't always signal success; it's the *context* of shared discovery versus shared uncertainty that determines whether it predicts effective collaboration.
Adding MRI data to histopathology and gene expression modestly improves glioma survival prediction, but only when combined effectively in a trimodal deep learning model.
Quantum biosensors are evolving through four distinct generations, each leveraging progressively more exotic quantum phenomena to transcend classical limitations and enable adaptive inference directly within the quantum domain.
LLMs can semi-autonomously solve complex, unpublished problems in mathematical physics, even discovering unique structures in integrable models.
Automating scientific discovery is now more accessible: Owl-AuraID navigates proprietary GUIs to control diverse precision instruments, freeing researchers from tedious manual operation.
End-to-end retrosynthetic planning, previously reliant on fragmented prediction-search hybrids, now achieves state-of-the-art performance thanks to a unified, reasoning-driven generative framework.
Multi-agent systems for automated research face a fundamental trade-off: parallel exploration offers speed and stability, while expert teams unlock deeper reasoning at the cost of increased fragility.
Uncover hidden conceptual gaps in your AI: "concept frustration" reveals when your model's internal reasoning clashes with human understanding, paving the way for safer, more interpretable AI.
AI can now design better AI: ASI-Evolve discovers SOTA architectures, datasets, and RL algorithms, outperforming human-designed baselines by significant margins.
Forget attention: Metriplectic dynamics offer a surprisingly effective and parameter-efficient route to neural computation, outperforming standard architectures in several domains.
Overcoming the challenge of limited and inconsistent imaging criteria for perineural invasion (PNI) diagnosis, NeoNet achieves state-of-the-art prediction accuracy by generating synthetic training data with a 3D Latent Diffusion Model.
An RL-aligned LLM can outperform expert toxicologists in identifying ingested substances from heterogeneous clinical data, suggesting a path to AI-assisted decision-making in high-stakes medical environments.
Smart hospital research is converging towards integrated ecosystems where AI, trust, and infrastructure reinforce each other, but real-world implementation and governance are lagging.
Predicting adolescent substance use initiation gets a boost from NeuroBRIDGE, a new method that dynamically models how brain networks change over time and in relation to behavior.
By directly optimizing clinical dose-volume histogram (DVH) metrics, this method produces 3D dose predictions that more closely align with clinical treatment planning criteria than traditional voxel-wise approaches.
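The usual obstacle to optimizing DVH metrics directly is that a dose-volume histogram involves a hard threshold, which has no gradient. A common workaround is to soften the step with a sigmoid; here is a rough sketch of that idea (the steepness `beta` and the thresholds are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def soft_dvh(dose, thresholds, beta=8.0):
    """Fraction of voxels receiving >= each dose threshold, with a sigmoid
    replacing the hard step so gradients can flow back to the predicted dose."""
    d = np.asarray(dose).reshape(-1, 1)          # (voxels, 1)
    t = np.asarray(thresholds).reshape(1, -1)    # (1, thresholds)
    return (1.0 / (1.0 + np.exp(-beta * (d - t)))).mean(axis=0)

def dvh_loss(pred_dose, ref_dose, thresholds, beta=8.0):
    """Squared mismatch between predicted and reference soft DVH curves."""
    return np.mean((soft_dvh(pred_dose, thresholds, beta)
                    - soft_dvh(ref_dose, thresholds, beta)) ** 2)
```

Because the soft DVH is differentiable, a term like `dvh_loss` can be added to a voxel-wise loss so the network is penalized for clinically meaningful histogram deviations, not just per-voxel error.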
Radio astronomy-aware self-supervised pre-training beats out-of-the-box Vision Transformers for transfer learning on radio astronomy morphology tasks.
Turn semantic segmentation into hyperspectral unmixing with a surprisingly simple pipeline that leverages polyhedral-cone partitioning, outperforming existing deep and non-deep methods.
Expert ordinal comparisons reveal that fusing vision and language in wound representation learning boosts agreement by 5.6% over unimodal foundation models for a rare genetic skin disorder.
Achieve HPC acceleration by emulating FP64 operations with INT8 precision on GPUs, proving that you can boost performance *and* accuracy.
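One generic way such emulation works (an Ozaki-style splitting sketch, not necessarily the paper's implementation): slice each FP64 matrix into narrow integer "digits", perform exact low-precision integer matmuls on the slices, the part a GPU would run on its INT8 units, and recombine the partial products with power-of-two scales:

```python
import numpy as np

def split_int8(A, num_slices=4, bits=6):
    """Decompose A ~= scale * sum_i S_i / base**(i+1), with int8 digit matrices S_i."""
    base = 2 ** bits
    scale = max(np.abs(A).max(), 1e-300)   # guard against an all-zero matrix
    R = A / scale                          # entries now in [-1, 1]
    slices = []
    for _ in range(num_slices):
        S = np.round(R * base).astype(np.int8)
        slices.append(S)
        R = R * base - S                   # residual carried into the next digit
    return slices, scale, base

def matmul_emulated(A, B, num_slices=4):
    """FP64-like matmul from exact integer matmuls over the digit slices."""
    SA, sa, base = split_int8(A, num_slices)
    SB, sb, _ = split_int8(B, num_slices)
    C = np.zeros((A.shape[0], B.shape[1]))
    for i, Si in enumerate(SA):
        for j, Sj in enumerate(SB):
            # each slice-pair product is exact integer arithmetic
            C += (Si.astype(np.int64) @ Sj.astype(np.int64)) / float(base) ** (i + j + 2)
    return sa * sb * C
```

With 6-bit digits each slice-pair product is exact, so accuracy is limited only by the truncated residual; adding slices buys precision at the cost of more (cheap) integer matmuls.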
Datacenter simulations can now combine multiple independent models to better predict performance and climate impact, addressing limitations of single-model approaches.
Unlock 600,000x faster TSV design by replacing computationally expensive full-wave simulations with physics-informed graph neural networks.
A new TDDFT method using a non-Aufbau reference state sidesteps common failures of DFT for near-degenerate electronic structures, but at the cost of new numerical instabilities.
An AI agent can now autonomously design functional antibodies with nanomolar affinities from text prompts, achieving a 67% success rate in lab validation and accelerating expert workflows by 56x.
Forget the cold start: training transformers for protein structure prediction peaks at intermediate temperatures, revealing a sweet spot in the loss landscape.
Negative electronic friction, often attributed to simple Joule heating, actually masks significant non-Markovian dynamics that can destabilize standard models.
Extracting band-edge eigenstates becomes surprisingly simple and efficient, needing only a quasi-purified density matrix and a handful of matrix multiplications.
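For intuition, here is a textbook sketch of the underlying idea: build a projector onto the occupied subspace by McWeeny purification (nothing but repeated matrix multiplications), then do a Rayleigh-Ritz step inside that subspace to read off the highest occupied, band-edge state. The paper's quasi-purification is presumably more economical than this generic illustration:

```python
import numpy as np

def purify(H, mu, n_iter=40):
    """Projector onto eigenstates of H below the chemical potential mu,
    via McWeeny iterations P <- 3P^2 - 2P^3."""
    n = H.shape[0]
    # Gershgorin bounds on the spectrum give a safe initial rescaling
    r = np.abs(H).sum(axis=1) - np.abs(np.diag(H))
    emin, emax = (np.diag(H) - r).min(), (np.diag(H) + r).max()
    lam = max(emax - mu, mu - emin)
    P = 0.5 * np.eye(n) + (mu * np.eye(n) - H) / (2 * lam)
    for _ in range(n_iter):
        P2 = P @ P
        P = 3 * P2 - 2 * (P2 @ P)
    return P

def highest_occupied(H, mu, rng=np.random.default_rng(0)):
    P = purify(H, mu)
    n_occ = int(round(np.trace(P)))
    # purified random block spans the occupied subspace; orthonormalize it
    Q, _ = np.linalg.qr(P @ rng.standard_normal((H.shape[0], n_occ)))
    w, V = np.linalg.eigh(Q.T @ H @ Q)
    return w[-1], Q @ V[:, -1]             # band-edge energy and state

# Demo: a random Hamiltonian with a spectral gap around mu = 0.
rng = np.random.default_rng(1)
e = np.concatenate([rng.uniform(-2, -1, 10), rng.uniform(1, 2, 10)])
U, _ = np.linalg.qr(rng.standard_normal((20, 20)))
H = U @ np.diag(e) @ U.T
homo, psi = highest_occupied(H, mu=0.0)
```

The expensive part is a handful of dense matmuls per purification step, which is exactly the kind of operation that maps well onto GPUs.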
Unlock the full picture of complex molecular dynamics with a new technique that extrapolates complete 2D spectra from short-lived data, slashing experimental costs and noise.
Default mixing rules in implicit solvation models can lead to unphysical ion accumulation at electrochemical interfaces, but can be fixed with better parameterization.
Pentacene dimers could unlock more sensitive nanoscale NMR and AC magnetic field detection, outperforming traditional pentacene monomers in detecting small nuclear spin ensembles.
Forget perturbation theory: this dissipaton-based approach efficiently models heat transport in locally probed systems with strong many-body effects.
Twisted bilayer graphene enables the creation of parallel and configurable logic gates by exploiting layer-selective hydrogenation and proton transport.
Calculating excited states of molecules with thousands of atoms, previously a computational bottleneck, is now practical on a single GPU thanks to a new implementation of TDDFT-risp.
Representing chemical reactions through electron redistribution, rather than geometry, unlocks a transferable and physically grounded approach to reaction sampling.
Forget "spread" voicings: skewness is the key to clarity in piano chords, offering a fresh perspective on psychoacoustic principles.
Current vibration-based alert systems often misestimate alert durations because of poor damping estimates; a new information-theoretic method captures those durations accurately.
Existing object detection models stumble when faced with the morphological diversity of cells in high-resolution, whole-brain microscopy data, revealing a critical gap in their generalization ability.
Brain-inspired AI gets a boost: a new graph neural network fuses structural and functional brain data to predict cognitive function better than ever before.
Physics-informed neural networks can now accurately identify impact events on aerospace composites, even with noisy or incomplete data, opening the door to real-time structural health monitoring.
Anticancer drugs, whether organic or inorganic, can now be understood through a single unified representation, unlocking knowledge transfer between previously siloed chemical domains.
Ditching Markovian constraints unlocks surprisingly better discrete generation, with simplex denoising outperforming diffusion and flow-matching on graphs.
Ventricular dysfunction can be surprisingly well-predicted in a zero-shot manner from ECG diagnostic probabilities, suggesting a structured encoding of cardiac function within these representations.
A clustering-based feature selection algorithm rivals the accuracy of slower, more complex methods, offering a sweet spot of speed and performance for high-dimensional biological data.
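The general recipe behind such methods, group redundant features, keep one representative per group, can be sketched in a few lines. This is a greedy correlation-clustering illustration with a variance-based representative, not the paper's algorithm:

```python
import numpy as np

def cluster_select(X, corr_threshold=0.9):
    """Greedy feature clustering: group features whose absolute Pearson
    correlation with a cluster's seed exceeds the threshold, then keep
    the highest-variance feature of each cluster."""
    C = np.abs(np.corrcoef(X, rowvar=False))
    unassigned = list(range(X.shape[1]))
    selected = []
    while unassigned:
        seed = unassigned[0]
        cluster = [f for f in unassigned if C[seed, f] >= corr_threshold]
        selected.append(max(cluster, key=lambda f: X[:, f].var()))
        unassigned = [f for f in unassigned if f not in cluster]
    return sorted(selected)
```

A single pass over the correlation matrix makes this roughly quadratic in the number of features, which is the speed advantage over wrapper methods that retrain a model per candidate subset.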
Achieve state-of-the-art brain tumor classification accuracy by intelligently weighting the decisions of diverse deep learning and traditional machine learning models.
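The generic pattern, weight each model's class probabilities by its validation performance before voting, can be sketched as follows; the models, accuracies, and probabilities are placeholders, not the paper's ensemble:

```python
import numpy as np

def weighted_vote(probas, val_accuracies):
    """Combine per-model class probabilities, weighting each model by its
    normalized validation accuracy. probas has shape (models, samples, classes)."""
    w = np.asarray(val_accuracies, dtype=float)
    w = w / w.sum()
    fused = np.einsum("m,mnc->nc", w, np.asarray(probas))
    return fused.argmax(axis=1)

# Demo: two strong models outvote one weak, disagreeing model.
probas = np.array([
    [[0.6, 0.3, 0.1], [0.1, 0.8, 0.1]],   # deep model, val acc 0.95
    [[0.7, 0.2, 0.1], [0.2, 0.7, 0.1]],   # second deep model, val acc 0.93
    [[0.1, 0.1, 0.8], [0.8, 0.1, 0.1]],   # weak classical model, val acc 0.55
])
preds = weighted_vote(probas, [0.95, 0.93, 0.55])
```

More elaborate schemes learn the weights (stacking) rather than fixing them from validation accuracy, but the combination step looks the same.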
Neural networks can turbocharge classical optimization for high-dimensional matrix estimation, achieving faster convergence without sacrificing theoretical guarantees.
Classical models of hydrogen storage in geological formations fall apart when applied to diverse samples, but this physics-informed neural network nails it, achieving R² = 0.9544.
Precisely control and augment 3D biomedical shapes with a new stochastic interpolant framework, enabling better uncertainty quantification in simulations.
Imperfect quantum data won't stop machine learning models: this work shows how unsupervised domain adaptation on classical shadows can bridge the gap.