100 papers published across 8 labs.
Forget static retrieval: FlowPIE's flow-guided literature exploration and evolutionary idea generation unlock more novel, feasible, and diverse scientific ideas.
Automating detector design with AI can dramatically speed up scientific discovery by intelligently exploring complex parameter spaces.
Multimodal deep learning models for cancer prognosis may not be synergizing information across modalities as much as we think; better performance seems to come from simply adding complementary signals.
Physiological synchrony in medical teams doesn't always signal success; it's the *context* of shared discovery versus shared uncertainty that determines whether it predicts effective collaboration.
Adding MRI data to histopathology and gene expression modestly improves glioma survival prediction, but only when combined effectively in a trimodal deep learning model.
Quantum biosensors are evolving through four distinct generations, each leveraging progressively more exotic quantum phenomena to transcend classical limitations and enable adaptive inference directly within the quantum domain.
LLMs can semi-autonomously solve complex, unpublished problems in mathematical physics, even discovering unique structures in integrable models.
Automating scientific discovery is now more accessible: Owl-AuraID navigates proprietary GUIs to control diverse precision instruments, freeing researchers from tedious manual operation.
End-to-end retrosynthetic planning, previously reliant on fragmented prediction-search hybrids, now achieves state-of-the-art performance thanks to a unified, reasoning-driven generative framework.
Multi-agent systems for automated research face a fundamental trade-off: parallel exploration offers speed and stability, while expert teams unlock deeper reasoning at the cost of increased fragility.
Uncover hidden conceptual gaps in your AI: "concept frustration" reveals when your model's internal reasoning clashes with human understanding, paving the way for safer, more interpretable AI.
AI can now design better AI: ASI-Evolve discovers SOTA architectures, datasets, and RL algorithms, outperforming human-designed baselines by significant margins.
Forget attention: Metriplectic dynamics offer a surprisingly effective and parameter-efficient route to neural computation, outperforming standard architectures in several domains.
Overcoming the challenge of limited and inconsistent imaging criteria for perineural invasion (PNI) diagnosis, NeoNet achieves state-of-the-art prediction accuracy by generating synthetic training data with a 3D Latent Diffusion Model.
An RL-aligned LLM can outperform expert toxicologists in identifying ingested substances from heterogeneous clinical data, suggesting a path to AI-assisted decision-making in high-stakes medical environments.
Smart hospital research is converging towards integrated ecosystems where AI, trust, and infrastructure reinforce each other, but real-world implementation and governance are lagging.
Predicting adolescent substance use initiation gets a boost from NeuroBRIDGE, a new method that dynamically models brain network changes over time and with behavior.
By directly optimizing clinical dose-volume histogram (DVH) metrics, this method produces 3D dose predictions that more closely align with clinical treatment planning criteria than traditional voxel-wise approaches.
Radio astronomy-aware self-supervised pre-training beats out-of-the-box Vision Transformers for transfer learning on radio astronomy morphology tasks.
Turn semantic segmentation into hyperspectral unmixing with a surprisingly simple pipeline that leverages polyhedral-cone partitioning, outperforming existing deep and non-deep methods.
Expert ordinal comparisons reveal that fusing vision and language in wound representation learning boosts agreement by 5.6% over unimodal foundation models for a rare genetic skin disorder.
Achieve HPC acceleration by emulating FP64 operations with INT8 precision on GPUs, proving that you can boost performance *and* accuracy.
Datacenter simulations can now combine multiple independent models to better predict performance and climate impact, addressing limitations of single-model approaches.
Unlock 600,000x faster TSV design by replacing computationally expensive full-wave simulations with physics-informed graph neural networks.
A new TDDFT method using a non-Aufbau reference state sidesteps common failures of DFT for near-degenerate electronic structures, but at the cost of new numerical instabilities.
An AI agent can now autonomously design functional antibodies with nanomolar affinities from text prompts, achieving a 67% success rate in lab validation and accelerating expert workflows by 56x.
Forget the cold start: training transformers for protein structure prediction peaks at intermediate temperatures, revealing a sweet spot in the loss landscape.
Negative electronic friction, often attributed to simple Joule heating, actually masks significant non-Markovian dynamics that can destabilize standard models.
Extracting band-edge eigenstates becomes surprisingly simple and efficient, needing only a quasi-purified density matrix and a handful of matrix multiplications.
Unlock the full picture of complex molecular dynamics with a new technique that extrapolates complete 2D spectra from short-lived data, slashing experimental costs and noise.
Default mixing rules in implicit solvation models can lead to unphysical ion accumulation at electrochemical interfaces, but can be fixed with better parameterization.
Pentacene dimers could unlock more sensitive nanoscale NMR and AC magnetic field detection, outperforming traditional pentacene monomers in detecting small nuclear spin ensembles.
Forget perturbation theory: this dissipaton-based approach efficiently models heat transport in locally probed systems with strong many-body effects.
Twisted bilayer graphene enables the creation of parallel and configurable logic gates by exploiting layer-selective hydrogenation and proton transport.
Calculating excited states of molecules with thousands of atoms, previously a computational bottleneck, is now practical on a single GPU thanks to a new implementation of TDDFT-risp.
Representing chemical reactions through electron redistribution, rather than geometry, unlocks a transferable and physically grounded approach to reaction sampling.
Forget "spread" voicings: skewness is the key to clarity in piano chords, offering a fresh perspective on psychoacoustic principles.
Current vibration-based alert systems often misestimate alert durations because of poor damping estimates; a new information-theoretic method captures those durations accurately.
Existing object detection models stumble when faced with the morphological diversity of cells in high-resolution, whole-brain microscopy data, revealing a critical gap in their generalization ability.
Brain-inspired AI gets a boost: a new graph neural network fuses structural and functional brain data to predict cognitive function better than ever before.
Physics-informed neural networks can now accurately identify impact events on aerospace composites, even with noisy or incomplete data, opening the door to real-time structural health monitoring.
Anticancer drugs, whether organic or inorganic, can now be understood through a single unified representation, unlocking knowledge transfer between previously siloed chemical domains.
Ditching Markovian constraints unlocks surprisingly better discrete generation, with simplex denoising outperforming diffusion and flow-matching on graphs.
Ventricular dysfunction can be surprisingly well-predicted in a zero-shot manner from ECG diagnostic probabilities, suggesting a structured encoding of cardiac function within these representations.
A clustering-based feature selection algorithm rivals the accuracy of slower, more complex methods, offering a sweet spot of speed and performance for high-dimensional biological data.
Achieve state-of-the-art brain tumor classification accuracy by intelligently weighting the decisions of diverse deep learning and traditional machine learning models.
Neural networks can turbocharge classical optimization for high-dimensional matrix estimation, achieving faster convergence without sacrificing theoretical guarantees.
Classical models of hydrogen storage in geological formations fall apart when applied to diverse samples, but this physics-informed neural network nails it, achieving R² = 0.9544.
Precisely control and augment 3D biomedical shapes with a new stochastic interpolant framework, enabling better uncertainty quantification in simulations.
Imperfect quantum data won't stop machine learning models: this work shows how unsupervised domain adaptation on classical shadows can bridge the gap.
A simple DBSCAN model running on real-time bridge sensor data can outperform other ML models in detecting anomalies, suggesting a practical path to preventing catastrophic failures.
Differentiable Power-Flow unlocks scalable, gradient-based optimization for power grid management, outperforming traditional methods and enabling new applications like real-time contingency analysis.
Unlock hidden predictive power: NLP on unstructured clinical notes beats traditional EHR data for early disease prediction.
Finally, a framework that unifies dynamic graph models, topological learning, and multimodal fusion to decompose health risk into interpretable components.
RL agents can learn to control complex fluid dynamics 40% faster by pretraining on Koopman-based surrogate models and iteratively refining them with policy-aware data.
Reconstructing high-resolution turbulence from extremely coarse data is now possible with SIMR-NO, which not only beats existing methods in accuracy but also respects the underlying physics.
By baking in tumor physics, PhysNet doesn't just beat standard deep learning models on medical image classification, it also learns interpretable biophysical parameters of tumor growth.
Random weight initialization is a major source of instability in deep learning, especially for rare classes, but this work shows how to eliminate it entirely with structured orthogonal initialization.
Learning thermomechanical material properties just got easier: this new framework guarantees thermodynamic consistency without needing entropy data or enforcing complex convexity constraints.
Drifting models leapfrog diffusion models in MRI-to-CT synthesis, achieving state-of-the-art image quality with millisecond-level inference speeds.
Reinforcement learning turns a quantum sensor's biggest limitation—nonlinear Zeeman dynamics—into its greatest strength, boosting magnetic sensitivity beyond the standard quantum limit.
LLMs and Stable Diffusion aren't just cool tools; they're the twin pillars of a new era where AI agents can conduct "deep research" rivaling top human scientists.
LLMs can now construct high-fidelity, disease-specific knowledge graphs from full-text biomedical literature, unlocking evidence-aware reasoning and hypothesis generation.
Ditch the batteries: buildings can slash emissions by intelligently using their own thermal mass to store excess solar energy.
PReD leaps ahead by creating the first foundation model to close the loop on perception, recognition, and decision-making for electromagnetic signals.
Quantum circuits can match classical MLPs on EEG classification tasks while using 50x fewer parameters, thanks to differentiable quantum architecture search that automatically optimizes circuit topology.
Scientific figure QA models are often fooled by the answer choices themselves, but a simple decoding strategy that contrasts image-grounded scores with text-only scores can significantly improve accuracy.
Autonomous architecture search for molecular transformers is surprisingly fruitless: you're better off just tuning learning rates.
End-to-end recognition of complex chemical structures from documents is now possible, thanks to a new model and dataset that leapfrog existing methods.
Curriculum learning can significantly boost myocardial scar segmentation accuracy, especially in challenging cases with minimal or diffuse scarring, by strategically guiding the model from easy to hard examples.
A new synthetic hyperspectral dataset lets researchers train and benchmark vegetation trait retrieval models with paired hyperspectral imagery and ground truth, all while controlling for environmental variability.
DINOv3, a vision foundation model trained on general images, surprisingly excels at dental image analysis, especially for the notoriously difficult task of intraoral image understanding.
SAM's impressive zero-shot segmentation abilities don't directly translate to medical imaging, but this new fine-tuning approach unlocks its potential for accurate nuclei instance segmentation with minimal added parameters.
Forget CPUs and GPUs: MCPT-Solver uses spintronics and Bayesian inference to create a hardware random number generator that dramatically accelerates Monte Carlo particle transport simulations.
Achieve kilometer-scale regional weather forecasts that significantly outperform operational NWP and AI baselines by intelligently coupling global and regional models.
Medical AI Scientist leapfrogs generic LLMs in clinical research, generating higher-quality, evidence-backed hypotheses and manuscripts that rival top-tier medical publications.
Unlock the secrets of scientific writing: EarlySciRev reveals how scientists *really* revise their work, offering a goldmine of early-stage revisions previously hidden in LaTeX comments.
LLMs can sift through routine clinical notes to detect epilepsy with high accuracy, even boosting expert neurologists' diagnostic performance by over 10%.
Training on grounded reasoning traces doesn't just improve hypothesis generation—it makes models 100% structurally compliant and boosts spark cosine similarity by nearly 3x.
LLMs can now diagnose spleen-stomach disorders by integrating both traditional Chinese and Western medicine, achieving state-of-the-art results.
Quantum computers could crack some cryptocurrency security in minutes, not years, unless the community acts now.
Standardized testbeds and effectiveness metrics could accelerate the development and validation of AI-assisted robotic thrombectomy, potentially revolutionizing stroke treatment accessibility.
Achieve strong, controllable privacy in federated biomedical AI without sacrificing performance, thanks to a lightweight key-embedded implicit neural representation.
Analog circuit designs can now be generated 1000x faster thanks to a novel approach that combines learned generators with SPICE-based ranking and a fix for REINFORCE's cross-topology reward distribution mismatch.
Orthogonal geometries, long thought optimal for spin-orbit coupling in donor-acceptor dyads, can actually *minimize* it, flipping our understanding of triplet excited state production.
Helium rain in gas giants may be less frequent than we thought, thanks to new simulations that significantly lower the estimated hydrogen-helium demixing temperatures.
Fermi's Golden Rule, a cornerstone of chemical physics, still holds surprises and warrants careful consideration of its assumptions for accurate application.
Achieve markedly improved excited-state calculations on NISQ devices by using a shallow QPE routine to filter spin contamination, avoiding costly explicit evaluations.
Unlock deeper insights into atomistic processes with this practical guide to path integral methods, distilling years of expert knowledge into a single resource.
Calculating double electron attachment energies for heavy elements just got a whole lot cheaper, thanks to a new relativistic EOM-CC method that slashes computational costs without sacrificing accuracy.
Quantum entanglement, not classical thermodynamics, decisively regulates organic crystal assembly, opening a new path to engineer organic semiconductor polymorphism.
Claims of quantum advantage in electronic structure calculations must now contend with DMRG benchmarks achieving CAS(89,102) on Fe$_5$S$_{12}$H$_4^{5-}$, pushing the boundaries of classical computation.
LLMs can diagnose better by explicitly reasoning about "what if" scenarios, just like doctors do in training.
LLMs may ace the test, but they're failing to think like us: a new benchmark reveals their struggle to simulate individual cognitive consistency across research domains and time.
Optimizing OpenFOAM with GPU ports and selective-memory techniques slashes energy consumption by 28% and iteration time by 72% compared to purely hardware-focused approaches.
Propagating mega-constellations is now 1500x faster thanks to a JAX-based SGP4 reimplementation, making large-scale collision avoidance tractable.
Hyperpolarizing the nuclear spin bath surrounding a molecular qubit can significantly extend its coherence time, offering a new knob for quantum control.
Water's density anomaly isn't just about mixed structures; it's a delicate balance of short-range order and intermediate-range collapse, revealed by a new machine-learned potential.
LLMs can't even reproduce published physics papers end-to-end, with the best model scoring only 34% on a new benchmark designed for this purpose.
Patient-level pretraining in computational pathology unlocks surprisingly transferable embeddings, outperforming slide-centric models while being 14x smaller than GigaPath.