DendroNNs offer a 4x energy efficiency boost over existing neuromorphic hardware by mimicking dendritic computation and training via a gradient-free rewiring mechanism.
Get 6x more annotation-efficient RLHF alignment for your LLM with a new active learning pipeline that annotates only the most informative response pairs.
Clever reticle placement on wafer-scale systems can boost throughput by 2.5x and slash latency by over a third, offering a hardware-level speedup for LLM training.
Finally, a virtual try-on system that actually works: Gaussian Wardrobe lets you swap clothes between 3D avatars with high-fidelity garment dynamics by learning shape-agnostic garment layers.
LLMs can follow detailed code refactoring instructions, but still fall short of mimicking human refactoring choices in real-world codebases, highlighting a critical gap in their ability to autonomously improve code quality.
Ditch the memory banks and prototype comparisons: this method learns a compact, parametric model of normal image embeddings with an autoregressive CNN, slashing inference time and memory use in unsupervised anomaly detection.
Reasoning can boost LLM opinion alignment, but it's not a silver bullet for removing bias in political digital twins.
Forget computationally expensive fluid dynamics: this work shows that a simple, stateless model, carefully calibrated to real-world data, can create surprisingly effective digital twins for soft underwater robots.
LLM benchmark translations can be dramatically improved by test-time compute scaling, revealing a surprisingly cheap way to get more reliable multilingual evaluations.
Forget solo Git tutorials: GitAcademy's split-screen view, mirroring a partner's actions in real time, makes learning collaborative workflows feel less like a lonely commit and more like a team sport.
Unlock domain generalization with unlabeled data by exploiting the structure of anti-causal relationships, where outcomes cause covariates.
Existing deforestation monitoring maps misclassify smallholder agroforestry as "forest," risking unfair penalties under regulations like the EUDR.
E-graphs, typically confined to isolated optimization steps, can now persist as a first-class citizen within the compiler's intermediate representation, unlocking broader and more flexible program optimization.
Achieve state-of-the-art depth completion by adapting 3D foundation models at test time with minimal parameter updates, outperforming task-specific encoders that often overfit.
Context files like AGENTS.md, intended to guide coding agents, often *hurt* performance and increase costs, challenging the common practice of using them.
An interpretable deep learning model, ECG-XPLAIM, rivals ResNet in arrhythmia detection sensitivity while offering crucial insights into its decision-making process via Grad-CAM.
Multimodal LLMs often perform worse with more modalities because they struggle to jointly recognize and reason across modalities, a problem solvable with simple prompting strategies.
A new deep learning model slashes the error rate for BMI estimation from smartphone photos, opening the door to more accessible and convenient health assessments.
Automating CAD design from text prompts is now feasible, with visual feedback loops boosting performance, especially for multimodal LLMs.
Achieve faster, near-optimal path planning in complex 3D environments by combining any-angle search with multi-resolution grids, outperforming even sampling-based methods.
LLMs that excel at math don't necessarily make good math tutors, revealing a surprising trade-off between subject matter expertise and pedagogical skill.