Search papers, labs, and topics across Lattice.
44 papers published across 3 labs.
Forget static retrieval: FlowPIE's flow-guided literature exploration and evolutionary idea generation unlock more novel, feasible, and diverse scientific ideas.
Stop rewarding all LLM-generated candidates equally: ShapE-GRPO uses Shapley values to fairly distribute credit within sets, leading to better training and faster convergence.
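The set-level credit assignment idea behind ShapE-GRPO can be illustrated with exact Shapley values. This is a minimal sketch, not the paper's training pipeline: `reward` is a hypothetical set-level scoring function, and the toy "fact coverage" reward is invented here to show why duplicated candidates earn less marginal credit than unique ones.

```python
from itertools import combinations
from math import factorial

def shapley_credit(candidates, reward):
    """Exact Shapley values for per-candidate credit, given a set-level
    reward(frozenset) -> float. Exponential in len(candidates), so this
    brute-force form is only practical for small candidate sets."""
    n = len(candidates)
    phi = {c: 0.0 for c in candidates}
    for c in candidates:
        others = [x for x in candidates if x != c]
        for k in range(n):
            # Weight of coalitions of size k that exclude c.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                s = frozenset(subset)
                phi[c] += weight * (reward(s | {c}) - reward(s))
    return phi

# Toy reward: the set's value is how many distinct "facts" it covers,
# so a candidate that only repeats others gets little credit.
coverage = {"a": {1, 2}, "b": {2}, "c": {3}}
reward = lambda s: len(set().union(*(coverage[x] for x in s))) if s else 0
credit = shapley_credit(list(coverage), reward)
```

Shapley values sum to the full set's reward (efficiency), which is what makes the credit split "fair" in the game-theoretic sense.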
LLMs can steer narrative extraction to align with user-specified perspectives, achieving a 9.9% improvement in agenda alignment over keyword matching without sacrificing narrative coherence.
A greedy heuristic can nearly match the performance of a computationally expensive integer program for blood donor scheduling, offering a practical solution for real-world blood donation centers.
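One plausible shape for such a greedy heuristic, sketched here purely for illustration (the paper's actual objective and constraints are not specified in this blurb): let high-value donors pick their best slot first, subject to per-slot capacity. All names and the scoring scheme are assumptions.

```python
def greedy_schedule(donors, slots, capacity):
    """Greedily assign donors to appointment slots.
    donors: list of (donor_id, {slot: score}) preference maps.
    Each slot holds at most `capacity` donors; highest-scoring
    donors choose first. Returns {donor_id: slot}."""
    load = {s: 0 for s in slots}
    assignment = {}
    for donor_id, prefs in sorted(
        donors, key=lambda d: max(d[1].values()), reverse=True
    ):
        open_slots = [s for s in prefs if load[s] < capacity]
        if open_slots:
            best = max(open_slots, key=lambda s: prefs[s])
            assignment[donor_id] = best
            load[best] += 1
    return assignment

donors = [("d1", {"mon": 3, "tue": 1}),
          ("d2", {"mon": 2}),
          ("d3", {"mon": 1, "tue": 2})]
plan = greedy_schedule(donors, ["mon", "tue"], capacity=1)
```

In this toy run, "d2" goes unassigned because its only feasible slot fills up first; an integer program would search over such trade-offs exhaustively, which is exactly the cost the heuristic avoids.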
Robots can now generalize to unseen objects and categories for manipulation tasks with only a few training examples, thanks to a novel retrieval-augmented affordance prediction framework.
News agencies reuse content across languages far more than simple lexical overlap reveals, with over half of articles drawing on foreign sources through paraphrase and compositional techniques.
Forget prompt engineering: Nomad autonomously uncovers insights you didn't even know to ask for.
Diffusion-based denoising can significantly improve composed image retrieval by making similarity scores more robust to hard negative samples.
Accurately predict how customers will react to price changes, even without controlled experiments, using a new Monodense neural network that beats traditional methods.
Knowing the context around a claim—gleaned from Wikipedia—can boost verifiable claim detection, but the benefit depends heavily on the domain and model used.
Forget SEO: optimizing content *structure* alone boosts citation rates in generative AI search engines by 17%.
Stochastic negative sampling in Direct Preference Optimization (DPO) dramatically improves multimodal sequential recommendation, suggesting that carefully curated "wrong" answers are key to preference learning.
Forget clunky prompt engineering: distilling user history into a learned preference memory boosts LLM-based product reranking by over 10%.
A human-in-the-loop AI assistant can provide scalable, high-quality coding education support in resource-constrained African contexts, even with limited infrastructure.
LLMs still struggle to accurately infer user interests from interaction histories, especially when dealing with diverse engagement signals – a critical gap for effective personalization.
Edge cameras can achieve a 45% improvement in cross-modal retrieval accuracy by ditching redundant frames and focusing only on what's new.
Querying satellite imagery just got easier: EarthEmbeddingExplorer lets you find images using text, visuals, or location, unlocking insights previously trapped in research papers.
Generative recommendation's touted cold-start abilities often vanish under rigorous testing, revealing a sensitivity to design choices that current benchmarks fail to capture.
Generative recommendation models can adapt to evolving user behavior without catastrophic forgetting by selectively updating item tokens based on a novel drift-detection mechanism.
Single-vector embeddings' retrieval failures aren't just about dimensionality; they're fundamentally hobbled by domain shift, relevance misalignment, and a "drowning" effect that multi-vector models handle far better.
Stop assuming a single utility function: modeling preferences as a mixture of archetypes unlocks better Bayesian optimization in complex, many-objective spaces.
Escape the confines of linear literature reviews: this multi-agent system surfaces hidden connections and ruptures in research landscapes, revealing insights that traditional methods miss.
Unconstrained bandit linear optimization can be surprisingly reduced to standard online linear optimization using a perturbation approach, unlocking new regret guarantees and high-probability bounds.
Retail AI's promise of intuitive, personalized experiences crumbles when confronted with the reality of differently abled users, exposing a systemic neglect of accessibility in design and deployment.
Sentence embeddings beat prompted LLMs at extracting API semantics from documentation, achieving >82% recall and >79% precision in data-flow and alias relation inference.
Achieve state-of-the-art image similarity generalization with a surprisingly simple, efficient, and interpretable model that operates on local descriptor correspondences.
Training remote sensing image-text retrieval models on real-world noisy data can be significantly improved by a self-paced learning strategy that mimics human cognitive learning patterns.
Injecting carefully-selected, reverse-ordered behavioral curricula into generative recommendation models can significantly boost conversion rates, as demonstrated by a 2% lift in online advertising revenue.
Courtroom-style debate with progressive evidence retrieval and role-switching boosts claim verification accuracy by 10%, suggesting structured deliberation can significantly reduce LLM unreliability.
A surprisingly simple retrieve-then-re-rank pipeline, enhanced with priority infilling and neighbor-aware re-ranking, achieves state-of-the-art results on the massive WikiKG90Mv2 knowledge graph.
Forget blindly retrieving the most relevant documents: RAG systems can achieve better reasoning by strategically seeking out the evidence that most reduces uncertainty about the answer.
Even with perfect bug localization, repository-level program repair fails more than half the time, revealing that better context and interface design are the next big levers to pull.
Stop treating software requirements as independent entities: modeling their interconnectedness via user feedback boosts prioritization performance.
LLM inference bottlenecks aren't just compute-bound: heterogeneous GPU-FPGA systems can slash memory processing overheads by up to 2x while simultaneously reducing energy consumption.
You can boost ranking model performance in low-traffic recommendation systems by directly distilling knowledge from a large-scale, but different, domain like video recommendations.
Why juggle separate retrieval and generation models when a single vision-language model can do both, cutting memory footprint by 41% without sacrificing generation quality?
Stop letting mismatched score distributions sink your multi-hop QA: calibrating vector and graph retrieval scores with percentile-rank normalization yields statistically significant gains.
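Percentile-rank normalization itself is simple to sketch. This is a minimal illustration under assumed conditions (both retrievers score the same candidate list; `alpha` is a hypothetical blending weight), not the paper's exact calibration procedure.

```python
from bisect import bisect_right

def percentile_rank(scores):
    """Map raw scores to percentile ranks in (0, 1], so scores from
    differently scaled retrievers become directly comparable."""
    ordered = sorted(scores)
    n = len(ordered)
    return [bisect_right(ordered, s) / n for s in scores]

def fuse(vec_scores, graph_scores, alpha=0.5):
    """Blend calibrated vector and graph scores for the same
    candidates (index-aligned lists)."""
    v = percentile_rank(vec_scores)
    g = percentile_rank(graph_scores)
    return [alpha * a + (1 - alpha) * b for a, b in zip(v, g)]

# Raw scales differ wildly (cosine-like vs. path counts), but the
# fused ranking reflects agreement between the two retrievers.
fused = fuse([0.9, 0.2, 0.5], [12.0, 30.0, 7.0])
```

Rank-based calibration discards the raw score magnitudes, which is precisely what makes it robust when one retriever's distribution is heavy-tailed and the other's is bounded.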
A coordinated attack by just 1% of users can degrade recommendation quality by 20% in risk-controlling recommender systems, even with simple, algorithm-agnostic strategies.
Before sinking time into a new recommender model, know that this entropy-based estimator can predict its maximum achievable accuracy, revealing untapped potential or insurmountable data limitations.
Hallucinations in RAG are far more pervasive than we thought: re-annotating existing benchmarks reveals 1.68x more instances of unsupported claims, and a new framework, RT4CHART, dramatically improves detection.
Web agents can achieve 3x faster search and higher final accuracy by dynamically adapting their context management strategy based on the current state, rather than sticking to a single fixed approach.
VLMs can be backdoored to inject stealthy, context-aware advertisements triggered by natural user behaviors, and current defenses struggle to remove them without breaking the model.
Generative AI can drastically improve image retrieval accuracy for complex queries, outperforming contrastive learning methods by up to 93%.
By blending semantic similarity with graph-based traversal, GAAMA unlocks more effective long-term memory retrieval for conversational agents than standard RAG or memory compression techniques.