8 papers from Amazon Science on Recommendation & Information Retrieval
LLM-generated survey responses can be statistically accurate yet still miss the option most preferred by humans, highlighting a critical flaw in current evaluation methods.
Agentic LLMs are surprisingly vulnerable: a new framework finds successful attacks in 84% of attempts by escalating prompt injection techniques across multiple stages.
Current machine unlearning methods for recommender systems struggle with robustness and sequential deletions, especially in attention-based and recurrent models, a critical gap that ERASE exposes.
LLM-based recommender systems can trigger users' personal trauma, phobias, or self-harm history, but a new framework cuts these safety violations by 96.5% while maintaining recommendation quality.
Forget costly knowledge graphs: SAGE offers a lightweight, chunk-level graph expansion method that boosts retrieval recall by up to 8.5 points on heterogeneous QA tasks.
An end-to-end system extracts funny scenes from movies with 87% accuracy, opening new avenues for automated content repurposing.
Give new e-commerce products a warm start by borrowing behavioral signals from their substitutes, boosting search relevance and product discovery.
Stop hand-rolling your multi-task learning-to-rank models: DeepMTL2R provides a ready-to-use framework with 21 SOTA algorithms and Pareto-optimal optimization.