This paper introduces a diagnostic approach to Generative Engine Optimization (GEO) that focuses on improving citation rates in AI-generated responses rather than contribution alone. The authors develop a taxonomy of citation failure modes and an agentic system, AgentGEO, that diagnoses these failures and applies targeted repairs. AgentGEO achieves a 40% relative improvement in citation rates while modifying only 5% of content, outperforming baseline methods.
Stop blindly rewriting content: AgentGEO diagnoses *why* documents fail to be cited in AI responses, leading to a 40% boost in citations with minimal content changes.
Generative Engine Optimization (GEO) aims to improve content visibility in AI-generated responses. However, existing methods measure contribution (how much a document influences a response) rather than citation, the mechanism that actually drives traffic back to creators. Moreover, these methods apply generic rewriting rules uniformly, failing to diagnose why individual documents are not cited. This paper introduces a diagnostic approach to GEO that asks why a document fails to be cited and intervenes accordingly. We develop a unified framework comprising: (1) the first taxonomy of citation failure modes spanning different stages of a citation pipeline; (2) AgentGEO, an agentic system that diagnoses failures using this taxonomy, selects targeted repairs from a corresponding tool library, and iterates until citation is achieved; and (3) a document-centric benchmark evaluating whether optimizations generalize across held-out queries. AgentGEO achieves over 40% relative improvement in citation rates while modifying only 5% of content, compared to 25% for baselines. Our analysis reveals that generic optimization can harm long-tail content, and that some documents face challenges that optimization alone cannot fully address, findings with implications for equitable visibility in AI-mediated information access.
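The diagnose-repair-iterate loop the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the failure modes, repair tools, diagnoser, and citation check below are all hypothetical stand-ins for AgentGEO's actual taxonomy and tool library.

```python
# Hedged sketch of a diagnostic GEO loop. Every failure mode, repair,
# and check here is an illustrative stand-in, not the paper's system.

# Hypothetical failure taxonomy mapped to targeted repair tools.
REPAIRS = {
    "missing_answer_span": lambda doc: doc + " Direct answer: see summary above.",
}

def diagnose(doc: str, query: str):
    """Stand-in diagnoser: return a failure mode, or None if none detected."""
    if "Direct answer:" not in doc:
        return "missing_answer_span"
    return None

def is_cited(doc: str, query: str) -> bool:
    """Stand-in citation check; a real system would query a generative engine."""
    return "Direct answer:" in doc

def agent_geo(doc: str, query: str, max_iters: int = 5):
    """Diagnose why `doc` fails to be cited, apply a targeted repair, iterate."""
    for _ in range(max_iters):
        if is_cited(doc, query):
            return doc, True
        failure = diagnose(doc, query)
        if failure is None or failure not in REPAIRS:
            break  # a failure that optimization alone cannot address
        doc = REPAIRS[failure](doc)  # targeted, minimal edit
    return doc, is_cited(doc, query)
```

Because each repair targets a diagnosed failure rather than rewriting wholesale, the loop changes only a small fraction of the document, consistent with the paper's 5%-modification finding.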