GenericAgent (GA) is a self-evolving LLM agent designed to maximize decision-relevant information within a limited context budget. It does so through a minimal atomic toolset, hierarchical on-demand memory, self-evolution via reusable SOPs and executable code, and context truncation/compression. Experiments show GA outperforms existing agent systems in task completion, tool-use efficiency, and memory effectiveness while using fewer tokens and continuing to improve over time.
LLM agent performance hinges on maximizing decision-relevant information density within context, not just context length, and GenericAgent proves it.
Long-horizon large language model (LLM) agents are fundamentally limited by context. As interactions grow longer, tool descriptions, retrieved memories, and raw environmental feedback accumulate and push out the information needed for decision-making. At the same time, useful experience gained from past tasks is often lost across episodes. We argue that long-horizon performance is determined not by context length, but by how much decision-relevant information is maintained within a finite context budget. We present GenericAgent (GA), a general-purpose, self-evolving LLM agent system built around a single principle: context information density maximization. GA implements this through four closely connected components: a minimal atomic tool set that keeps the interface simple, a hierarchical on-demand memory that exposes only a compact high-level view by default, a self-evolution mechanism that distills verified past trajectories into reusable SOPs and executable code, and a context truncation and compression layer that maintains information density during long executions. Across task completion, tool-use efficiency, memory effectiveness, self-evolution, and web browsing, GA consistently outperforms leading agent systems while using significantly fewer tokens and interactions, and it continues to evolve over time. Project: https://github.com/lsdefine/GenericAgent
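The truncation/compression idea described above can be sketched as a packing policy: older turns are collapsed into short summaries so that recent, decision-relevant turns stay verbatim within a fixed token budget. The sketch below is illustrative only; all names (`pack_context`, `compress`, `rough_tokens`) are hypothetical and not GA's actual API, and the word-count tokenizer and prefix "summarizer" stand in for a real tokenizer and an LLM-based compressor.

```python
def rough_tokens(text: str) -> int:
    # Crude token estimate: ~1 token per whitespace-separated word.
    return len(text.split())

def compress(turn: str, max_words: int = 8) -> str:
    # Stand-in for an LLM summarizer: keep only the first few words.
    words = turn.split()
    return " ".join(words[:max_words]) + (" ..." if len(words) > max_words else "")

def pack_context(history: list[str], budget: int) -> list[str]:
    """Keep the newest turn verbatim; compress older turns from oldest
    forward until the context fits the budget; drop the oldest turns
    entirely if compression alone is not enough."""
    packed = list(history)
    used = sum(rough_tokens(t) for t in packed)
    i = 0  # compress from the oldest turn forward, sparing the last turn
    while used > budget and i < len(packed) - 1:
        before = rough_tokens(packed[i])
        packed[i] = compress(packed[i])
        used -= before - rough_tokens(packed[i])
        i += 1
    while used > budget and len(packed) > 1:
        used -= rough_tokens(packed.pop(0))
    return packed
```

The key design choice mirrored here is that compression is asymmetric: recency is treated as a proxy for decision relevance, so the newest turn is never summarized or dropped.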