The paper introduces "Markovian generation chains" to model iterative LLM inference, where the output of one generation step becomes the input for the next. Through experiments on iterative rephrasing and round-trip translation, the authors observe that outputs either converge to small recurrent sets or continue to generate novel sentences over a finite horizon. Using sentence-level Markov chain modeling, they show that sentence diversity in iterative LLM processes depends on factors such as the sampling temperature and the initial input.
Iteratively prompting LLMs can either collapse diversity or maintain novelty, revealing a sensitivity to temperature and initial conditions that has implications for multi-agent systems.
The widespread use of large language models (LLMs) raises an important question: how do texts evolve when they are repeatedly processed by LLMs? In this paper, we define this iterative inference process as a Markovian generation chain, where each step takes a fixed prompt template and the previous output as input, with no memory of earlier steps. In iterative rephrasing and round-trip translation experiments, the output either converges to a small recurrent set or continues to produce novel sentences over a finite horizon. Through sentence-level Markov chain modeling and analysis of simulated data, we show that the iterative process can either increase or reduce sentence diversity depending on factors such as the temperature parameter and the initial input sentence. These results offer valuable insights into the dynamics of iterative LLM inference and their implications for multi-agent LLM systems.
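The two regimes described in the abstract can be illustrated with a toy simulation. The sketch below is a simplified stand-in, not the paper's actual model: states represent sentences, fixed random scores stand in for an LLM's next-output preferences, and a temperature-scaled softmax governs transitions. Low temperature makes transitions near-greedy, so the chain tends to fall into a small recurrent set; high temperature flattens the transition distribution, so the chain keeps visiting novel states. All names and parameters here are illustrative assumptions.

```python
import math
import random

def softmax(scores, temperature):
    """Temperature-scaled softmax over transition scores."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def count_distinct_states(n_states, temperature, steps, seed=0):
    """Run a toy sentence-level Markov chain and count distinct states visited.

    Hypothetical stand-in for iterative LLM inference: each state is a
    'sentence', and fixed Gaussian scores play the role of the model's
    preferences for the next output given the current one.
    """
    rng = random.Random(seed)
    scores = [[rng.gauss(0, 1) for _ in range(n_states)]
              for _ in range(n_states)]
    state = 0
    visited = {state}
    for _ in range(steps):
        probs = softmax(scores[state], temperature)
        state = rng.choices(range(n_states), weights=probs)[0]
        visited.add(state)
    return len(visited)

# Near-greedy sampling: the chain collapses into a small recurrent set.
low_temp_diversity = count_distinct_states(n_states=50, temperature=0.1, steps=200)
# Near-uniform sampling: the chain keeps producing novel states.
high_temp_diversity = count_distinct_states(n_states=50, temperature=10.0, steps=200)
```

In this toy setting, `low_temp_diversity` comes out far smaller than `high_temp_diversity`, mirroring the abstract's observation that temperature is a key driver of whether iterative generation collapses or sustains diversity.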