The paper introduces UniCreative, a reinforcement learning framework for creative writing that balances long-form coherence and short-form expressiveness. It uses an adaptive constraint-aware reward model (AC-GenRM) to provide dynamic, query-specific feedback and a policy optimization algorithm (ACPO) to align models with human preferences without relying on supervised data. Experiments show that ACPO improves performance across writing tasks and that the trained model exhibits an emergent ability to distinguish tasks requiring planning from those suited to direct generation.
Models can learn to distinguish, on their own, between creative-writing tasks that require rigorous planning and those that favor direct generation, an emergent meta-cognitive ability.
A fundamental challenge in creative writing lies in reconciling the inherent tension between maintaining global coherence in long-form narratives and preserving local expressiveness in short-form texts. While long-context generation necessitates explicit macroscopic planning, short-form creativity often demands spontaneous, constraint-free expression. Existing alignment paradigms, however, typically employ static reward signals and rely heavily on high-quality supervised data, which is costly and difficult to scale. To address this, we propose \textbf{UniCreative}, a unified reference-free reinforcement learning framework. We first introduce \textbf{AC-GenRM}, an adaptive constraint-aware reward model that dynamically synthesizes query-specific criteria to provide fine-grained preference judgments. Leveraging these signals, we develop \textbf{ACPO}, a policy optimization algorithm that aligns models with human preferences across both content quality and structural paradigms, without supervised fine-tuning or ground-truth references. Empirical results demonstrate that AC-GenRM aligns closely with expert evaluations, while ACPO significantly enhances performance across diverse writing tasks. Crucially, our analysis reveals an emergent meta-cognitive ability: the model learns to autonomously differentiate between tasks requiring rigorous planning and those favoring direct generation, validating the effectiveness of our direct alignment approach.
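To make the described pipeline concrete, here is a minimal, self-contained Python sketch of a training loop consistent with the abstract. Everything in it is a hypothetical stand-in, not the paper's code: `synthesize_criteria` and `grade` stub the two stages of AC-GenRM, and `acpo_step` assumes a group-sampling, baseline-subtracted policy-gradient update, a detail the abstract does not specify.

```python
import random

def synthesize_criteria(query: str) -> list[str]:
    """Stub for AC-GenRM's first stage: derive query-specific criteria.
    A long-form prompt yields planning/coherence constraints; a short-form
    prompt yields expressiveness constraints. (Illustrative heuristic only.)"""
    if any(k in query.lower() for k in ("novel", "story", "chapter")):
        return ["global coherence", "plot planning", "consistent characters"]
    return ["expressiveness", "imagery", "distinct voice"]

def grade(response: str, criteria: list[str]) -> float:
    """Stub for AC-GenRM's second stage: score a response against the
    synthesized criteria. A real generative reward model would judge each
    criterion with an LLM; random scores keep the sketch self-contained."""
    return sum(random.random() for _ in criteria) / len(criteria)

def acpo_step(sample, update, query: str, group_size: int = 4) -> None:
    """One assumed ACPO update: sample a group of responses, score them
    against the query-specific criteria (no ground-truth reference needed),
    and reinforce responses that beat the group-mean baseline."""
    criteria = synthesize_criteria(query)
    responses = [sample(query) for _ in range(group_size)]
    rewards = [grade(r, criteria) for r in responses]
    baseline = sum(rewards) / len(rewards)  # reference-free baseline
    for resp, rew in zip(responses, rewards):
        update(query, resp, advantage=rew - baseline)

# Toy usage: plug in a real sampler and gradient step in practice.
acpo_step(
    sample=lambda q: f"draft for: {q}",
    update=lambda q, r, advantage: print(f"advantage={advantage:+.3f}"),
    query="write a short story about a lighthouse keeper",
)
```

The group-mean baseline is one common way to obtain advantages without reference texts; the paper's actual objective and criteria-synthesis procedure may differ.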