The paper introduces NEXTPP, a dual-channel framework for marked temporal point processes that integrates discrete event mark encoding with continuous-time dynamics modeling using Neural ODEs. NEXTPP employs self-attention to encode discrete event marks and a Neural ODE to evolve a continuous-time latent state, fusing these representations via cross-attention for bidirectional interaction. Experiments on five real-world datasets demonstrate that NEXTPP outperforms existing state-of-the-art models in predicting irregularly spaced event sequences with discrete marks.
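The cross-attention fusion step can be illustrated with a minimal sketch. This is not the paper's implementation (which would use learned projection matrices and multi-head attention in a deep-learning framework); it is a toy scaled dot-product cross-attention in plain Python, where queries from one channel (e.g. the continuous-time latent states) attend over keys/values from the other (e.g. the discrete mark embeddings). All dimensions and vectors here are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query vector (one channel)
    attends over key/value vectors (the other channel) and returns the
    attention-weighted combination of the values."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values)) for j in range(d)])
    return out
```

With identical keys the attention weights are uniform, so the output is simply the mean of the value vectors, which is a quick sanity check for the mechanics.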
By explicitly modeling bidirectional interactions between discrete event types and continuous-time dynamics, NEXTPP achieves state-of-the-art performance in predicting marked temporal point processes.
Predicting irregularly spaced event sequences with discrete marks poses significant challenges due to the complex, asynchronous dependencies embedded within continuous-time data streams. Existing sequential approaches capture dependencies among event tokens but ignore the continuous evolution between events, while Neural Ordinary Differential Equation (Neural ODE) methods model smooth dynamics yet fail to account for how event types influence future timing. To overcome these limitations, we propose NEXTPP, a dual-channel framework that unifies discrete and continuous representations via Event-granular Neural Evolution with Cross-Interaction for Marked Temporal Point Processes. Specifically, NEXTPP encodes discrete event marks via a self-attention mechanism while simultaneously evolving a latent continuous-time state using a Neural ODE. These parallel streams are then fused through a cross-attention module to enable explicit bidirectional interaction between continuous and discrete representations. The fused representations drive the conditional intensity function of a neural Hawkes process, and an iterative thinning sampler is employed to generate future events. Extensive evaluations on five real-world datasets demonstrate that NEXTPP consistently outperforms state-of-the-art models. The source code can be found at https://github.com/AONE-NLP/NEXTPP.
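The thinning sampler mentioned in the abstract follows the general pattern of Ogata's thinning algorithm for sampling from a point process given its conditional intensity. The sketch below is a minimal, hedged illustration using a classical univariate exponential Hawkes intensity in place of NEXTPP's neural intensity; the parameters `mu`, `alpha`, and `beta` and the upper bound `lam_bar` are illustrative assumptions, not values from the paper.

```python
import math
import random

def exp_hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """Classical exponential Hawkes conditional intensity (stand-in for a
    neural intensity): lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta*(t - t_i))."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in history if ti < t)

def ogata_thinning(intensity, history, t_start, t_end, lam_bar, rng):
    """Sample the next event time in (t_start, t_end] by thinning.
    lam_bar must upper-bound the intensity on the interval.
    Returns None if no event occurs before t_end."""
    t = t_start
    while t < t_end:
        # Propose a candidate from a homogeneous Poisson process with rate lam_bar.
        t += rng.expovariate(lam_bar)
        if t >= t_end:
            return None
        # Accept the candidate with probability lambda(t) / lam_bar.
        if rng.random() <= intensity(t, history) / lam_bar:
            return t
    return None
```

In an iterative sampler, each accepted event time is appended to `history` before drawing the next one, so the intensity reflects the growing sequence.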