This paper introduces PRISM, a corpus-intrinsic method for initializing LDA topic models by deriving a Dirichlet prior from word co-occurrence statistics. Unlike methods relying on external knowledge, PRISM operates solely on the corpus itself, making it applicable in resource-constrained domains. Experiments on text and single-cell RNA-seq data demonstrate that PRISM improves topic coherence and interpretability compared to standard LDA and even rivals models using external knowledge.
Forget finetuning or embeddings: better topic models are lurking in your corpus's own co-occurrence stats.
Topic modeling seeks to uncover latent semantic structure in text, with LDA providing a foundational probabilistic framework. While recent methods often incorporate external knowledge (e.g., pre-trained embeddings), such reliance limits applicability in emerging or underexplored domains. We introduce PRISM, a corpus-intrinsic method that derives a Dirichlet parameter from word co-occurrence statistics to initialize LDA without altering its generative process. Experiments on text and single-cell RNA-seq data show that PRISM improves topic coherence and interpretability, rivaling models that rely on external knowledge. These results underscore the value of corpus-driven initialization for topic modeling in resource-constrained settings. Code is available at: https://github.com/shaham-lab/PRISM.
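The abstract only names the idea of deriving a Dirichlet parameter from co-occurrence statistics; as a rough illustration of what "corpus-intrinsic initialization" could look like (not the paper's actual algorithm — the function names and the scaling rule below are hypothetical), one can count document-level word co-occurrences and scale a base prior by each word's co-occurrence mass:

```python
import numpy as np

def cooccurrence_matrix(docs, vocab):
    """Count how often each pair of vocabulary words appears in the same document."""
    index = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for doc in docs:
        ids = sorted({index[w] for w in doc if w in index})
        for i in ids:
            for j in ids:
                if i != j:
                    C[i, j] += 1
    return C

def dirichlet_prior_from_cooccurrence(C, base=0.1):
    """Hypothetical rule: inflate a symmetric base prior by each word's
    share of total co-occurrence mass, so well-connected words get a
    slightly larger prior weight."""
    mass = C.sum(axis=1)
    return base * (1.0 + mass / (mass.sum() + 1e-12))

# Toy corpus: three tiny "documents" over a five-word vocabulary.
docs = [["cell", "gene", "rna"], ["gene", "rna"], ["topic", "model"]]
vocab = ["cell", "gene", "rna", "topic", "model"]
eta = dirichlet_prior_from_cooccurrence(cooccurrence_matrix(docs, vocab))
```

The resulting vector `eta` could then be passed as an asymmetric topic-word prior to an LDA implementation that accepts one (e.g., `eta` in gensim's `LdaModel`), leaving the generative process itself unchanged, as the abstract emphasizes.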