This paper introduces a method for unsupervised discovery of recurrent transition-structure concepts in text corpora by training a contrastive model to predict passage co-occurrences within documents. The model maps pre-trained embeddings into an association space, clustering passages based on their functional similarity rather than topical similarity. Results on a large corpus of Project Gutenberg texts demonstrate the discovery of interpretable clusters representing literary traditions, registers, and scene templates, and show that the learned structure generalizes to unseen novels.
Move over, topic models: this method discovers functional text categories like "courtroom cross-examination" and "lyrical meditation" by learning what text *does*, not just what it's *about*.
Embedding models group text by semantic content: what text is about. We show that temporal co-occurrence within texts reveals a different kind of structure: recurrent transition-structure concepts, or what text does. We train a 29.4M-parameter contrastive model on 373 million co-occurrence pairs from 9,766 Project Gutenberg texts (24.96 million passages), mapping pre-trained embeddings into an association space where passages with similar transition structure cluster together. Under a capacity constraint (42.75% accuracy), the model must compress across recurring patterns rather than memorise individual co-occurrences. Clustering at six granularities (k=50 to k=2,000) produces a multi-resolution concept map, from broad modes like "direct confrontation" and "lyrical meditation" to precise registers and scene templates like "sailor dialect" and "courtroom cross-examination." At k=100, clusters average 4,508 books each (of 9,766), confirming corpus-wide patterns. Direct comparison with embedding-similarity clustering shows that raw embeddings group by topic while association-space clusters group by function, register, and literary tradition. Unseen novels are assigned to existing clusters without retraining; the association model concentrates each novel into a selective subset of coherent clusters, while raw embedding assignment saturates nearly all clusters. Validation controls address positional, length, and book-concentration confounds. The method extends Predictive Associative Memory (PAM, arXiv:2602.11322) from episodic recall to concept formation: where PAM recalls specific associations, multi-epoch contrastive training under compression extracts structural patterns that transfer to unseen texts, the same framework producing qualitatively different behaviour in a different regime.
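The contrastive setup described above can be sketched in miniature: project pre-trained passage embeddings into an association space and score each passage against the passage it co-occurred with, treating the other pairs in the batch as negatives (an in-batch InfoNCE-style objective). The projection, dimensions, temperature, and data below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Linear map into the association space, with L2-normalised rows
    so dot products are cosine similarities."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(anchors: np.ndarray, positives: np.ndarray,
             temperature: float = 0.07) -> float:
    """Mean cross-entropy of matching each anchor passage to its
    co-occurring positive; other positives in the batch act as negatives."""
    logits = anchors @ positives.T / temperature        # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))          # true pairs on diagonal

# Toy batch: 8 co-occurring passage pairs with 32-dim "pre-trained" embeddings.
# Positives are noisy copies of their anchors, standing in for passages that
# share transition structure.
w = rng.standard_normal((32, 16))
emb_a = rng.standard_normal((8, 32))
emb_b = emb_a + 0.1 * rng.standard_normal((8, 32))
loss = info_nce(project(emb_a, w), project(emb_b, w))
```

After training, a standard clustering algorithm (e.g. k-means at several values of k) over the projected passages would yield the kind of multi-resolution concept map the abstract describes, and unseen passages can be assigned by nearest centroid without retraining.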