This paper investigates whether Sparse Autoencoder (SAE) features in LLMs represent abstract meaning or are tied to a specific script. It uses Serbian digraphia as a testbed: the same text can be written in the Latin or Cyrillic script with a near-perfect character mapping but entirely different tokenization. Comparing SAE feature activations for identical sentences in both scripts across the Gemma model family (270M to 27B parameters), the study finds substantial overlap in activated features, indicating script invariance. Moreover, changing script causes less representational divergence than paraphrasing within the same script, suggesting that SAE features prioritize meaning over orthography, and this invariance strengthens with model scale.
SAE features in LLMs capture meaning at a level of abstraction above the surface form: changing the script of a sentence (Latin vs. Cyrillic) causes less representational divergence than paraphrasing it within the same script.
Do the features learned by Sparse Autoencoders (SAEs) represent abstract meaning, or are they tied to how text is written? We investigate this question using Serbian digraphia as a controlled testbed: Serbian is written interchangeably in Latin and Cyrillic scripts with a near-perfect character mapping between them, enabling us to vary orthography while holding meaning exactly constant. Crucially, these scripts are tokenized completely differently, sharing no tokens whatsoever. Analyzing SAE feature activations across the Gemma model family (270M-27B parameters), we find that identical sentences in different Serbian scripts activate highly overlapping features, far exceeding random baselines. Strikingly, changing script causes less representational divergence than paraphrasing within the same script, suggesting SAE features prioritize meaning over orthographic form. Cross-script cross-paraphrase comparisons provide evidence against memorization, as these combinations rarely co-occur in training data yet still exhibit substantial feature overlap. This script invariance strengthens with model scale. Taken together, our findings suggest that SAE features can capture semantics at a level of abstraction above surface tokenization, and we propose Serbian digraphia as a general evaluation paradigm for probing the abstractness of learned representations.
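To make the comparison in the abstract concrete, here is a minimal sketch, not the authors' code: it transliterates a Serbian sentence between scripts using the standard Cyrillic-to-Latin mapping, then measures overlap between the sets of SAE features active on each version. The Jaccard overlap and the `active_feature_set` helper are illustrative choices (the abstract does not specify the exact metric), and the SAE activations below are random placeholders standing in for a real model + SAE forward pass (e.g. with Gemma Scope SAEs).

```python
import numpy as np

# Serbian Cyrillic -> Latin transliteration table
# (digraphs lj, nj, dž map from single Cyrillic letters љ, њ, џ).
CYR_TO_LAT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "ђ": "đ", "е": "e",
    "ж": "ž", "з": "z", "и": "i", "ј": "j", "к": "k", "л": "l", "љ": "lj",
    "м": "m", "н": "n", "њ": "nj", "о": "o", "п": "p", "р": "r", "с": "s",
    "т": "t", "ћ": "ć", "у": "u", "ф": "f", "х": "h", "ц": "c", "ч": "č",
    "џ": "dž", "ш": "š",
}

def cyrillic_to_latin(text: str) -> str:
    """Transliterate Serbian Cyrillic to Latin, character by character."""
    out = []
    for ch in text:
        repl = CYR_TO_LAT.get(ch.lower(), ch)
        out.append(repl.capitalize() if ch.isupper() else repl)
    return "".join(out)

def active_feature_set(sae_activations: np.ndarray, threshold: float = 0.0) -> set[int]:
    """Indices of SAE features whose (e.g. mean-pooled) activation exceeds a threshold."""
    return set(np.flatnonzero(sae_activations > threshold).tolist())

def jaccard_overlap(features_a: set[int], features_b: set[int]) -> float:
    """Overlap of two active-feature sets; 1.0 means the same features fire."""
    if not features_a and not features_b:
        return 1.0
    return len(features_a & features_b) / len(features_a | features_b)

if __name__ == "__main__":
    cyrillic = "Данас је леп дан."
    latin = cyrillic_to_latin(cyrillic)  # -> "Danas je lep dan."

    # Placeholder SAE activations; in practice these would come from running
    # the model and SAE on each tokenization of the sentence.
    rng = np.random.default_rng(0)
    acts_cyr = rng.random(16_384) * (rng.random(16_384) < 0.01)
    acts_lat = rng.random(16_384) * (rng.random(16_384) < 0.01)

    overlap = jaccard_overlap(active_feature_set(acts_cyr), active_feature_set(acts_lat))
    print(f"{latin!r} vs. Cyrillic original: feature overlap = {overlap:.3f}")
```

The same overlap measure can be applied to a sentence and its same-script paraphrase, which is the contrast the paper uses to argue that script changes perturb SAE features less than meaning-preserving rewording does.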