The paper introduces the Geologically-Informed Attention Transformer (GIAT) to improve lithology identification from well logs by incorporating geological priors into the Transformer's attention mechanism. GIAT uses Category-Wise Sequence Correlation (CSC) filters to generate a geologically-informed relational matrix, which is then injected into the self-attention calculation. Experiments on two datasets show that GIAT achieves state-of-the-art performance (up to 95.4% accuracy) and improved interpretability compared to existing models.
By injecting geological priors into the attention mechanism, GIAT achieves state-of-the-art lithology identification while also improving the interpretability of the model's predictions.
Accurate lithology identification from well logs is crucial for subsurface resource evaluation. Although Transformer-based models excel at sequence modeling, their "black-box" nature and lack of geological guidance limit their performance and trustworthiness. To overcome these limitations, this letter proposes the Geologically-Informed Attention Transformer (GIAT), a novel framework that deeply fuses data-driven geological priors with the Transformer's attention mechanism. The core of GIAT is a new attention-biasing mechanism. We repurpose Category-Wise Sequence Correlation (CSC) filters to generate a geologically-informed relational matrix, which is injected into the self-attention calculation to explicitly guide the model toward geologically coherent patterns. On two challenging datasets, GIAT achieves state-of-the-art performance with an accuracy of up to 95.4%, significantly outperforming existing models. More importantly, GIAT demonstrates exceptional interpretation faithfulness under input perturbations and generates geologically coherent predictions. Our work presents a new paradigm for building more accurate, reliable, and interpretable deep learning models for geoscience applications.
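The abstract describes the core mechanism as injecting a CSC-derived relational matrix into the self-attention computation. A common way to realize this kind of injection is an additive bias on the attention scores before the softmax, as in relative-position schemes. The PyTorch sketch below illustrates that pattern under stated assumptions: the class name `BiasedSelfAttention`, the single-head layout, and the additive way the bias enters the score matrix are illustrative choices, not the authors' implementation, and the random `bias` tensor merely stands in for a matrix produced by the CSC filters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasedSelfAttention(nn.Module):
    """Single-head self-attention with an additive relational bias.

    `bias` is a (seq_len, seq_len) matrix standing in for the
    CSC-derived geological relational matrix; it is added to the
    attention scores before the softmax. All names here are
    illustrative assumptions, not the paper's code.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.d_model = d_model

    def forward(self, x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); bias: (seq_len, seq_len)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / self.d_model ** 0.5
        scores = scores + bias  # inject the geological prior additively
        attn = F.softmax(scores, dim=-1)
        return self.out(attn @ v)

# Toy usage: a depth sequence of 8 samples with 16 log-derived features.
x = torch.randn(2, 8, 16)
bias = torch.randn(8, 8)  # placeholder for the CSC relational matrix
y = BiasedSelfAttention(16)(x, bias)
print(y.shape)  # torch.Size([2, 8, 16])
```

Pre-softmax additive biasing of this form is well established for encoding pairwise structure into attention; if the paper's mechanism matches this sketch, GIAT's contribution lies in sourcing the bias from geological priors rather than, say, token positions.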