EXAONE 4.5, LG AI Research's first open-weight vision-language model, integrates a dedicated visual encoder into the EXAONE 4.0 framework for native multimodal pretraining. Training emphasized document-centric corpora, yielding significant gains in document understanding and Korean contextual reasoning while also improving general language capabilities. The model supports a 256K-token context length, enabling long-context reasoning for enterprise applications.
LG's EXAONE 4.5 shows that strategic curation of training data, particularly document-centric corpora, unlocks substantial gains on specialized tasks such as document understanding and Korean contextual reasoning, while maintaining competitive general performance.
This technical report introduces EXAONE 4.5, the first open-weight vision-language model released by LG AI Research. EXAONE 4.5 is architected by integrating a dedicated visual encoder into the existing EXAONE 4.0 framework, enabling native multimodal pretraining over both visual and textual modalities. The model is trained on carefully curated large-scale data, with particular emphasis on document-centric corpora aligned with LG's strategic application domains. This targeted data design yields substantial performance gains in document understanding and related tasks, while also delivering broad improvements across general language capabilities. EXAONE 4.5 extends the context length up to 256K tokens, facilitating long-context reasoning and enterprise-scale use cases. Comparative evaluations show that EXAONE 4.5 achieves competitive performance on general benchmarks while outperforming state-of-the-art models of similar scale in document understanding and Korean contextual reasoning. As part of LG's ongoing effort toward practical industrial deployment, EXAONE 4.5 is designed to be continuously extended to additional domains and application scenarios to advance AI for a better life.