The paper introduces K2-V2, a fully open large language model (LLM) designed with a focus on reasoning adaptation, conversation, and knowledge retrieval. K2-V2 is claimed to outperform Qwen2.5-72B and approach the performance of Qwen3-235B, positioning it as a leading open-weight model in its size class. The model is trained with explicit infusion of domain knowledge, reasoning skills, long-context understanding, and tool use, and the authors release the full training history and data composition to facilitate continuous training.
K2-V2 advances the open-source LLM landscape, rivaling open-weight leaders of its size class in reasoning capabilities while providing full transparency into its training data and process.
We introduce K2-V2, a 360-open LLM built from scratch as a superior base for reasoning adaptation, in addition to the conversation and knowledge-retrieval functions of general LLMs. It stands as the strongest fully open model and rivals open-weight leaders in its size class: it outperforms Qwen2.5-72B and approaches the performance of Qwen3-235B. We actively infuse domain knowledge, reasoning, long-context understanding, and tool use throughout the training process, explicitly preparing the model for complex reasoning tasks. We demonstrate this potential using simple supervised fine-tuning, establishing a strong baseline that indicates significant headroom for advanced alignment. By releasing the full training history and data composition, we maximize the effectiveness of continuous training, a key open-source production scenario. We release the model weights and signature LLM360 artifacts, such as the complete training data, to empower the community with a capable, reasoning-centric foundation.
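The "simple supervised fine-tuning" baseline mentioned above can be sketched as a standard next-token cross-entropy loop. The snippet below is not the paper's code: it illustrates the generic SFT objective on a tiny stand-in transformer so it runs without the real K2-V2 weights (in practice the released checkpoint and tokenized instruction data would take their place).

```python
# Minimal SFT sketch (illustrative only): next-token prediction with a
# causal mask, the same objective a real fine-tune of K2-V2 would optimize.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM, SEQ = 128, 32, 16

class TinyLM(nn.Module):
    """Tiny causal transformer standing in for the real base model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        # Causal mask: each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        h = self.encoder(self.embed(ids), mask=mask)
        return self.head(h)

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: random token ids stand in for tokenized instruction-response pairs.
batch = torch.randint(0, VOCAB, (4, SEQ))
inputs, targets = batch[:, :-1], batch[:, 1:]  # shift for next-token loss

losses = []
for _ in range(20):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Repeatedly fitting the fixed toy batch drives the loss down, which is all this sketch is meant to show; a real SFT run would stream the released instruction data and checkpoint instead.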