The paper introduces Command A, a large language model designed for enterprise applications, featuring agent optimization, multilingual support (23 languages), and a hybrid architecture. The model leverages a decentralized training approach with self-refinement and model merging to achieve strong RAG capabilities, grounding, and tool use for automating business processes. Evaluations across enterprise tasks and public benchmarks demonstrate excellent performance and efficiency, with weights released for research.
Command A shows how to build an enterprise-grade LLM that balances performance, efficiency, and multilingual capabilities using decentralized training and model merging.
In this report we describe the development of Command A, a powerful large language model purpose-built to excel at real-world enterprise use cases. Command A is an agent-optimised and multilingual-capable model, with support for 23 languages of global business, and a novel hybrid architecture balancing efficiency with top-of-the-range performance. It offers best-in-class Retrieval Augmented Generation (RAG) capabilities with grounding and tool use to automate sophisticated business processes. These abilities are achieved through a decentralised training approach, including self-refinement algorithms and model merging techniques. We also include results for Command R7B, which shares capability and architectural similarities with Command A. Weights for both models have been released for research purposes. This technical report details our original training pipeline and presents an extensive evaluation of our models across a suite of enterprise-relevant tasks and public benchmarks, demonstrating excellent performance and efficiency.
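To make the "model merging" ingredient concrete, the sketch below shows the simplest form of the idea: linearly averaging the parameters of several expert checkpoints into one model. This is a minimal illustration of weight averaging in general, not the specific merging procedure used for Command A; the function and variable names are hypothetical.

```python
# Minimal sketch of linear model merging (uniform weight averaging).
# Real merging pipelines operate on tensors and may use non-uniform or
# per-layer weights; this only demonstrates the core combination step.

def merge_checkpoints(checkpoints, weights=None):
    """Average a list of parameter dicts ({name: value}) into one dict."""
    if weights is None:
        # Default to a uniform average over all checkpoints.
        weights = [1.0 / len(checkpoints)] * len(checkpoints)
    merged = {}
    for name in checkpoints[0]:
        merged[name] = sum(w * ckpt[name] for w, ckpt in zip(weights, checkpoints))
    return merged

# Example with scalar "parameters" standing in for weight tensors:
expert_a = {"layer.w": 1.0, "layer.b": 0.0}
expert_b = {"layer.w": 3.0, "layer.b": 2.0}
merged = merge_checkpoints([expert_a, expert_b])
# merged["layer.w"] == 2.0 and merged["layer.b"] == 1.0
```

In practice, merging like this lets separately trained expert models (e.g. per-capability or per-language) be combined without a joint training run, which is what makes a decentralised training approach tractable.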