We present Qwen3-Coder-Next, an open-weight language model specialized for coding agents. Qwen3-Coder-Next is an 80-billion-parameter model that activates only 3 billion parameters during inference, enabling strong coding capability with efficient inference. In this work, we explore how far strong training recipes can push the capability limits of models with small parameter footprints. To achieve this, we perform agentic training through large-scale synthesis of verifiable coding tasks paired with executable environments, allowing learning directly from environment feedback via mid-training and reinforcement learning. Across agent-centric benchmarks including SWE-Bench and Terminal-Bench, Qwen3-Coder-Next achieves competitive performance relative to its active parameter count. We release both base and instruction-tuned open-weight versions to support research and real-world coding agent development.
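The abstract's core training signal is environment feedback on verifiable coding tasks: a candidate solution is executed against tests, and pass/fail becomes the reward. The paper does not publish its harness, so the following is only a minimal sketch of that idea, assuming a simplified setting where a task is a Python snippet plus assertion-style tests; the function name `verify_solution` and the binary reward scheme are illustrative, not the authors' implementation.

```python
import os
import subprocess
import sys
import tempfile


def verify_solution(solution_code: str, test_code: str, timeout: float = 10.0) -> float:
    """Run a model-generated solution against unit tests in a fresh subprocess.

    Returns 1.0 if the tests pass (exit code 0), else 0.0 -- a binary
    environment-feedback signal of the kind usable as an RL reward.
    (Illustrative sketch; not the paper's actual harness.)
    """
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "candidate.py")
        with open(path, "w") as f:
            f.write(solution_code + "\n\n" + test_code)
        try:
            result = subprocess.run(
                [sys.executable, path],
                capture_output=True,
                timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return 0.0  # non-terminating candidates earn zero reward
        return 1.0 if result.returncode == 0 else 0.0


# A toy synthesized task: the "executable environment" is just the test file.
passing = "def add(a, b):\n    return a + b"
failing = "def add(a, b):\n    return a - b"
tests = "assert add(2, 3) == 5"

print(verify_solution(passing, tests))  # 1.0
print(verify_solution(failing, tests))  # 0.0
```

In a real pipeline this verifier would run inside a sandboxed container per task, and the reward would feed a policy-gradient update; the sketch only shows the verifiable-feedback contract itself.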