BiCICLe is introduced as a novel framework that leverages In-Context Learning (ICL) to enable Large Language Models (LLMs) to perform few-shot bimanual robot manipulation without task-specific training. It addresses the challenges of high-dimensional action spaces and inter-arm coordination by framing the problem as a multi-agent leader-follower system with sequential, conditioned single-arm predictions. Experiments on the TWIN benchmark demonstrate that BiCICLe achieves up to a 71.1% average success rate, outperforming existing training-free baselines and surpassing most supervised methods, while also exhibiting strong few-shot generalization.
With the right in-context prompting, standard text-only LLMs can perform complex bimanual robot manipulation at competitive success rates, without any task-specific training.
Large Language Models (LLMs) have emerged as powerful reasoning engines for embodied control. In particular, In-Context Learning (ICL) enables off-the-shelf, text-only LLMs to predict robot actions without any task-specific training while preserving their generalization capabilities. However, applying ICL to bimanual manipulation remains challenging, as the high-dimensional joint action space and tight inter-arm coordination constraints rapidly overwhelm standard context windows. To address this, we introduce BiCICLe (Bimanual Coordinated In-Context Learning), the first framework that enables standard LLMs to perform few-shot bimanual manipulation without fine-tuning. BiCICLe frames bimanual control as a multi-agent leader-follower problem, decoupling the action space into sequential, conditioned single-arm predictions. This naturally extends to Arms'Debate, an iterative refinement process, and to the introduction of a third LLM-as-Judge to evaluate and select the most plausible coordinated trajectories. Evaluated on 13 tasks from the TWIN benchmark, BiCICLe achieves up to a 71.1% average success rate, outperforming the best training-free baseline by 6.7 percentage points and surpassing most supervised methods. We further demonstrate strong few-shot generalization on novel tasks.
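For concreteness, the sketch below shows how a leader-follower ICL control step of this kind could be wired up in Python. The prompt format, the `LLMFn` call interface, the debate schedule, and all function and field names are illustrative assumptions for exposition, not the paper's actual prompts or implementation.

```python
# Minimal sketch of a leader-follower in-context bimanual control step.
# All names and prompt formats here are hypothetical placeholders.
from typing import Callable, Dict, List, Optional

LLMFn = Callable[[str], str]  # assumed text-in, text-out model interface


def build_prompt(role: str, examples: List[Dict[str, str]], observation: str,
                 partner_action: Optional[str] = None) -> str:
    """Assemble an ICL prompt from few-shot demonstrations and the current state."""
    shots = "\n".join(
        f"Observation: {e['obs']}\n{role} action: {e[role]}" for e in examples
    )
    prompt = f"{shots}\nObservation: {observation}\n"
    if partner_action is not None:
        # Conditioning one arm on the other's prediction decouples the joint
        # action space into two sequential single-arm predictions.
        prompt += f"Partner arm action: {partner_action}\n"
    return prompt + f"{role} action:"


def bicicle_step(llm: LLMFn, examples: List[Dict[str, str]], observation: str,
                 n_debate_rounds: int = 2, n_candidates: int = 3) -> str:
    """Predict one coordinated bimanual action via leader-follower ICL,
    iterative refinement, and LLM-as-Judge selection (all hypothetical)."""
    candidates = []
    for _ in range(n_candidates):
        leader = llm(build_prompt("leader", examples, observation))
        follower = llm(build_prompt("follower", examples, observation, leader))
        # "Arms' Debate": each arm revises its action given the other's,
        # for a fixed number of refinement rounds.
        for _ in range(n_debate_rounds):
            leader = llm(build_prompt("leader", examples, observation, follower))
            follower = llm(build_prompt("follower", examples, observation, leader))
        candidates.append(f"leader: {leader} | follower: {follower}")

    # A third LLM-as-Judge scores the coordinated candidates and selects one.
    judge_prompt = (
        "Select the most plausible coordinated bimanual action.\n"
        + "\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
        + "\nAnswer with the index only:"
    )
    try:
        choice = int(llm(judge_prompt).strip())
    except ValueError:
        choice = 0  # fall back to the first candidate if the judge output is malformed
    return candidates[choice]
```

Under these assumptions, the follower never predicts in isolation: it always sees the leader's proposal, which is what keeps the two per-arm predictions coordinated despite being generated sequentially.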