Sapiens2 is a new family of high-resolution transformer models for human-centric vision, ranging from 0.4B to 5B parameters and supporting up to 4K resolution. The models are pretrained with a combination of masked image reconstruction and self-distilled contrastive objectives on a curated dataset of 1B high-quality human images. Sapiens2 achieves state-of-the-art results on a range of tasks including pose estimation, body-part segmentation, and normal estimation, while also extending to new tasks such as pointmap and albedo estimation.
Sapiens2 shows that scaling high-resolution transformers with a unified pretraining objective yields substantial gains in fidelity and generalization for human-centric vision.
We present Sapiens2, a model family of high-resolution transformers for human-centric vision focused on generalization, versatility, and high-fidelity outputs. Our model sizes range from 0.4 to 5 billion parameters, with native 1K resolution and hierarchical variants that support 4K. Sapiens2 substantially improves over its predecessor in both pretraining and post-training. First, to learn features that capture low-level details (for dense prediction) and high-level semantics (for zero-shot or few-label settings), we combine masked image reconstruction with self-distilled contrastive objectives. Our evaluations show that this unified pretraining objective is better suited for a wider range of downstream tasks. Second, along the data axis, we pretrain on a curated dataset of 1 billion high-quality human images and improve the quality and quantity of task annotations. Third, architecturally, we incorporate advances from frontier models that enable longer training schedules with improved stability. Our 4K models adopt windowed attention to reason over longer spatial context and are pretrained with 2K output resolution. Sapiens2 sets a new state-of-the-art and improves over the first generation on pose (+4 mAP), body-part segmentation (+24.3 mIoU), normal estimation (45.6% lower angular error) and extends to new tasks such as pointmap and albedo estimation. Code: https://github.com/facebookresearch/sapiens2
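The unified pretraining objective described above combines a masked-reconstruction term with a self-distilled contrastive term. A minimal sketch of such a combined loss is shown below; the function name, the exact loss forms (MSE over masked patches, cross-entropy against a low-temperature EMA teacher in the DINO style), and the weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def unified_pretraining_loss(pred_pixels, target_pixels, mask,
                             student_logits, teacher_logits,
                             temp_student=0.1, temp_teacher=0.04, weight=1.0):
    """Hypothetical combined objective: masked reconstruction + self-distillation.

    pred_pixels, target_pixels: (B, N, D) patch predictions / ground truth
    mask: (B, N) binary, 1 = patch was masked out of the student's input
    student_logits, teacher_logits: (B, K) projection-head outputs
    """
    # Masked image reconstruction: MSE computed only on masked patches,
    # so the model must infer missing content from visible context.
    per_patch = ((pred_pixels - target_pixels) ** 2).mean(axis=-1)  # (B, N)
    recon = (per_patch * mask).sum() / max(mask.sum(), 1.0)

    # Self-distilled contrastive term: cross-entropy between a sharpened
    # (low-temperature) teacher distribution and the student distribution.
    # The teacher would be an EMA copy of the student; its targets are
    # treated as constants (no gradient flows through them).
    teacher_probs = softmax(teacher_logits / temp_teacher)
    student_logp = np.log(softmax(student_logits / temp_student) + 1e-12)
    distill = -(teacher_probs * student_logp).sum(axis=-1).mean()

    return recon + weight * distill
```

The reconstruction term pushes the features toward low-level detail (useful for dense prediction), while the distillation term pushes them toward high-level semantics (useful for zero-shot and few-label transfer), matching the two goals the abstract attributes to the unified objective.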