The paper introduces VidEoMT, a video segmentation model built on a plain Vision Transformer (ViT) encoder that eliminates the need for specialized tracking modules through a lightweight query propagation mechanism. VidEoMT carries information across frames by reusing queries from the previous frame and fusing them with a set of learned, temporally agnostic queries, enabling temporal modeling in an encoder-only architecture. The model achieves competitive accuracy while running 5–10x faster than existing methods, reaching up to 160 FPS with a ViT-L backbone.
Ditch the complex trackers: a plain ViT encoder, augmented with a clever query propagation trick, delivers competitive video segmentation at up to 10x the speed.
Existing online video segmentation models typically combine a per-frame segmenter with complex, specialized tracking modules. While effective, these modules introduce significant architectural complexity and computational overhead. Recent studies suggest that plain Vision Transformer (ViT) encoders, when scaled with sufficient capacity and large-scale pre-training, can perform accurate image segmentation without specialized modules. Motivated by this observation, we propose the Video Encoder-only Mask Transformer (VidEoMT), a simple encoder-only video segmentation model that eliminates the need for dedicated tracking modules. To enable temporal modeling in an encoder-only ViT, VidEoMT introduces a lightweight query propagation mechanism that carries information across frames by reusing queries from the previous frame. To balance this with adaptability to new content, it employs a query fusion strategy that combines the propagated queries with a set of temporally agnostic learned queries. As a result, VidEoMT attains the benefits of a tracker without the added complexity, achieving competitive accuracy while running 5–10x faster, at up to 160 FPS with a ViT-L backbone. Code: https://www.tue-mps.org/videomt/
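To make the query propagation and fusion idea concrete, here is a minimal PyTorch sketch of how such a mechanism could look. This is an illustrative reconstruction, not the authors' implementation: the class name, the query count, and the concatenate-then-project fusion rule are all assumptions, and the actual paper may fuse or propagate queries differently.

```python
# Illustrative sketch of query propagation + fusion (assumed design,
# not the authors' code): queries from the previous frame carry object
# identity forward, while learned queries keep the model adaptable
# to newly appearing content.
import torch
import torch.nn as nn


class QueryPropagationSketch(nn.Module):
    def __init__(self, num_queries: int = 100, dim: int = 256):
        super().__init__()
        # Temporally agnostic queries, learned once and shared by all frames.
        self.learned_queries = nn.Parameter(torch.randn(num_queries, dim))
        # One possible fusion: project the concatenation of propagated
        # and learned queries back to the query dimension.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, prev_queries: torch.Tensor | None) -> torch.Tensor:
        """Produce the queries fed to the encoder for the current frame.

        prev_queries: query embeddings output for the previous frame
        (None for the first frame), shape (num_queries, dim).
        """
        if prev_queries is None:
            # First frame: no temporal context yet, use learned queries only.
            return self.learned_queries
        # Later frames: fuse propagated queries (temporal continuity)
        # with learned queries (adaptability to new objects).
        fused = torch.cat([prev_queries, self.learned_queries], dim=-1)
        return self.fuse(fused)
```

In an encoder-only model of this kind, the resulting queries would be processed jointly with the current frame's patch tokens by the ViT blocks, and the per-frame query outputs would then serve as `prev_queries` for the next frame, giving temporal modeling without a dedicated tracking module.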