The paper introduces XStreamVGGT, a method to reduce the memory footprint of StreamVGGT, a streaming 3D reconstruction transformer, by compressing the KV cache. XStreamVGGT employs a two-pronged approach: pruning redundant KVs from multi-view inputs based on token importance and quantizing the remaining KV tensors. Experiments demonstrate a 4.42x reduction in memory usage and a 5.48x speedup in inference with minimal performance degradation, making streaming 3D applications more scalable.
Squeeze 3D vision transformers: XStreamVGGT slashes memory consumption by 4.42x and accelerates inference by 5.48x via pruning and quantization of the KV cache, all with negligible performance loss.
Learning-based 3D visual geometry models have benefited substantially from large-scale transformers. Among these, StreamVGGT leverages frame-wise causal attention for strong streaming reconstruction, but suffers from unbounded KV cache growth, leading to escalating memory consumption and inference latency as input frames accumulate. We propose XStreamVGGT, a tuning-free approach that systematically compresses the KV cache through joint pruning and quantization, enabling extremely memory-efficient streaming inference. Specifically, redundant KVs originating from multi-view inputs are pruned through efficient token importance identification, enforcing a fixed memory budget. Leveraging the unique distribution of KV tensors, we incorporate KV quantization to further reduce memory consumption. Extensive evaluations show that XStreamVGGT incurs largely negligible performance degradation while reducing memory usage by 4.42$\times$ and accelerating inference by 5.48$\times$, enabling scalable and practical streaming 3D applications. The code is available at https://github.com/ywh187/XStreamVGGT/.
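The two compression steps described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the importance scores (stand-ins for whatever token-importance signal the paper uses, e.g. accumulated attention mass), and the simple symmetric per-channel int8 quantization are all illustrative assumptions.

```python
import numpy as np

def prune_kv(keys, values, importance, budget):
    # Keep only the `budget` most important cached tokens, capping the
    # KV cache at a fixed size. `importance` is a hypothetical per-token
    # score (e.g. accumulated attention each cached token has received).
    keep = np.argsort(importance)[-budget:]
    keep.sort()  # preserve the temporal order of surviving tokens
    return keys[keep], values[keep]

def quantize_int8(x):
    # Symmetric per-channel int8 quantization of a KV tensor:
    # store int8 codes plus one float scale per channel.
    scale = np.abs(x).max(axis=0) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float KV tensor for attention computation.
    return q.astype(np.float32) * scale

# Toy cache: 6 cached tokens with 4-dimensional keys/values.
rng = np.random.default_rng(0)
K = rng.standard_normal((6, 4)).astype(np.float32)
V = rng.standard_normal((6, 4)).astype(np.float32)
importance = np.array([0.9, 0.1, 0.8, 0.05, 0.7, 0.2])

Kp, Vp = prune_kv(K, V, importance, budget=4)  # cache held to 4 tokens
Kq, Ks = quantize_int8(Kp)                     # remaining KVs stored as int8
err = np.abs(dequantize(Kq, Ks) - Kp).max()    # small reconstruction error
print(Kp.shape, Kq.dtype, err)
```

Pruning bounds the number of cached tokens regardless of how many frames arrive, while quantization shrinks each surviving entry from 4 bytes to 1 byte plus a per-channel scale; together these account for the kind of multiplicative memory savings the paper reports.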