Concept erasure in text-to-image models is mostly smoke and mirrors: a text-free attack can still regenerate "forgotten" concepts by exploiting the model's latent visual knowledge.
Achieve 45x compression of 3D Gaussian Splatting data while *improving* visual fidelity by over 10% with a streaming-friendly octree-based codec.
Stream 3D Gaussian Splatting scenes with higher visual quality and lower bandwidth by predicting user viewpoints and dynamically adapting bitrate using deep reinforcement learning.
Distill visual foundation models into event stream representation learning, surpassing prior methods in generalization, data efficiency, and transferability.