School of Integrated Circuits, Harbin Institute of Technology Shenzhen, Shenzhen, Guangdong, China
A single adaptive framework can boost the efficiency of zeroth-order optimization by up to 3x without increasing memory usage.
Finally, one-shot 3D head avatars can have realistic hair, thanks to decoupled modeling and physics-based simulation.
Randomly initialized neural nets can solve high-dimensional integro-differential equations like neutron transport faster and more stably than both physics-informed neural networks and traditional deterministic methods.
Smoke-GS lets you see through the haze, reconstructing 3D scenes from smoky images with surprising clarity by explicitly modeling view-dependent smoke appearance.
Reconstructing 3D scenes from images obscured by smoke and extreme darkness is now significantly more achievable, thanks to insights gleaned from the NTIRE 2026 challenge.
Achieve robust multimodal fusion even with missing modalities by ensuring the fusion head always receives a complete, fixed-size input via learned proxy tokens.
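The proxy-token idea can be sketched in a few lines: keep one learned vector per modality and substitute it whenever that modality's features are absent, so the concatenated fusion input never changes size. This is a minimal sketch of the general technique, not the paper's implementation; the class name, random initialization, and concatenation-based fusion are all assumptions.

```python
import numpy as np

class ProxyFusion:
    """Sketch: substitute a learned proxy token for each missing modality
    so the downstream fusion head always sees a fixed-size input."""

    def __init__(self, modalities, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.modalities = modalities
        # one trainable proxy vector per modality (random init stands in
        # for learned parameters here)
        self.proxies = {m: rng.standard_normal(dim) for m in modalities}

    def assemble(self, features):
        # features: dict mapping modality name -> feature vector;
        # absent modalities fall back to their proxy token
        parts = [features.get(m, self.proxies[m]) for m in self.modalities]
        return np.concatenate(parts)  # always len(modalities) * dim

fusion = ProxyFusion(["rgb", "audio", "text"], dim=4)
full = fusion.assemble({"rgb": np.ones(4), "audio": np.zeros(4), "text": np.ones(4)})
partial = fusion.assemble({"rgb": np.ones(4)})  # audio and text missing
assert full.shape == partial.shape == (12,)
```

In a real model the proxy vectors would be trained jointly with the fusion head, so they learn to encode a useful "modality absent" signal rather than staying random.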
Forget fine-tuning: this method uses smart patch selection to adapt frozen LVLMs for deepfake detection, outperforming baselines without any training.
Injecting "beneficial noise" into cross-attention mechanisms can significantly improve unsupervised domain adaptation by forcing models to focus on content rather than style distractions.
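Noise injection into attention can be illustrated with plain scaled dot-product cross-attention where Gaussian noise is added to the logits before the softmax. The injection point (logits) and the noise distribution are assumptions for illustration; the paper's exact scheme and schedule may differ.

```python
import numpy as np

def cross_attention(q, k, v, noise_std=0.0, rng=None):
    """Scaled dot-product cross-attention with optional noise added to the
    attention logits, a sketch of training-time 'beneficial noise'."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    if noise_std > 0:
        rng = rng or np.random.default_rng(0)
        # perturb the logits so attention cannot latch onto spurious
        # (e.g. style-specific) key-query alignments
        logits = logits + rng.normal(0.0, noise_std, logits.shape)
    # numerically stable softmax over the key axis
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(1)
q = rng.standard_normal((2, 4))
k = rng.standard_normal((3, 4))
v = rng.standard_normal((3, 5))
clean = cross_attention(q, k, v)
noisy = cross_attention(q, k, v, noise_std=0.5)
assert clean.shape == noisy.shape == (2, 5)
```

During training the noise acts as a regularizer on the attention map; at inference `noise_std=0.0` recovers standard cross-attention.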
Can AI transform a grumpy cat meme into a beacon of positivity while keeping the cat recognizable?
BrainSTR disentangles subtle disease signatures in dynamic brain networks by explicitly modeling spatio-temporal dependencies with contrastive learning, revealing interpretable biomarkers for neuropsychiatric disorders.
Forget monolithic adapters: a hierarchical "expert forest" leverages semantic relationships between tasks to achieve state-of-the-art performance in class-incremental learning.
Ditch the codebook: VP-VAE achieves stable VQ-VAE training by perturbing latent vectors instead of relying on explicit vector quantization.
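The core swap can be sketched with a toy linear autoencoder: instead of snapping encoder outputs to codebook entries, add small noise to the latents during training so the decoder learns to tolerate a discretization-like bottleneck. The linear encoder/decoder and the Gaussian perturbation are illustrative assumptions, not VP-VAE's actual architecture.

```python
import numpy as np

def encode(x, W_enc):
    return x @ W_enc          # toy linear encoder

def decode(z, W_dec):
    return z @ W_dec          # toy linear decoder

def perturb(z, std, rng):
    # training-time perturbation standing in for vector quantization:
    # noise bottlenecks the latent without a codebook or
    # nearest-neighbour lookup
    return z + rng.normal(0.0, std, z.shape)

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((8, 3))
W_dec = rng.standard_normal((3, 8))
x = rng.standard_normal((4, 8))

z = encode(x, W_enc)
z_train = perturb(z, std=0.1, rng=rng)   # training pass: perturbed latents
z_eval = z                               # inference: latents used as-is
assert z_train.shape == z_eval.shape == (4, 3)
assert decode(z_train, W_dec).shape == x.shape
```

Because there is no codebook lookup, there is also no straight-through gradient estimator to tune, which is one plausible reading of the "stable training" claim.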
Transformer-based visual trackers, often assumed robust, can be severely disrupted by patch-targeted adversarial noise using far fewer queries than prior attacks required.