Domain shifts and novel classes at test time can be tamed by nudging features back towards the source distribution, even for out-of-distribution examples.
Transformer-based architectures can now outperform CNNs in multi-view crowd tracking, especially in large, complex real-world scenes, thanks to a novel view-ground interaction mechanism.
Ditch the sparsity hyperparameter search: sparsemax attention in autoencoders automatically adapts neuron sparsity, boosting reconstruction and concept quality.
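The paper's training recipe isn't reproduced here; as a rough reference for the mechanism named above, sparsemax itself (Martins & Astudillo, 2016) can be sketched in NumPy. It projects a score vector onto the probability simplex, producing exact zeros — which is why no separate sparsity hyperparameter is needed:

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Euclidean projection of z onto the probability simplex.
    Unlike softmax, the output contains exact zeros."""
    z_sorted = np.sort(z)[::-1]          # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    # support size: largest k with 1 + k * z_sorted[k-1] > cumsum[k-1]
    support = k[1 + k * z_sorted > cumsum]
    k_max = support[-1]
    tau = (cumsum[k_max - 1] - 1.0) / k_max   # threshold subtracted from all scores
    return np.maximum(z - tau, 0.0)

# A dominant score takes all the mass; the rest are exactly zero.
print(sparsemax(np.array([2.0, 0.0, 0.0])))  # → [1. 0. 0.]
```

The output always sums to 1, so it can drop in wherever softmax attention weights are used.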
Style transfer can now capture the essence of artistic abstraction, not just surface-level appearance, by explicitly reinterpreting image structure.
LLM-based judges, widely used for automated evaluation, are riddled with diverse biases that can be significantly reduced through bias-aware training using RL and contrastive learning.
Cycle-consistent training unlocks robust layered image decomposition in diffusion models, even with complex interactions like shading and reflections.
By explicitly modeling specular effects with view-dependent opacity, this augmented Gaussian Splatting method leapfrogs NeRFs in rendering performance and parameter efficiency.