A novel framework tackles memory dilution in LLMs head-on, not only preserving information but also amplifying reasoning capabilities.
LLMs get a reasoning boost by treating information extraction not as a one-off task, but as a dynamic cache that persists and filters information across multiple steps.
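One way to picture this idea is a cache object that outlives a single extraction call, accumulates facts across steps, and drops low-relevance ones before the next reasoning step. The sketch below is illustrative only: the names (ExtractionCache, Fact, keep_top_k) and the top-k relevance filter are assumptions, not the paper's actual design.

```python
# Minimal sketch of a persistent, filtering extraction cache.
# All names and the top-k filtering rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Fact:
    text: str     # extracted statement
    score: float  # relevance to the current question
    step: int     # reasoning step at which it was extracted


@dataclass
class ExtractionCache:
    """Persists extracted facts across reasoning steps and filters out low-relevance ones."""
    facts: list[Fact] = field(default_factory=list)
    keep_top_k: int = 20

    def add(self, new_facts: list[Fact]) -> None:
        # Accumulate new extractions, then keep only the most relevant facts
        # so repeated steps do not dilute the working context.
        self.facts.extend(new_facts)
        self.facts.sort(key=lambda f: f.score, reverse=True)
        self.facts = self.facts[: self.keep_top_k]

    def as_context(self) -> str:
        # Serialize the surviving facts into a prompt prefix for the next step.
        return "\n".join(f"- {f.text}" for f in self.facts)


# Usage: feed each step's extractions into the cache, then prepend
# cache.as_context() to the prompt for the following reasoning step.
cache = ExtractionCache(keep_top_k=5)
cache.add([Fact("The invoice total is $42.", score=0.9, step=1)])
print(cache.as_context())
```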
A unified benchmark reveals the trade-offs between pixel-wise accuracy and perceptual realism in state-of-the-art image super-resolution techniques.
Transformers' expressivity explodes combinatorially with sequence length, embedding dimension, and depth, reaching Θ(N^(d_model·L)) linear regions.
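Spelled out, the claimed bound from the summary (with N the sequence length, d_model the embedding dimension, and L the depth) is:

```latex
% Linear-region count as stated in the summary above.
% N = sequence length, d_model = embedding dimension, L = depth.
\[
  \#\{\text{linear regions}\} \;=\; \Theta\!\left(N^{\,d_{\text{model}} \cdot L}\right)
\]
```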
The landscape of deep learning optimizers is vast, but this paper cuts through the noise to reveal the fundamental trade-offs and promising future directions for efficient, robust, and trustworthy training.
Current image restoration models still fail to strike the right balance between noise reduction, detail fidelity, and accurate color in real-world, low-light portrait scenarios, highlighting a critical gap this challenge aims to close.
Current VLMs, despite excelling at general reasoning, still fail to accurately identify food and estimate nutrition, even when given multiple views and chain-of-thought prompting.