Shanghai Jiao Tong University
LLMs, when combined with efficient indexing and noise reduction, can extract actionable insights from noisy customer incident data with high accuracy and low latency at enterprise scale.
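The pipeline hinted at here (noise reduction, then indexing, then LLM summarization) can be sketched minimally. The sketch below is a hypothetical illustration, not the paper's implementation: it normalizes away volatile fields in incident messages so near-duplicates collapse, then builds a tiny inverted index whose hits would be the candidates handed to an LLM. All names (`normalize`, `build_index`, `search`) are made up for this example.

```python
import re
from collections import defaultdict

def normalize(msg: str) -> str:
    """Collapse volatile fields (timestamps, hex ids, numbers) so
    near-duplicate incidents deduplicate to one canonical form."""
    msg = re.sub(r"\b\d{4}-\d{2}-\d{2}[T ][\d:.]+\b", "<ts>", msg)
    msg = re.sub(r"\b0x[0-9a-fA-F]+\b", "<id>", msg)
    msg = re.sub(r"\b\d+\b", "<n>", msg)
    return msg.lower().strip()

def build_index(incidents):
    """Deduplicate normalized incidents, then build an inverted index
    mapping each token to the set of incident ids containing it."""
    seen, index, docs = set(), defaultdict(set), []
    for raw in incidents:
        key = normalize(raw)
        if key in seen:
            continue
        seen.add(key)
        doc_id = len(docs)
        docs.append(raw)
        for tok in re.findall(r"[a-z<>]+", key):
            index[tok].add(doc_id)
    return docs, index

def search(docs, index, query):
    """Return incidents matching every query token; in the full pipeline
    these would be the snippets an LLM summarizes into an insight."""
    sets = [index.get(tok, set()) for tok in query.lower().split()]
    hits = set.intersection(*sets) if sets else set()
    return [docs[i] for i in sorted(hits)]
```

Deduplicating before indexing is what keeps retrieval latency low at scale: thousands of repeated alerts for one underlying incident become a single entry.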
MLLMs can now efficiently process 10K-frame videos without training, by adaptively selecting tokens based on the model's own uncertainty about the content.
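The selection principle described in this summary (keep the frames the model is least certain about) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: it scores each frame by the Shannon entropy of a (here, precomputed) predictive distribution and keeps a fixed budget of the highest-entropy frames, in temporal order. The names `entropy` and `select_frames` are hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a categorical distribution;
    higher entropy means the model is less certain about this frame."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_frames(frame_probs, budget):
    """Keep the `budget` frames with the most uncertain predictive
    distributions, returned in their original temporal order."""
    ranked = sorted(range(len(frame_probs)),
                    key=lambda i: entropy(frame_probs[i]),
                    reverse=True)
    return sorted(ranked[:budget])
```

A uniform distribution (maximal uncertainty) is always selected before a sharply peaked one, which is the intuition behind training-free, uncertainty-adaptive token pruning for very long videos.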
Forget scaling model size: QuitoBench reveals that simply scaling training data delivers larger gains for time series forecasting, across both deep learning models and foundation models.