Shanghai Jiao Tong University
Untangling the mess of "streaming LLMs," this paper delivers a clear taxonomy that distinguishes between streaming generation, streaming inputs, and interactive architectures.
LVLMs can reason over video streams *much* faster and more accurately by thinking concurrently with the incoming data rather than processing it in batches.
LLMs actually *do* improve time series forecasting, especially for cross-domain generalization, overturning prior doubts with a massive 8-billion-observation study.