Ant Group
LLMs, when combined with efficient indexing, can extract actionable incidents from just a handful of noisy user descriptions in real time, enabling rapid anomaly detection in large-scale cloud services.
Forget scaling model size: QuitoBench reveals that simply scaling training data delivers bigger gains for time series forecasting, across both deep learning and foundation models.
The complex JS-Wasm boundary is fertile ground for new vulnerabilities, and Weaver is the first fuzzer to effectively till it.
Multilingual embeddings just got a whole lot smaller and faster, with F2LLM-v2 models outperforming larger counterparts while supporting over 200 languages.
LLMs can be directly used as graph kernels for text-rich graphs, enabling message passing on raw text and outperforming methods that rely on static embeddings.
Noisy issue descriptions holding back your software agent? SWE-Fuse unlocks 60% higher solve rates by fusing issue-guided and issue-free training trajectories.