Renmin University of China
Optimal Transport offers a surprisingly effective and theoretically grounded approach to preference learning, outperforming existing methods in aligning LLMs with human values and reasoning abilities.
LLMs, like humans, exhibit a "frequency bias," performing better when prompted and fine-tuned with more common textual expressions.
By mixing flows and using a teacher-student approach, MMAE learns more robust representations of encrypted traffic, achieving state-of-the-art classification performance.
Encrypted traffic classification gets a major upgrade: TrafficMoE's dynamic, context-aware approach outperforms static methods by disentangling headers and payloads and filtering out noise.
Wafer-scale SRAM CIM can deliver up to 17x better energy efficiency for LLM inference by eliminating off-chip data movement and using token-grained pipelining.
Forget static prompts: this method dynamically adjusts persona influence during decoding, boosting role-playing agent realism without costly fine-tuning.
An 80B model that runs like a 3B? Qwen3-Coder-Next shows you can get competitive coding agent performance with a fraction of the active parameters, thanks to smart training.
Achieve geometrically consistent and explorable 3D scene generation from a single image by reformulating the problem as multi-view stereo matching on anchor views projected from a panorama.
Achieve high-quality semantic segmentation on low-quality images by injecting segmentation priors into image restoration, outperforming existing methods that focus solely on pixel-level fidelity.
Achieve content-consistent image edits without sacrificing quality by using region-regularized reinforcement learning that preserves unedited regions.
Achieve scalable and consistent multi-reference image editing by dynamically serializing reference images into a coherent latent sequence, outperforming existing diffusion-based methods.
LLMs can solve competitive coding problems much more reliably by actively searching for the *right* test cases, rather than relying on random or pre-defined inputs.