University of California San Diego
LLMs are transforming conversational AI research, but this survey shows that leveraging them effectively for user simulation demands a new taxonomy and a clear map of the open challenges.
MLLMs can "think" with images, but their actions often fail to match their reasoning; this paper closes that gap with a new training method that forces them to explain what they see.
Stop hand-crafting hints for RL agents: HiLL learns to generate adaptive hints that actually improve the agent's performance on the original task, not just the hinted one.