8 papers from Berkeley AI Research (BAIR) on Natural Language Processing
Reading Activity Traces (RATs) reveal the hidden creative work lost when algorithms automate interpretation, offering a path to design AI that preserves human insight.
Most social media platforms govern AI-generated content by simply applying existing content moderation policies, leaving key issues like ownership and monetization largely unaddressed.
Advisor performance paradoxically suffers most when personal AI is used moderately, highlighting the complex strategic interactions introduced by personal AI assistants.
Achieve 13-15% more efficient LLM watermark detection by using e-values for anytime-valid inference, enabling early stopping without sacrificing statistical guarantees.
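The anytime-valid detection idea behind this teaser can be illustrated generically: per-token e-values multiply into a running product, and by Ville's inequality the test stays valid no matter when you stop, so detection can halt as soon as the product crosses 1/alpha. This is a minimal sketch of a generic e-value sequential test, not the paper's actual method; the function name and the constant per-token e-value are illustrative assumptions.

```python
def sequential_e_test(e_values, alpha=0.05):
    """Generic e-value sequential test (illustrative, not the paper's method).

    Multiplies per-token e-values into a running e-process and stops
    as soon as the product reaches 1/alpha. Ville's inequality makes
    this rejection rule valid at any data-dependent stopping time,
    which is what enables early stopping without losing the
    type-I-error guarantee.
    """
    product = 1.0
    for t, e in enumerate(e_values, start=1):
        product *= e
        if product >= 1.0 / alpha:
            return True, t  # evidence threshold crossed after t tokens
    return False, len(e_values)


# Hypothetical example: watermarked text yields e-values above 1 on
# average; with a constant e-value of 1.2 per token, the product
# crosses 1/0.05 = 20 after 17 tokens instead of scanning all 100.
detected, stop_time = sequential_e_test([1.2] * 100, alpha=0.05)
```

Under the null (unwatermarked text), each e-value has expectation at most 1, so the running product rarely grows, and the test controls false positives at level alpha regardless of when it stops.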
Language models organize concepts like months and years into surprisingly clean geometric structures because of hidden symmetries in language statistics, even when those statistics are heavily perturbed.
Denoising diffusion models can significantly outperform discriminative methods in learning-to-rank, suggesting a new path for improving information retrieval.
LLMs evaluating job candidates exhibit significant bias against hedging language, docking candidates' scores by 25.6% on average, even when the content is equivalent.
An LLM can analyze patient records like a clinician, predicting HIV care disengagement with clinically relevant justifications, potentially revolutionizing resource allocation and patient outcomes in sub-Saharan Africa.