Code-switching can degrade information retrieval performance by up to 27%, revealing a critical blind spot in current multilingual models.
VLMs already contain a rich latent space of aesthetic features that can be unlocked for personalized image ranking with just a linear readout, no fine-tuning needed.
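The "linear readout" here is essentially a linear probe trained on frozen VLM embeddings. A minimal sketch of the idea, assuming pooled image embeddings and per-user star ratings; the names, shapes, and ridge-regression solver below are illustrative assumptions, not the paper's actual method:

```python
# Sketch: personalize image ranking with a single linear layer over frozen
# VLM embeddings. Everything here (shapes, ratings, solver) is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen VLM image embeddings (e.g. pooled vision-encoder output).
n_images, dim = 500, 768
embeddings = rng.standard_normal((n_images, dim))

# Stand-in for one user's aesthetic ratings of a small labeled subset.
n_labeled = 50
labeled_idx = rng.choice(n_images, n_labeled, replace=False)
ratings = rng.uniform(1.0, 5.0, n_labeled)  # e.g. 1-5 star ratings

# The entire "personalization" step: ridge-regularized least squares for a
# linear readout w, so score(x) = x @ w. No VLM weights are updated.
X = embeddings[labeled_idx]
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ ratings)

# Rank the full catalog by the learned linear score.
scores = embeddings @ w
ranking = np.argsort(-scores)
print("top-5 images for this user:", ranking[:5])
```

Because only `w` is learned, adapting to a new user is a closed-form solve over a handful of labeled images rather than a fine-tuning run.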
Adapting Labovian narrative analysis to Japanese reveals both the challenges and the opportunities of cross-linguistic qualitative research, highlighting the need for language-specific guidelines.
NeuronMoE slashes multilingual LLM parameter counts by 40% without sacrificing performance, by cleverly allocating experts based on neuron-level language specialization rather than blunt layer-level assignments.
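A minimal sketch of what neuron-level allocation could look like, assuming per-language activation statistics collected offline from a dense model; the specialization score, threshold, and grouping below are my assumptions for illustration, not NeuronMoE's actual procedure:

```python
# Sketch: assign FFN neurons to experts based on how language-specific their
# activations are, instead of dedicating whole layers to languages.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_langs = 4096, 8

# Stand-in for mean absolute FFN activations per neuron per language,
# gathered by running the dense model over per-language corpora.
act = np.abs(rng.standard_normal((n_neurons, n_langs))) + 0.1

# Specialization: fraction of a neuron's activation mass coming from its
# single most-active language (1/n_langs = fully shared, 1 = exclusive).
share = act.max(axis=1) / act.sum(axis=1)
dominant_lang = act.argmax(axis=1)

threshold = 0.3  # illustrative cutoff for calling a neuron language-specific
specific = share > threshold

# Shared neurons stay in a dense backbone; specialized neurons are grouped
# into per-language experts, so expert size tracks measured specialization
# rather than a fixed per-layer budget.
experts = {lang: np.flatnonzero(specific & (dominant_lang == lang))
           for lang in range(n_langs)}
backbone = np.flatnonzero(~specific)

print(f"backbone neurons: {backbone.size}")
for lang, idx in experts.items():
    print(f"expert for language {lang}: {idx.size} neurons")
```

The parameter savings come from the fact that only the (typically small) specialized fraction of neurons is replicated into experts, while the shared backbone is stored once.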