The paper introduces Synapse Consolidation (SyCo), a parameter-efficient test-time adaptation method for LLMs that draws inspiration from molecular signaling cascades to update low-rank adapters. SyCo uses a Rac1-inspired pathway to confine plasticity to a tail-gradient subspace, preserving source knowledge, and a MAPK-inspired pathway to suppress noisy updates and consolidate useful adaptations. The method is evaluated in a new Multi-source Open-set Adaptation (MOA) setting, demonstrating superior performance over strong baselines when adapting to both unseen tasks and data shifts across 18 NLP datasets.
LLMs can adapt to new tasks and data distributions in the wild without catastrophic forgetting by mimicking how fruit flies consolidate memories.
Large Language Models (LLMs) generalize across tasks via reusable representations and flexible reasoning, yet remain brittle in real deployment under evolving tasks and continual distribution shift. A common remedy is Test-Time Adaptation (TTA), but existing methods update models with hand-designed unsupervised objectives over the full parameter space and largely overlook both the preservation of shared source knowledge and the reliability of adaptation signals. Drawing on the molecular signaling cascades of memory updating in Drosophila, we propose Synapse Consolidation (SyCo), a parameter-efficient LLM adaptation method that updates low-rank adapters through Rac1 and MAPK pathways under the guidance of a structured TTA objective driven by problem understanding, process understanding, and a source-domain guardrail. Rac1 confines plasticity to a tail-gradient subspace that is less critical for source knowledge, enabling rapid specialization while preserving source representations. MAPK uses a tiered controller to suppress noisy updates and consolidate useful adaptations under non-stationary streams. To model real deployments with multiple sources and continually emerging tasks, we introduce the Multi-source Open-set Adaptation (MOA) setting, in which a model is trained on multiple labeled source tasks and then adapts on open, non-stationary unlabeled test streams that mix seen and unseen tasks with partial overlap in label and intent space. Across 18 NLP datasets in the MOA setting, SyCo consistently outperforms strong baselines, achieving 78.31% on unseen-task adaptation and 85.37% on unseen-data shifts.
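The two mechanisms above can be illustrated with a minimal sketch. The first function masks an adapter gradient so that only its "tail" entries (the smallest-magnitude components, assumed here to be the least critical for source knowledge) receive updates; the second is a tiered gate that passes, damps, or suppresses an update depending on a noise signal. Both are simplified stand-ins under assumed thresholds, not the paper's actual Rac1 or MAPK implementations, and all names and parameters here are hypothetical.

```python
import numpy as np

def tail_gradient_mask(grad, tail_frac=0.25):
    """Sketch of confining plasticity to a tail-gradient subspace:
    keep only the tail_frac fraction of entries with the smallest
    magnitude and zero the rest (illustrative, not the paper's method)."""
    flat = np.abs(grad).ravel()
    k = max(1, int(tail_frac * flat.size))
    # magnitude threshold below which entries count as "tail"
    thresh = np.partition(flat, k - 1)[k - 1]
    return grad * (np.abs(grad) <= thresh)

def tiered_update_gate(noise_signal, lo=0.5, hi=1.5):
    """Sketch of a tiered controller: full step for reliable signals,
    damped step for borderline ones, no step for noisy ones
    (thresholds lo/hi are assumed, not from the paper)."""
    if noise_signal < lo:
        return 1.0   # consolidate: apply the full update
    if noise_signal < hi:
        return 0.1   # damp a borderline update
    return 0.0       # suppress a noisy update

# toy low-rank adapter gradient
rng = np.random.default_rng(0)
g = rng.normal(size=(4, 8))
g_tail = tail_gradient_mask(g)          # sparse, tail-only gradient
step_scale = tiered_update_gate(0.3)    # reliable signal -> full step
```

In a real adapter update the masked gradient would then be scaled by the gate's output before being applied to the low-rank factors.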