The paper introduces OmniGCD, a modality-agnostic approach to Generalized Category Discovery (GCD) that leverages modality-specific encoders and a synthetically trained Transformer to identify known and novel classes in a zero-shot setting. By constructing a GCD latent space and transforming it with a Transformer, the method decouples representation learning from category discovery. Evaluated on 16 datasets across four modalities, OmniGCD improves classification accuracy for both known and novel classes over baselines, with average percentage-point gains ranging from +1.5 to +17.9 depending on the modality.
Forget dataset-specific fine-tuning: a Transformer trained on synthetic data can unlock zero-shot generalized category discovery across vision, text, audio, and remote sensing modalities.
Generalized Category Discovery (GCD) challenges methods to identify known and novel classes using partially labeled data, mirroring human category learning. Unlike prior GCD methods, which operate within a single modality and require dataset-specific fine-tuning, we propose a modality-agnostic GCD approach inspired by the human brain's abstract category formation. Our $\textbf{OmniGCD}$ leverages modality-specific encoders (e.g., vision, audio, text, remote sensing) to process inputs, followed by dimension reduction to construct a $\textbf{GCD latent space}$, which is transformed at test time into a representation better suited for clustering by a novel synthetically trained Transformer-based model. To evaluate OmniGCD, we introduce a $\textbf{zero-shot GCD setting}$ in which no dataset-specific fine-tuning is allowed, enabling modality-agnostic category discovery. $\textbf{Trained once on synthetic data}$, OmniGCD performs zero-shot GCD across 16 datasets spanning four modalities, improving classification accuracy for known and novel classes over baselines (average percentage-point improvements of $\textbf{+6.2}$, $\textbf{+17.9}$, $\textbf{+1.5}$ and $\textbf{+12.7}$ for vision, text, audio and remote sensing, respectively). This highlights the importance of strong encoders while decoupling representation learning from category discovery: improvements to modality-agnostic methods propagate across all modalities, and encoders can be developed independently of GCD. Our work serves as a benchmark for future modality-agnostic GCD research, paving the way for scalable, human-inspired category discovery. All code is available $\href{https://github.com/Jordan-HS/OmniGCD}{here}$.
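The pipeline the abstract describes (modality-specific encoder features → dimension reduction into a latent space → a test-time transform → clustering, scored with Hungarian-matched accuracy over known and novel classes) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the encoder features are synthetic Gaussian blobs, PCA stands in for the dimension-reduction step, and an identity map stands in for the synthetically trained Transformer.

```python
# Hedged sketch of an OmniGCD-style zero-shot GCD pipeline.
# Stand-ins (assumptions, not the paper's components): synthetic features
# replace encoder outputs, PCA replaces the paper's dimension reduction,
# and the identity map replaces the trained Transformer.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# 1) "Encoder" features: 3 well-separated classes (0, 1 = known, 2 = novel).
n_per_class, dim, n_classes = 50, 128, 3
centers = rng.normal(scale=5.0, size=(n_classes, dim))
X = np.vstack([c + rng.normal(size=(n_per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# 2) Dimension reduction to form the GCD latent space.
Z = PCA(n_components=16, random_state=0).fit_transform(X)

# 3) Test-time transform (the paper's Transformer; identity here).
Z_t = Z

# 4) Cluster the transformed latents to discover categories.
pred = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(Z_t)

# 5) Standard GCD evaluation: Hungarian matching of clusters to labels.
cost = np.zeros((n_classes, n_classes))
for p, t in zip(pred, y):
    cost[p, t] += 1
row, col = linear_sum_assignment(-cost)  # maximize matched counts
acc = cost[row, col].sum() / len(y)
print(f"clustering accuracy: {acc:.2f}")
```

The Hungarian-matching step is the usual way GCD accuracy is reported, since cluster indices carry no inherent class identity; the key property the paper targets is that steps 1–3 require no dataset-specific fine-tuning.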