Novel class discovery (NCD) aims to discover novel categories in an unlabeled dataset by employing a model trained on a labeled dataset with different but semantically related categories. The challenge of this task is that the model must learn discriminative representations from seen categories that can accurately group unseen categories. Existing methods typically pre-train models only on seen data containing a limited number of semantic categories, so the learned representations are less discriminative for the varied unseen categories that may be encountered in the future. In this paper, we propose a novel richer prior knowledge (RPK) module that learns diverse and discriminative representations for future novel categories by exposing the model to a large number of synthetic visual categories. Our insight is that the more categories the model sees during pre-training, the less biased the learned representation space will be toward the base categories. Extensive experiments on a variety of datasets and settings validate the effectiveness of our proposed method. Additionally, our approach can be easily integrated into other methods and achieves superior performance.
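The core idea of exposing a model to synthetic categories during pre-training can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not the paper's actual RPK implementation: it synthesizes new visual categories by mixing pairs of base-class images and assigning each ordered class pair its own fresh label id, thereby enlarging the label space the model is trained to discriminate.

```python
import torch

def synthesize_categories(images, labels, num_base_classes, alpha=1.0):
    """Create synthetic examples whose labels are NEW class ids.

    Hypothetical illustration of the RPK intuition (expose the model to
    many extra categories during pre-training); the paper's actual
    synthesis procedure may differ.

    images: (B, C, H, W) batch of base-class images
    labels: (B,) integer labels in [0, num_base_classes)
    """
    # Pick a random mixing partner for each sample in the batch.
    perm = torch.randperm(images.size(0))
    # Mixing coefficient drawn from a Beta distribution (mixup-style).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = lam * images + (1.0 - lam) * images[perm]
    # Assign each ordered pair (class_a, class_b) its own synthetic class
    # id, placed after the num_base_classes real class ids.
    synth_labels = num_base_classes + labels * num_base_classes + labels[perm]
    return mixed, synth_labels
```

Training on such mixed batches alongside the original labeled data would extend the classifier head from `num_base_classes` to `num_base_classes + num_base_classes**2` outputs, forcing the representation to separate far more categories than the base set alone.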

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.