Unsupervised selective rank fusion for image retrieval tasks
Valem, Lucas Pascotti [UNESP]
Pedronette, Daniel Carlos Guimarães [UNESP]
Several visual features have been developed for content-based image retrieval in the last decades, including global, local, and deep learning-based approaches. However, despite the huge advances in feature development and mid-level representations, a single visual descriptor is often insufficient to achieve effective retrieval results in several scenarios. Mainly due to the diverse aspects involved in human visual perception, the combination of different features has become established as a relevant trend in image retrieval. An intrinsic difficulty lies in the task of selecting the features to combine, which is often supported by supervised learning approaches. Therefore, in the absence of labeled data, selecting features in an unsupervised way is a very challenging, yet essential, task. In this paper, an unsupervised framework is proposed to select and fuse visual features in order to improve the effectiveness of image retrieval tasks. The framework estimates the effectiveness of and correlation among features through a rank-based analysis and uses a list of ranker pairs to determine the selected feature combinations. Highly effective retrieval results were achieved through a comprehensive experimental evaluation conducted on 5 public datasets, involving 41 different features and comparisons with other methods. Relative gains of up to +55% were obtained in relation to the most effective isolated feature.
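The pipeline the abstract outlines — estimate each ranker's effectiveness and the pairwise correlation between rankers from the ranked lists alone, keep complementary pairs, then fuse — can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's actual method: the effectiveness proxy (`reciprocal_authority`, a reciprocal-neighbor density), the correlation proxy (top-k Jaccard overlap), the `corr_max` threshold, and Borda-count fusion are all hypothetical stand-ins for the rank-based measures the paper defines.

```python
# Hedged sketch of unsupervised selective rank fusion.
# All measures here are illustrative assumptions, not the paper's definitions.
from itertools import combinations

def jaccard_top_k(rank_a, rank_b, k=5):
    """Correlation proxy: Jaccard overlap of the top-k items of two ranked lists."""
    sa, sb = set(rank_a[:k]), set(rank_b[:k])
    return len(sa & sb) / len(sa | sb)

def reciprocal_authority(ranked_lists, k=5):
    """Effectiveness proxy: fraction of top-k neighbors that are reciprocal,
    i.e., the query also appears in the neighbor's own top-k list."""
    hits = 0
    for query, ranking in ranked_lists.items():
        for item in ranking[:k]:
            if item in ranked_lists and query in ranked_lists[item][:k]:
                hits += 1
    return hits / (len(ranked_lists) * k)

def select_pairs(rankers, k=5, corr_max=0.8):
    """Score ranker pairs by summed estimated effectiveness, discarding
    highly correlated pairs that add little complementary information."""
    eff = {name: reciprocal_authority(rl, k) for name, rl in rankers.items()}
    pairs = []
    for (na, ra), (nb, rb) in combinations(rankers.items(), 2):
        shared = set(ra) & set(rb)  # queries both rankers answered
        corr = sum(jaccard_top_k(ra[q], rb[q], k) for q in shared) / max(len(shared), 1)
        if corr < corr_max:
            pairs.append(((na, nb), eff[na] + eff[nb]))
    return sorted(pairs, key=lambda p: -p[1])

def borda_fuse(rank_a, rank_b):
    """Fuse two ranked lists for one query by Borda count (higher rank = more points)."""
    n = max(len(rank_a), len(rank_b))
    score = {}
    for ranking in (rank_a, rank_b):
        for pos, item in enumerate(ranking):
            score[item] = score.get(item, 0) + (n - pos)
    return sorted(score, key=lambda it: -score[it])
```

In this sketch a "ranker" is a dict mapping each query to its ranked list of items, so selection and fusion operate purely on rank positions — no labels or distance scores are needed, which is the unsupervised setting the abstract describes.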
Content-based image retrieval, Correlation measure, Effectiveness estimation, Rank-aggregation, Unsupervised late fusion
Neurocomputing, v. 377, p. 182-199.