Randomized Autoencoder-based Representation for Dynamic Texture Recognition
Type
Conference paper
Abstract
This paper proposes a single-parameter spatio-temporal representation for dynamic texture recognition based on statistical measures of the decoder weights learned by Randomized Autoencoders (RAE) applied across three orthogonal planes. First, for each orthogonal plane, a randomized autoencoder is applied to every frame to extract discriminative features. The decoder weights learned for each frame are then concatenated vertically, and three statistical measures, namely the mean, standard deviation, and skewness, are computed to form three partial descriptors. The final spatio-temporal representation is obtained by repeating this procedure on the XY, XT, and YT orthogonal planes and merging the resulting partial descriptors, thereby capturing both appearance and motion characteristics. The representation was evaluated on four benchmarks, UCLA-50, UCLA-9, UCLA-8, and DynTex++, where it achieved high recognition accuracies, demonstrating its robustness and effectiveness for dynamic texture recognition. The results show that combining randomized autoencoders with statistical summarization of the learned weights yields a highly discriminative and robust dynamic texture descriptor, making this approach a promising contribution to the field of dynamic texture analysis.
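
The pipeline described in the abstract can be outlined in code. The following is a minimal sketch, not the authors' implementation: the function names (rae_decoder_weights, plane_descriptor, dynamic_texture_descriptor), the hidden-layer size, the sigmoid activation, the least-squares decoder fit, and the column-wise application of the three statistics are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import skew

def rae_decoder_weights(X, hidden_units=120, rng=None):
    """Fit a randomized autoencoder on X (n_samples x n_features) and return
    the decoder weights obtained by least squares (hypothetical setup)."""
    rng = np.random.default_rng(rng)
    # Encoder weights are random and never trained (the "randomized" part).
    W_enc = rng.standard_normal((X.shape[1], hidden_units))
    b_enc = rng.standard_normal(hidden_units)
    H = 1.0 / (1.0 + np.exp(-(X @ W_enc + b_enc)))   # sigmoid hidden activations
    # Decoder weights: least-squares reconstruction of X from H.
    W_dec, *_ = np.linalg.lstsq(H, X, rcond=None)
    return W_dec                                      # shape: hidden_units x n_features

def plane_descriptor(slices, **kw):
    """Stack per-slice decoder weights vertically and summarize each column
    with the mean, standard deviation, and skewness."""
    W_all = np.vstack([rae_decoder_weights(s, **kw) for s in slices])
    return np.concatenate([W_all.mean(axis=0), W_all.std(axis=0), skew(W_all, axis=0)])

def dynamic_texture_descriptor(video, **kw):
    """video: array of shape (T, H, W). Slice it along the XY, XT, and YT
    planes, compute one partial descriptor per plane, and merge them."""
    T, H, W = video.shape
    planes = [
        [video[t] for t in range(T)],         # XY: spatial frames
        [video[:, y, :] for y in range(H)],   # XT: row-vs-time slices
        [video[:, :, x] for x in range(W)],   # YT: column-vs-time slices
    ]
    return np.concatenate([plane_descriptor(p, **kw) for p in planes])

# Example: a descriptor for a synthetic 50-frame, 48x48 sequence.
descriptor = dynamic_texture_descriptor(np.random.rand(50, 48, 48), hidden_units=32, rng=0)
```

Under these assumptions the only free parameter is the hidden-layer size, which is consistent with the single-parameter claim in the abstract; the exact activation, weight distribution, and the way the three statistics are aggregated into the partial descriptors may differ in the original method.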
Keywords
Dynamic texture analysis, Randomized autoencoders, Representation learning
Language
English
Citation
International Conference on Systems, Signals, and Image Processing.