Human action recognition in videos based on spatiotemporal features and bag-of-poses

Date

2020-10-01

Authors

Varges da Silva, Murilo
Nilceu Marana, Aparecido [UNESP]

Abstract

Currently, a large number of methods use 2D poses to represent and recognize human actions in videos. Most of these methods extract features (e.g., angles and trajectories) computed from raw 2D poses, based on the straight line segments that form the body parts in a 2D pose model. In our work, we propose a new way of representing 2D poses. Instead of using the straight line segments directly, the 2D pose is first converted to a parameter space in which each segment is mapped to a point. Spatiotemporal features are then extracted from the parameter space and encoded using a Bag-of-Poses approach, and the resulting representation is used for human action recognition in the video. Experiments on two well-known public datasets, Weizmann and KTH, showed that the proposed method, using 2D poses encoded in the parameter space, can improve recognition rates, obtaining accuracy competitive with state-of-the-art methods.
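
As an illustration of the pipeline described above, the following is a minimal Python sketch of how the body-part line segments of a 2D pose could be mapped to a parameter space and how per-frame descriptors could be quantized into a Bag-of-Poses histogram. The Hough-style (rho, theta) parameterization, the joint/segment layout, the toy codebook, and all function names are assumptions made for illustration only; the paper's actual parameterization, feature extraction, and codebook construction may differ.

import numpy as np

# Hypothetical set of body-part segments, given as pairs of joint indices
# (the 2D pose model used in the paper may define different segments).
SEGMENTS = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]

def segment_to_parameter_space(p1, p2):
    """Map the straight line through p1 and p2 to a point (rho, theta),
    where theta is the angle of the line's normal and rho is the distance
    of the line from the origin (Hough-style parameterization)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    theta = np.arctan2(dy, dx) + np.pi / 2.0   # direction of the line's normal
    rho = p1[0] * np.cos(theta) + p1[1] * np.sin(theta)
    return np.array([rho, theta])

def pose_descriptor(joints):
    """Concatenate the (rho, theta) points of all body-part segments
    into a single descriptor for one 2D pose (one video frame)."""
    return np.concatenate(
        [segment_to_parameter_space(joints[i], joints[j]) for i, j in SEGMENTS]
    )

def bag_of_poses(descriptors, codebook):
    """Quantize per-frame pose descriptors against a codebook of 'visual
    poses' (e.g., centroids learned on training data) and return a
    normalized histogram describing the whole video."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)           # nearest-centroid assignment
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.uniform(0, 100, size=(30, 6, 2))   # 30 frames, 6 joints (toy data)
    descs = np.stack([pose_descriptor(frame) for frame in video])
    codebook = rng.uniform(-100, 100, size=(8, descs.shape[1]))  # stand-in codebook
    print(bag_of_poses(descs, codebook))

In this sketch, the temporal dimension only enters through the per-video histogram; the spatiotemporal features and the classifier used in the paper are not reproduced here.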

Keywords

Bag-of-poses, Human action recognition, Spatiotemporal features, Surveillance systems, Video sequences

How to cite

Applied Soft Computing Journal, v. 95.
