Reinforcing learning in Deep Belief Networks through nature-inspired optimization

Type

Article

Abstract

Deep learning techniques usually suffer from the vanishing gradient problem, i.e., the gradient grows gradually weaker as it propagates from one layer to another until it finally vanishes and no longer drives the learning process. Previous works have addressed this problem by introducing residual connections that assist gradient propagation. However, this issue has received little attention in the context of Deep Belief Networks. In this paper, we propose a weighted layer-wise information reinforcement approach for Deep Belief Networks. Moreover, we introduce metaheuristic optimization to select suitable connection weights that improve the network's learning capabilities. Experiments conducted on public datasets corroborate the effectiveness of the proposed approach in image classification tasks.
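
To make the idea above concrete, the Python sketch below illustrates one way a weighted layer-wise reinforcement could be wired into a stack of sigmoid layers, and how a nature-inspired optimizer could select the reinforcement weights. It is a minimal illustration only, not the authors' implementation: the layer widths, the residual-style blending rule, the synthetic data, the nearest-centroid scoring, and the random-search placeholder for the metaheuristic are all assumptions made for demonstration.

import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ReinforcedStack:
    """DBN-like stack of equal-width sigmoid layers with residual-style reinforcement."""

    def __init__(self, n_in, n_hidden, n_layers):
        sizes = [n_in] + [n_hidden] * n_layers
        # In the paper the layers would come from pre-trained RBMs; random here.
        self.weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(n) for n in sizes[1:]]

    def forward(self, x, alphas):
        # alphas[k] in [0, 1] weighs how much of layer k's input is re-injected into
        # its output, keeping earlier-layer information (and gradient signal) alive.
        h = x
        for k, (W, b) in enumerate(zip(self.weights, self.biases)):
            out = sigmoid(h @ W + b)
            if k > 0:  # reinforce only where input and output widths match
                out = (1.0 - alphas[k]) * out + alphas[k] * h
            h = out
        return h

def nearest_centroid_accuracy(features, labels):
    # Cheap proxy for how useful the reinforced features are for classification.
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    dists = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float((classes[dists.argmin(axis=1)] == labels).mean())

# Synthetic two-class data standing in for an image-classification dataset.
X = np.vstack([rng.normal(0.0, 1.0, (100, 20)), rng.normal(0.8, 1.0, (100, 20))])
y = np.array([0] * 100 + [1] * 100)

model = ReinforcedStack(n_in=20, n_hidden=32, n_layers=3)

# Plain random search standing in for the nature-inspired optimizer that selects
# the reinforcement weights.
best_alphas, best_score = None, -1.0
for _ in range(50):
    candidate = rng.uniform(0.0, 1.0, size=3)
    score = nearest_centroid_accuracy(model.forward(X, candidate), y)
    if score > best_score:
        best_alphas, best_score = candidate, score

print("selected reinforcement weights:", np.round(best_alphas, 3), "proxy accuracy:", best_score)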

Keywords

Deep Belief Network, Metaheuristic optimization, Optimization, Residual networks, Restricted Boltzmann machines

Language

English

Citation

Applied Soft Computing, v. 108.

Units

Unit
Faculdade de Ciências
FC
Campus: Bauru

