Fine-Tuning Dropout Regularization in Energy-Based Deep Learning

dc.contributor.author: de Rosa, Gustavo H. [UNESP]
dc.contributor.author: Roder, Mateus [UNESP]
dc.contributor.author: Papa, João P. [UNESP]
dc.contributor.institution: Universidade Estadual Paulista (UNESP)
dc.date.accessioned: 2022-05-01T13:41:31Z
dc.date.available: 2022-05-01T13:41:31Z
dc.date.issued: 2021-01-01
dc.description.abstract: Deep Learning architectures have been extensively studied in recent years, mainly due to their discriminative power in Computer Vision. However, one problem related to such models concerns their number of parameters and hyperparameters, which can easily reach hundreds of thousands. Additional drawbacks are their need for extensive training datasets and their high probability of overfitting. Recently, a naïve idea of disconnecting neurons from a network, known as Dropout, has been shown to be a promising solution, though it requires an adequate hyperparameter setting. Therefore, this work addresses finding suitable Dropout ratios through meta-heuristic optimization in the task of image reconstruction. Several energy-based Deep Learning architectures, such as Restricted Boltzmann Machines and Deep Belief Networks, and several meta-heuristic techniques, such as Particle Swarm Optimization, the Bat Algorithm, the Firefly Algorithm, and Cuckoo Search, were employed in such a context. The experimental results show the feasibility of using meta-heuristic optimization to find suitable Dropout parameters on three literature datasets and reinforce bio-inspired optimization as an alternative to empirically choosing regularization-based hyperparameters.
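The Dropout idea summarized in the abstract — disconnecting each neuron with some ratio, where the ratio itself is the hyperparameter being tuned — can be sketched as follows. This is a minimal illustrative example, not code from the paper; the layer shape, seed, and the 0.5 ratio are assumptions for demonstration.

```python
import numpy as np

def dropout_mask(shape, ratio, rng=None):
    """Bernoulli mask that disconnects each unit with probability `ratio`.

    `ratio` is the Dropout hyperparameter that the paper tunes via
    meta-heuristic optimization (e.g., Particle Swarm Optimization).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    return (rng.random(shape) >= ratio).astype(float)

# Hypothetical hidden-layer activations (e.g., of an RBM's hidden units).
hidden = np.ones((4, 8))
mask = dropout_mask(hidden.shape, ratio=0.5)
dropped = hidden * mask  # units where mask == 0 are disconnected this pass
```

In the paper's setting, a meta-heuristic would evaluate candidate `ratio` values by the reconstruction error they yield and keep the best-performing one, rather than fixing the ratio empirically.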
dc.description.affiliation: Department of Computing São Paulo State University
dc.description.affiliationUnesp: Department of Computing São Paulo State University
dc.format.extent: 99-108
dc.identifier: http://dx.doi.org/10.1007/978-3-030-93420-0_10
dc.identifier.citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v. 12702 LNCS, p. 99-108.
dc.identifier.doi: 10.1007/978-3-030-93420-0_10
dc.identifier.issn: 1611-3349
dc.identifier.issn: 0302-9743
dc.identifier.scopus: 2-s2.0-85124256039
dc.identifier.uri: http://hdl.handle.net/11449/234115
dc.language.iso: eng
dc.relation.ispartof: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.source: Scopus
dc.subject: Deep learning
dc.subject: Machine learning
dc.subject: Meta-heuristic optimization
dc.subject: Regularization methods
dc.title: Fine-Tuning Dropout Regularization in Energy-Based Deep Learning
dc.type: Conference paper (Trabalho apresentado em evento)
unesp.author.orcid: 0000-0002-6442-8343 [1]
unesp.author.orcid: 0000-0002-3112-5290 [2]
unesp.author.orcid: 0000-0002-6494-7514 [3]
unesp.campus: Universidade Estadual Paulista (Unesp), Faculdade de Ciências, Bauru
unesp.department: Computação - FC
