Publication: MaxDropout: Deep neural network regularization based on maximum output values
dc.contributor.author | do Santos, Claudio Filipi Goncalves | |
dc.contributor.author | Colombo, Danilo | |
dc.contributor.author | Roder, Mateus [UNESP] | |
dc.contributor.author | Papa, João Paulo [UNESP] | |
dc.contributor.institution | Universidade Federal de São Carlos (UFSCar) | |
dc.contributor.institution | Petróleo Brasileiro - Petrobras | |
dc.contributor.institution | Universidade Estadual Paulista (UNESP) | |
dc.date.accessioned | 2022-05-01T06:02:36Z | |
dc.date.available | 2022-05-01T06:02:36Z | |
dc.date.issued | 2020-01-01 | |
dc.description.abstract | Different techniques have emerged in the deep learning scenario, such as Convolutional Neural Networks, Deep Belief Networks, and Long Short-Term Memory Networks, to cite a few. In lockstep, regularization methods, which aim to prevent overfitting by penalizing the weight connections or turning off some units, have also been widely studied. In this paper, we present a novel approach called MaxDropout, a regularizer for deep neural network models that works in a supervised fashion by removing (shutting off) the most prominent (i.e., most active) neurons in each hidden layer. The model forces the less-activated units to learn more representative information, thus providing sparsity. Regarding the experiments, we show that it is possible to improve existing neural networks and obtain better results when Dropout is replaced by MaxDropout. The proposed method was evaluated on image classification, achieving results comparable to existing regularizers, such as Cutout and RandomErasing, and also improving the accuracy of neural networks that use Dropout by replacing the existing layer with MaxDropout. | en |
dc.description.affiliation | Federal University of São Carlos - UFSCar | |
dc.description.affiliation | Petróleo Brasileiro - Petrobras | |
dc.description.affiliation | São Paulo State University - UNESP | |
dc.description.affiliationUnesp | São Paulo State University - UNESP | |
dc.description.sponsorship | Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) | |
dc.description.sponsorship | Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) | |
dc.description.sponsorshipId | FAPESP: #2013/07375-0 | |
dc.description.sponsorshipId | FAPESP: #2014/12236-1 | |
dc.description.sponsorshipId | FAPESP: #2017/25908-6 | |
dc.description.sponsorshipId | FAPESP: #2019/07825-1 | |
dc.description.sponsorshipId | CNPq: #307066/2017-7 | |
dc.description.sponsorshipId | CNPq: #427968/2018-6 | |
dc.format.extent | 2671-2676 | |
dc.identifier | http://dx.doi.org/10.1109/ICPR48806.2021.9412733 | |
dc.identifier.citation | Proceedings - International Conference on Pattern Recognition, p. 2671-2676. | |
dc.identifier.doi | 10.1109/ICPR48806.2021.9412733 | |
dc.identifier.issn | 1051-4651 | |
dc.identifier.scopus | 2-s2.0-85110459046 | |
dc.identifier.uri | http://hdl.handle.net/11449/233274 | |
dc.language.iso | eng | |
dc.relation.ispartof | Proceedings - International Conference on Pattern Recognition | |
dc.source | Scopus | |
dc.title | MaxDropout: Deep neural network regularization based on maximum output values | en |
dc.type | Work presented at an event (conference paper) | |
dspace.entity.type | Publication | |
unesp.campus | Universidade Estadual Paulista (UNESP), Faculdade de Ciências, Bauru | pt |
unesp.department | Computação - FC | pt |
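
The abstract above describes MaxDropout as shutting off the most active neurons in each hidden layer, rather than random ones as in standard Dropout. Below is a minimal NumPy sketch of that idea, based only on the abstract's description: the function name, the min-max normalization, and the threshold rule `norm > 1 - rate` are our assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def max_dropout(x, rate=0.3, training=True):
    """Sketch of MaxDropout (assumed formulation, not the paper's exact one).

    Activations are min-max normalized to [0, 1]; units whose normalized
    value exceeds (1 - rate) -- i.e., the most active ones -- are zeroed
    during training. At inference time the input passes through unchanged.
    """
    if not training or rate <= 0:
        return x
    # Normalize activations to [0, 1] (epsilon guards against a flat input).
    norm = (x - x.min()) / (x.max() - x.min() + 1e-8)
    # Keep only units below the threshold; the most active ones are shut off.
    mask = norm <= (1.0 - rate)
    return x * mask
```

For example, with activations `[0.1, 0.5, 1.0, 0.2]` and `rate=0.3`, only the maximally active unit (normalized value 1.0) crosses the 0.7 threshold and is zeroed; the rest pass through. Unlike Dropout's uniform random mask, the mask here is a deterministic function of the activation magnitudes.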