
Convolutional Neural Networks and Ensembles for Visually Impaired Aid

dc.contributor.author: Breve, Fabricio [UNESP]
dc.contributor.institution: Universidade Estadual Paulista (UNESP)
dc.date.accessioned: 2025-04-29T20:10:04Z
dc.date.issued: 2023-01-01
dc.description.abstract: Recent surveys show that smartphone-based computer vision tools for visually impaired individuals often rely on outdated computer vision algorithms. Deep-learning approaches have been explored, but many require high-end or specialized hardware that is not practical for users. Therefore, developing deep-learning systems that can make inferences using only the smartphone is desirable. This paper presents a comprehensive study of 25 different convolutional neural network (CNN) architectures for identifying obstacles in images captured by a smartphone positioned at chest height for visually impaired individuals. A transfer-learning approach is employed, with the CNN models initialized with weights pre-trained on the large ImageNet dataset. The study employs k-fold cross-validation with k = 10 and five repetitions to ensure the robustness of the results. Various configurations are explored for each CNN architecture, including different optimizers (Adam and RMSprop), freezing or fine-tuning convolutional layer weights, and different learning rates for the convolutional and dense layers. Moreover, CNN ensembles are investigated, in which multiple instances of the same or different CNN architectures are combined to enhance overall performance. The highest accuracy achieved by an individual CNN is 94.56% using EfficientNetB4, surpassing the previous best result of 92.11%. With the use of ensembles, the accuracy is further improved to 96.55% using multiple instances of EfficientNetB4, EfficientNetB0, and MobileNet. Overall, the study contributes to the development of advanced deep-learning models that can enhance the mobility and independence of visually impaired individuals.
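The abstract's best result comes from an ensemble combining the outputs of several trained CNNs. A minimal sketch of one common combination rule, soft voting (averaging the per-class probabilities of the members), is shown below; the member names, probability values, and the averaging rule itself are illustrative assumptions, since this record does not state the paper's exact combination scheme.

```python
import numpy as np

# Hypothetical softmax outputs of three ensemble members for 4 test images
# and 3 obstacle classes; in the paper these would come from trained
# EfficientNetB4, EfficientNetB0, and MobileNet instances.
preds_model_a = np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.8, 0.1],
                          [0.3, 0.3, 0.4],
                          [0.6, 0.3, 0.1]])
preds_model_b = np.array([[0.6, 0.3, 0.1],
                          [0.2, 0.7, 0.1],
                          [0.2, 0.5, 0.3],
                          [0.5, 0.4, 0.1]])
preds_model_c = np.array([[0.8, 0.1, 0.1],
                          [0.1, 0.6, 0.3],
                          [0.1, 0.6, 0.3],
                          [0.7, 0.2, 0.1]])

def soft_vote(*member_preds):
    """Average the class-probability outputs of the ensemble members
    and return the predicted class index for each sample."""
    avg = np.mean(member_preds, axis=0)  # shape: (n_samples, n_classes)
    return avg.argmax(axis=1)

labels = soft_vote(preds_model_a, preds_model_b, preds_model_c)
print(labels)  # one predicted obstacle class per image
```

Soft voting lets a confident member outweigh two lukewarm ones, which is why it often edges out hard majority voting when the members' probability estimates are well calibrated.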
dc.description.affiliation: São Paulo State University, SP
dc.description.affiliationUnesp: São Paulo State University, SP
dc.description.sponsorship: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
dc.description.sponsorshipId: FAPESP: 2016/05669-4
dc.format.extent: 520-534
dc.identifier: http://dx.doi.org/10.1007/978-3-031-36805-9_34
dc.identifier.citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v. 13956 LNCS, p. 520-534.
dc.identifier.doi: 10.1007/978-3-031-36805-9_34
dc.identifier.issn: 1611-3349
dc.identifier.issn: 0302-9743
dc.identifier.scopus: 2-s2.0-85164944784
dc.identifier.uri: https://hdl.handle.net/11449/307675
dc.language.iso: eng
dc.relation.ispartof: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.source: Scopus
dc.subject: Computer Vision
dc.subject: Convolutional Neural Networks
dc.subject: Deep Learning
dc.subject: Visually Impaired Aid
dc.title: Convolutional Neural Networks and Ensembles for Visually Impaired Aid
dc.type: Trabalho apresentado em evento (conference paper)
dspace.entity.type: Publication
unesp.author.orcid: 0000-0002-1123-9784
