Convolutional Neural Networks and Ensembles for Visually Impaired Aid
| dc.contributor.author | Breve, Fabricio [UNESP] | |
| dc.contributor.institution | Universidade Estadual Paulista (UNESP) | |
| dc.date.accessioned | 2025-04-29T20:10:04Z | |
| dc.date.issued | 2023-01-01 | |
| dc.description.abstract | Recent surveys show that smartphone-based computer vision tools for visually impaired individuals often rely on outdated computer vision algorithms. Deep-learning approaches have been explored, but many require high-end or specialized hardware that is not practical for users. Therefore, developing deep-learning systems that can make inferences using only the smartphone is desirable. This paper presents a comprehensive study of 25 different convolutional neural network (CNN) architectures to tackle the challenge of identifying obstacles in images captured by a smartphone positioned at chest height for visually impaired individuals. A transfer learning approach is employed, with the CNN models initialized with weights pre-trained on the vast ImageNet dataset. The study employs k-fold cross-validation with k = 10 and five repetitions to ensure the robustness of the results. Various configurations are explored for each CNN architecture, including different optimizers (Adam and RMSprop), freezing or fine-tuning convolutional layer weights, and different learning rates for convolutional and dense layers. Moreover, CNN ensembles are investigated, where multiple instances of the same or different CNN architectures are combined to enhance the overall performance. The highest accuracy achieved by an individual CNN is 94.56% using EfficientNetB4, surpassing the previous best result of 92.11%. With the use of ensembles, the accuracy is further improved to 96.55% using multiple instances of EfficientNetB4, EfficientNetB0, and MobileNet. Overall, the study contributes to the development of advanced deep-learning models that can enhance the mobility and independence of visually impaired individuals. | en |
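The abstract reports that ensembles of multiple CNN instances outperform any single model. This record does not state how the models' outputs are combined; a common choice for CNN classifiers is soft voting, i.e. averaging the per-class probabilities across models and taking the argmax. A minimal sketch with hypothetical toy probabilities (not the paper's actual models or data):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Soft voting: average class probabilities from several models,
    then pick the class with the highest mean probability per sample."""
    avg = np.mean(np.stack(prob_list), axis=0)  # (n_samples, n_classes)
    return np.argmax(avg, axis=1)

# Toy example: three "model" outputs for 4 samples over 3 classes.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
labels = ensemble_predict(probs)  # one predicted class per sample
```

In the paper's setting the three probability arrays would come from trained instances of, e.g., EfficientNetB4, EfficientNetB0, and MobileNet evaluated on the same images; whether the authors use this exact averaging rule or another combination scheme is an assumption here.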
| dc.description.affiliation | São Paulo State University, SP | |
| dc.description.affiliationUnesp | São Paulo State University, SP | |
| dc.description.sponsorship | Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) | |
| dc.description.sponsorshipId | FAPESP: 2016/05669-4 | |
| dc.format.extent | 520-534 | |
| dc.identifier | http://dx.doi.org/10.1007/978-3-031-36805-9_34 | |
| dc.identifier.citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v. 13956 LNCS, p. 520-534. | |
| dc.identifier.doi | 10.1007/978-3-031-36805-9_34 | |
| dc.identifier.issn | 1611-3349 | |
| dc.identifier.issn | 0302-9743 | |
| dc.identifier.scopus | 2-s2.0-85164944784 | |
| dc.identifier.uri | https://hdl.handle.net/11449/307675 | |
| dc.language.iso | eng | |
| dc.relation.ispartof | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | |
| dc.source | Scopus | |
| dc.subject | Computer Vision | |
| dc.subject | Convolutional Neural Networks | |
| dc.subject | Deep Learning | |
| dc.subject | Visually Impaired Aid | |
| dc.title | Convolutional Neural Networks and Ensembles for Visually Impaired Aid | en |
| dc.type | Conference paper (Trabalho apresentado em evento) | pt |
| dspace.entity.type | Publication | |
| unesp.author.orcid | 0000-0002-1123-9784[1] |

