Oral Dysplasia Classification by Using Fractal Representation Images and Convolutional Neural Networks
Type
Conference paper
Abstract
Grading oral cavity lesions is a task for specialists that is both difficult and subjective. The challenges in defining patterns can lead to inconsistencies in the diagnosis, often due to color variations in the histological images. Computational systems have emerged as an effective means of aiding specialists in the diagnostic process, and color normalization techniques have been shown to enhance diagnostic accuracy. However, the impact of color normalization on the classification of histological tissues representing dysplasia groups remains an open question. This study presents an approach to classify dysplasia lesions based on ensemble models, fractal representations, and convolutional neural networks (CNNs). Additionally, this work evaluates the influence of color normalization in the preprocessing stage, analyzing the results obtained with the proposed methodology both with and without this stage. The approach was applied to a dataset of 296 histological images categorized into healthy, mild, moderate, and severe oral epithelial dysplasia tissues. The proposed ensemble-based approaches were evaluated with cross-validation, yielding accuracy rates ranging from 96.1% to 98.5% on the non-normalized dataset. This approach can be employed as a supplementary tool in clinical applications, aiding specialists in decision-making regarding lesion classification.
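
The record does not describe the implementation details of the pipeline. The Python sketch below is an illustrative assumption only: it pairs a Reinhard-style color normalization step (one common choice; the paper's actual normalization method is not named here) with a soft-voting ensemble of small CNNs evaluated by stratified k-fold cross-validation on the four dysplasia grades. The fractal representation step is omitted because the record does not specify how it is computed.

# Illustrative sketch only: normalization method, CNN architecture, ensemble
# strategy, and hyperparameters are assumptions, not the authors' published setup.
import numpy as np
from skimage import color
from sklearn.model_selection import StratifiedKFold
import tensorflow as tf

def reinhard_normalize(img_rgb, ref_mean, ref_std):
    """Match per-channel LAB mean/std of an RGB patch to a reference slide."""
    lab = color.rgb2lab(img_rgb)
    mean, std = lab.mean(axis=(0, 1)), lab.std(axis=(0, 1)) + 1e-8
    lab = (lab - mean) / std * ref_std + ref_mean
    return color.lab2rgb(lab)

def build_cnn(input_shape=(128, 128, 3), n_classes=4):
    """Small CNN for the four classes: healthy, mild, moderate, severe dysplasia."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def cross_validate_ensemble(images, labels, n_splits=5, n_members=3):
    """Stratified k-fold evaluation of a soft-voting CNN ensemble."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    fold_acc = []
    for train_idx, test_idx in skf.split(images, labels):
        probs = np.zeros((len(test_idx), 4))
        for _ in range(n_members):  # members differ only by random initialization
            model = build_cnn()
            model.compile(optimizer="adam",
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            model.fit(images[train_idx], labels[train_idx],
                      epochs=10, batch_size=16, verbose=0)
            probs += model.predict(images[test_idx], verbose=0)
        preds = probs.argmax(axis=1)  # soft voting: average the softmax outputs
        fold_acc.append((preds == labels[test_idx]).mean())
    return float(np.mean(fold_acc))

if __name__ == "__main__":
    # Placeholder arrays matching the dataset's size (296 images, 4 classes);
    # real (normalized or non-normalized) histological patches would be loaded here.
    X = np.random.rand(296, 128, 128, 3).astype("float32")
    y = np.random.randint(0, 4, size=296)
    print("Mean cross-validated accuracy:", cross_validate_ensemble(X, y))

In the setting described by the abstract, the CNN inputs would be the fractal representation images (optionally color-normalized) rather than raw RGB patches as used in this placeholder.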
Keywords
Convolutional Neural Network, Dysplasia, Ensemble, Fractal Geometry, Histological Image, Reshape
Language
English
Citation
Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, v. 3, p. 524-531.





