Influence of Pixel Perturbation on eXplainable Artificial Intelligence Methods

dc.contributor.author: da Feitosa, Juliana Costa [UNESP]
dc.contributor.author: Roder, Mateus [UNESP]
dc.contributor.author: Papa, João Paulo [UNESP]
dc.contributor.author: Brega, José Remo Ferreira [UNESP]
dc.contributor.institution: Universidade Estadual Paulista (UNESP)
dc.date.accessioned: 2025-04-29T20:16:19Z
dc.date.issued: 2024-01-01
dc.description.abstract: The current scenario around Artificial Intelligence (AI) has demanded increasingly transparent explanations of existing models. The use of eXplainable Artificial Intelligence (XAI) has been considered a solution in the search for explainability. As such, XAI methods can be used to verify the influence of adverse scenarios, such as pixel perturbation, on AI models for segmentation. This paper presents experiments performed with fish images of the Pacu species to determine the influence of pixel perturbation through the following explainable methods: Grad-CAM, Saliency Map, Layer Grad-CAM, and CNN Filters. The pixels chosen for perturbation were those the model considered most important when segmenting the input image regions. From the existing pixel perturbation techniques, the images were subjected to three main ones: white noise, black noise, and random noise. From the results obtained, it was observed that the Grad-CAM method behaved differently for each perturbation technique tested, while the CNN Filters method showed more stability in the variation of the image averages. The Saliency Map was the least sensitive to the three types of perturbation, as it required fewer iterations. Furthermore, of the perturbation techniques tested, black noise showed the least ability to impact segmentation. Thus, it is concluded that the perturbation methods influence the outcome of the explainable models tested and interfere with these models in different ways. It is suggested that the experiments presented here be replicated on other AI models, with other explainability methods, and with other existing perturbation techniques to gather more evidence about this influence and, from that, quantify which combination of XAI method and pixel perturbation is best for a given problem. (en)
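As an illustration of the protocol the abstract describes (overwriting the pixels an XAI method ranks as most important with white, black, or random noise before re-running segmentation), here is a minimal Python sketch; the function name, the top-5% threshold, and the [0, 1] image range are illustrative assumptions, not the authors' code.

# Hypothetical sketch of attribution-guided pixel perturbation.
# `attribution` is an H x W importance map, e.g. from Grad-CAM or a
# saliency map; `image` is an H x W x C float image in [0, 1].
import numpy as np

def perturb_top_pixels(image: np.ndarray,
                       attribution: np.ndarray,
                       mode: str = "white",
                       fraction: float = 0.05,
                       seed: int = 0) -> np.ndarray:
    """Replace the `fraction` most-attributed pixels of `image` with noise."""
    rng = np.random.default_rng(seed)
    h, w = attribution.shape
    k = max(1, int(fraction * h * w))

    # Select the k most important pixels according to the XAI method.
    flat = attribution.ravel()
    top = np.argpartition(flat, -k)[-k:]
    mask = np.zeros(h * w, dtype=bool)
    mask[top] = True
    mask = mask.reshape(h, w)

    perturbed = image.copy()
    if mode == "white":
        perturbed[mask] = 1.0          # white noise: saturate to max intensity
    elif mode == "black":
        perturbed[mask] = 0.0          # black noise: zero out the pixels
    elif mode == "random":
        # random noise: uniform values per channel on the masked pixels
        perturbed[mask] = rng.uniform(0.0, 1.0,
                                      size=(int(mask.sum()), image.shape[2]))
    else:
        raise ValueError(f"unknown mode: {mode}")
    return perturbed

A study like the paper's would then compare the model's segmentation of the original and perturbed images for each noise mode and each attribution method.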
dc.description.affiliation: Department of Computing, School of Science, Sao Paulo State University (UNESP)
dc.description.affiliationUnesp: Department of Computing, School of Science, Sao Paulo State University (UNESP)
dc.format.extent: 624-631
dc.identifier: http://dx.doi.org/10.5220/0012424800003660
dc.identifier.citation: Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, v. 3, p. 624-631.
dc.identifier.doi: 10.5220/0012424800003660
dc.identifier.issn: 2184-4321
dc.identifier.issn: 2184-5921
dc.identifier.scopus: 2-s2.0-85191355781
dc.identifier.uri: https://hdl.handle.net/11449/309700
dc.language.iso: eng
dc.relation.ispartof: Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
dc.source: Scopus
dc.subject: Artificial Intelligence
dc.subject: eXplainable Artificial Intelligence
dc.subject: Pixel Perturbation
dc.title: Influence of Pixel Perturbation on eXplainable Artificial Intelligence Methods (en)
dc.type: Trabalho apresentado em evento [conference paper] (pt)
dspace.entity.type: Publication
unesp.author.orcid: 0009-0005-6935-1022 [1]
unesp.author.orcid: 0000-0002-3112-5290 [2]
unesp.author.orcid: 0000-0002-6494-7514 [3]
unesp.author.orcid: 0000-0002-2275-4722 [4]
