Title: From explanations to feature selection: assessing SHAP values as feature selection mechanism
Authors: Marcilio Jr, Wilson E. [UNESP]; Eler, Danilo M. [UNESP]
Publisher: IEEE
Date available: 2021-06-25
Date issued: 2020-01-01
Citation: 2020 33rd Sibgrapi Conference On Graphics, Patterns And Images (SIBGRAPI 2020). New York: IEEE, p. 340-347, 2020.
ISSN: 1530-1834
URI: http://hdl.handle.net/11449/210335
Language: English
Type: Conference paper (Trabalho apresentado em evento)
DOI: 10.1109/SIBGRAPI51738.2020.00053
Web of Science ID: WOS:000651203300045

Abstract: Explainability has become one of the most discussed topics in machine learning research in recent years. Although many methods have been proposed to explain the predictions of black-box models, little discussion has addressed the pre-processing steps of the machine learning development pipeline, such as feature selection. In this work, we evaluate SHAP, a game-theoretic approach for explaining the output of any machine learning model, as a feature selection mechanism. Our experiments show that, besides explaining a model's decisions, it achieves better results than three commonly used feature selection algorithms.
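The abstract's core idea (ranking features by their game-theoretic attribution and keeping the top-scoring ones) can be illustrated with a minimal sketch. This is not the paper's implementation: instead of the SHAP library's sampling-based approximation, it computes exact Shapley values by brute force over all feature subsets, using the R² of a least-squares fit as the value function. The function names (`shapley_values`, `select_top_k`) and the R² value function are illustrative assumptions, not taken from the paper.

```python
import itertools
import math

import numpy as np


def r2(X, y, cols):
    """Value function v(S): R^2 of a least-squares fit on feature subset S."""
    if not cols:
        return 0.0  # assume the empty coalition explains nothing
    A = np.column_stack([X[:, list(cols)], np.ones(len(X))])  # add intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()


def shapley_values(X, y):
    """Exact Shapley value of each feature: the weighted average marginal
    contribution of the feature over all subsets of the other features."""
    n = X.shape[1]
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in itertools.combinations(others, k):
                weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += weight * (r2(X, y, S + (i,)) - r2(X, y, S))
    return phi


def select_top_k(X, y, k):
    """Feature selection: keep the k features with the largest Shapley value."""
    phi = shapley_values(X, y)
    return sorted(np.argsort(phi)[::-1][:k].tolist())


# Synthetic illustration: y depends on features 0 and 1, feature 2 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3 * X[:, 0] + 2 * X[:, 1] + 0.1 * rng.normal(size=200)
selected = select_top_k(X, y, 2)
```

Brute-force enumeration is exponential in the number of features, which is exactly why SHAP's approximations matter in practice; this sketch is only feasible for a handful of features.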