
Better trees: an empirical study on hyperparameter tuning of classification decision tree induction algorithms

dc.contributor.author: Gomes Mantovani, Rafael
dc.contributor.author: Horváth, Tomáš
dc.contributor.author: Rossi, André L. D. [UNESP]
dc.contributor.author: Cerri, Ricardo
dc.contributor.author: Barbon Junior, Sylvio
dc.contributor.author: Vanschoren, Joaquin
dc.contributor.author: Carvalho, André C. P. L. F. de
dc.contributor.institution: Federal University of Technology - Paraná (UTFPR)
dc.contributor.institution: Pavol Jozef Šafárik University (UPJS)
dc.contributor.institution: ELTE - Eötvös Loránd University
dc.contributor.institution: Universidade Estadual Paulista (UNESP)
dc.contributor.institution: Universidade Federal de São Carlos (UFSCar)
dc.contributor.institution: University of Trieste (UniTS)
dc.contributor.institution: Eindhoven University of Technology (TU/e)
dc.contributor.institution: Universidade de São Paulo (USP)
dc.date.accessioned: 2025-04-29T18:05:31Z
dc.date.issued: 2024-05-01
dc.description.abstract [en]: Machine learning algorithms often contain many hyperparameters whose values affect the predictive performance of the induced models in intricate ways. Due to the high number of possibilities for these hyperparameter configurations and their complex interactions, it is common to use optimization techniques to find settings that lead to high predictive performance. However, insights into efficiently exploring this vast space of configurations and dealing with the trade-off between predictive and runtime performance remain challenging. Furthermore, there are cases where the default hyperparameter values are already a suitable configuration. Additionally, for many reasons, including model validation and compliance with new legislation, there is an increasing interest in interpretable models, such as those created by decision tree (DT) induction algorithms. This paper provides a comprehensive approach for investigating the effects of hyperparameter tuning for the two most often used DT induction algorithms, CART and C4.5. DT induction algorithms present high predictive performance and interpretable classification models, though many hyperparameters need to be adjusted. Experiments were carried out with different tuning strategies to induce models and to evaluate hyperparameter relevance using 94 classification datasets from OpenML. The experimental results show that different hyperparameter profiles for the tuning of each algorithm provide statistically significant improvements in most of the datasets for CART, but only in one-third for C4.5. Although different algorithms may present different tuning scenarios, the tuning techniques generally required few evaluations to find accurate solutions. Furthermore, the best technique for all the algorithms was Irace. Finally, we found that tuning a specific small subset of hyperparameters is a good alternative for achieving optimal predictive performance.
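The abstract's central idea, tuning a small subset of DT hyperparameters and comparing against the defaults, can be sketched in scikit-learn, whose `DecisionTreeClassifier` is a CART implementation. This is only an illustration under my own assumptions: the paper evaluated techniques such as Irace on 94 OpenML datasets, whereas the sketch below uses `RandomizedSearchCV` as a stand-in tuner, a single built-in dataset, and a hand-picked search space.

```python
# Illustrative sketch (not the paper's protocol): random-search tuning of a
# few CART hyperparameters vs. the default configuration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small subset of hyperparameters, echoing the paper's finding that
# tuning a few key hyperparameters can be enough.
space = {
    "max_depth": [None, 3, 5, 10, 20],
    "min_samples_split": [2, 5, 10, 20],
    "min_samples_leaf": [1, 2, 5, 10],
    "criterion": ["gini", "entropy"],
}

# Few evaluations (n_iter=25), consistent with the observation that
# tuners often find accurate solutions quickly.
search = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=0),
    space, n_iter=25, cv=5, random_state=0,
)
search.fit(X_tr, y_tr)

# Compare tuned model against the default hyperparameter values.
default_acc = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
tuned_acc = search.best_estimator_.score(X_te, y_te)
print(f"default={default_acc:.3f} tuned={tuned_acc:.3f}")
```

As the abstract notes, the defaults are sometimes already suitable, so the tuned score is not guaranteed to be higher on any single dataset; conclusions like the paper's require many datasets and statistical testing.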
dc.description.affiliation: Federal University of Technology - Paraná (UTFPR) Campus of Apucarana, PR
dc.description.affiliation: Faculty of Science Institute of Computer Science Pavol Jozef Šafárik University (UPJS)
dc.description.affiliation: Faculty of Informatics ELTE - Eötvös Loránd University
dc.description.affiliation: São Paulo State University (Unesp) Campus of Itapeva, SP
dc.description.affiliation: Department of Computer Science Federal University of São Carlos (UFSCar), SP
dc.description.affiliation: Department of Engineering and Architecture University of Trieste (UniTS)
dc.description.affiliation: Eindhoven University of Technology (TU/e)
dc.description.affiliation: Institute of Mathematics and Computer Sciences (ICMC) University of São Paulo (USP), SP
dc.description.affiliationUnesp: São Paulo State University (Unesp) Campus of Itapeva, SP
dc.description.sponsorship: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
dc.description.sponsorship: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
dc.description.sponsorship: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
dc.description.sponsorshipId: FAPESP: 2012/23114-9
dc.description.sponsorshipId: FAPESP: 2015/03986-0
dc.description.sponsorshipId: CNPq: 409371/2021-1
dc.format.extent: 1364-1416
dc.identifier: http://dx.doi.org/10.1007/s10618-024-01002-5
dc.identifier.citation: Data Mining and Knowledge Discovery, v. 38, n. 3, p. 1364-1416, 2024.
dc.identifier.doi: 10.1007/s10618-024-01002-5
dc.identifier.issn: 1573-756X
dc.identifier.issn: 1384-5810
dc.identifier.scopus: 2-s2.0-85183764370
dc.identifier.uri: https://hdl.handle.net/11449/297074
dc.language.iso: eng
dc.relation.ispartof: Data Mining and Knowledge Discovery
dc.source: Scopus
dc.subject: CART
dc.subject: Decision tree induction algorithms
dc.subject: Hyperparameter profile
dc.subject: Hyperparameter tuning
dc.subject: J48
dc.title [en]: Better trees: an empirical study on hyperparameter tuning of classification decision tree induction algorithms
dc.type: Article
dspace.entity.type: Publication
relation.isOrgUnitOfPublication: 60983e98-80f1-40b9-89b7-a00760584c8b
relation.isOrgUnitOfPublication.latestForDiscovery: 60983e98-80f1-40b9-89b7-a00760584c8b
unesp.campus: Universidade Estadual Paulista (UNESP), Instituto de Ciências e Engenharia, Itapeva