
Publication:
Rank-based self-training for graph convolutional networks

dc.contributor.author: Pedronette, Daniel Carlos Guimarães [UNESP]
dc.contributor.author: Latecki, Longin Jan
dc.contributor.institution: Universidade Estadual Paulista (Unesp)
dc.contributor.institution: Temple University
dc.date.accessioned: 2021-06-25T10:46:04Z
dc.date.available: 2021-06-25T10:46:04Z
dc.date.issued: 2021-03-01
dc.description.abstract: Graph Convolutional Networks (GCNs) have been established as a fundamental approach for representation learning on graphs, based on convolution operations over the non-Euclidean domains defined by graph-structured data. GCNs and their variants have achieved state-of-the-art results on classification tasks, especially in semi-supervised learning scenarios. A central challenge in semi-supervised classification is how to exploit as much as possible of the useful information encoded in the unlabeled data. In this paper, we address this issue through a novel self-training approach for improving the accuracy of GCNs on semi-supervised classification tasks. A margin score is used, through a rank-based model, to identify the most confident sample predictions. These predictions are exploited as an expanded labeled set in a second-stage training step. Our model is suitable for different GCN models. Moreover, we also propose a rank aggregation of the labeled sets obtained by different GCN models. The experimental evaluation considers four GCN variations and traditional benchmarks extensively used in the literature. Significant accuracy gains were achieved for all evaluated models, reaching results comparable or superior to the state of the art. The best results were achieved by rank-aggregation self-training on combinations of the four GCN models. (en; an illustrative sketch of the margin-based selection step follows this record)
dc.description.affiliation: Department of Statistics, Applied Mathematics and Computing (DEMAC), São Paulo State University (UNESP)
dc.description.affiliation: Department of Computer and Information Sciences, Temple University
dc.description.affiliationUnesp: Department of Statistics, Applied Mathematics and Computing (DEMAC), São Paulo State University (UNESP)
dc.description.sponsorship: Microsoft Research
dc.description.sponsorship: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
dc.description.sponsorship: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
dc.description.sponsorship: National Science Foundation
dc.description.sponsorshipId: FAPESP: #2017/25908-6
dc.description.sponsorshipId: FAPESP: #2018/15597-6
dc.description.sponsorshipId: CNPq: #308194/2017-9
dc.description.sponsorshipId: National Science Foundation: IIS-1814745
dc.identifier: http://dx.doi.org/10.1016/j.ipm.2020.102443
dc.identifier.citation: Information Processing and Management, v. 58, n. 2, 2021.
dc.identifier.doi: 10.1016/j.ipm.2020.102443
dc.identifier.issn: 0306-4573
dc.identifier.scopus: 2-s2.0-85097135780
dc.identifier.uri: http://hdl.handle.net/11449/206925
dc.language.iso: eng
dc.relation.ispartof: Information Processing and Management
dc.source: Scopus
dc.subject: Graph convolutional networks
dc.subject: Rank model
dc.subject: Self-training
dc.subject: Semi-supervised learning
dc.title: Rank-based self-training for graph convolutional networks (en)
dc.type: Artigo (Article)
dspace.entity.type: Publication
unesp.author.orcid: 0000-0002-2867-4838 [1]
unesp.campus: Universidade Estadual Paulista (UNESP), Instituto de Geociências e Ciências Exatas, Rio Claro (pt)
unesp.department: Estatística, Matemática Aplicada e Computação - IGCE (pt)
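
The abstract above outlines the core mechanism: a trained GCN's predictions are ranked by a margin score, and the top-ranked unlabeled samples are promoted to pseudo-labels that expand the labeled set for a second-stage training step. Below is a minimal sketch of that selection step, assuming the margin is the gap between the two largest softmax probabilities and a PyTorch-style interface; the function names, the selection budget, and the masking strategy are illustrative assumptions, not the authors' released code.

import torch

def margin_scores(logits):
    # Margin = gap between the two largest class probabilities per node.
    probs = torch.softmax(logits, dim=1)
    top2 = probs.topk(2, dim=1).values
    return top2[:, 0] - top2[:, 1]

def expand_labeled_set(logits, labeled_mask, budget):
    # Rank unlabeled nodes by margin and promote the most confident
    # predictions to pseudo-labels for the second-stage training step.
    margins = margin_scores(logits)
    margins = margins.masked_fill(labeled_mask, -1.0)  # keep already-labeled nodes out of the ranking
    selected = margins.argsort(descending=True)[:budget]
    pseudo_labels = logits[selected].argmax(dim=1)
    return selected, pseudo_labels

# Usage (hypothetical shapes): logits is [num_nodes, num_classes] from a trained GCN,
# labeled_mask is a boolean tensor over nodes.
# new_idx, new_y = expand_labeled_set(logits, labeled_mask, budget=200)
# The original labels plus (new_idx, new_y) form the expanded set used to retrain the model.

In the rank-aggregation variant described in the abstract, the ranked lists produced by several GCN models would first be combined (for instance, by summing reciprocal ranks) before selecting the expanded labeled set; that aggregation step is not shown here.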

Files