
Publication:
Experience generalization for multi-agent reinforcement learning

dc.contributor.author: Pegoraro, Renê [UNESP]
dc.contributor.author: Costa, AHR
dc.contributor.author: Ribeiro, CHC
dc.contributor.institution: Universidade Estadual Paulista (Unesp)
dc.date.accessioned: 2014-05-20T13:25:56Z
dc.date.available: 2014-05-20T13:25:56Z
dc.date.issued: 2001-01-01
dc.description.abstract: On-line learning methods have been applied successfully in multi-agent systems to achieve coordination among agents. Learning in multi-agent systems implies a non-stationary scenario as perceived by the agents, since the behavior of other agents may change as they simultaneously learn how to improve their actions. Non-stationary scenarios can be modeled as Markov Games, which can be solved using the Minimax-Q algorithm, a combination of Q-learning (a Reinforcement Learning (RL) algorithm that directly learns an optimal control policy) and the Minimax algorithm. However, finding optimal control policies using any RL algorithm (Q-learning and Minimax-Q included) can be very time consuming. Aiming to improve the learning time of Q-learning, we considered the QS-algorithm, in which a single experience can update more than a single action value by using a spreading function. In this paper, we contribute a Minimax-QS algorithm, which combines the Minimax-Q algorithm and the QS-algorithm. We conduct a series of empirical evaluations of the algorithm in a simplified simulator of the soccer domain. We show that even using a very simple domain-dependent spreading function, the performance of the learning algorithm can be improved.
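
To make the spreading idea concrete, the following is a minimal Python sketch of a QS-style update, under assumptions of our own (toy state and action counts, a hypothetical spreading function sigma over actions); it illustrates the technique described in the abstract, not the authors' implementation. A single experience (s, a, r, s') updates the values of several similar actions at once, weighted by sigma; plain Q-learning is the special case where sigma credits only the executed action, and Minimax-QS would replace the max over next-state actions with the minimax value of the matrix game at the next state.

import numpy as np

n_states, n_actions = 10, 4   # toy sizes (assumption)
alpha, gamma = 0.1, 0.9       # learning rate and discount (assumption)

Q = np.zeros((n_states, n_actions))

def sigma(a, a_prime):
    # Hypothetical spreading function: full credit to the executed
    # action, partial credit to "adjacent" actions. The paper uses a
    # domain-dependent spreading function instead.
    if a == a_prime:
        return 1.0
    return 0.3 if abs(a - a_prime) == 1 else 0.0

def qs_update(s, a, r, s_next):
    # QS update: spread the temporal-difference target over similar
    # actions. Minimax-QS would compute `target` from the minimax
    # value of the matrix game at s_next instead of a plain max.
    target = r + gamma * np.max(Q[s_next])
    for a_prime in range(n_actions):
        w = sigma(a, a_prime)
        if w > 0.0:
            Q[s, a_prime] += alpha * w * (target - Q[s, a_prime])

# One experience updates action 2 fully and actions 1 and 3 partially.
qs_update(s=0, a=2, r=1.0, s_next=1)
print(Q[0])   # -> [0.   0.03 0.1  0.03]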
dc.description.affiliation: Univ Estadual Paulista, Dept Computacao, BR-17033360 Bauru, SP, Brazil
dc.description.affiliationUnesp: Univ Estadual Paulista, Dept Computacao, BR-17033360 Bauru, SP, Brazil
dc.format.extent: 233-239
dc.identifier: http://dx.doi.org/10.1109/SCCC.2001.972652
dc.identifier.citation: SCCC 2001: XXI International Conference of the Chilean Computer Science Society, Proceedings. Los Alamitos: IEEE Computer Soc, p. 233-239, 2001.
dc.identifier.doi: 10.1109/SCCC.2001.972652
dc.identifier.lattes: 7114174203705251
dc.identifier.orcid: 0000-0003-0314-8660
dc.identifier.uri: http://hdl.handle.net/11449/8273
dc.identifier.wos: WOS:000172674500027
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE), Computer Soc
dc.relation.ispartof: SCCC 2001: XXI International Conference of the Chilean Computer Science Society, Proceedings
dc.rights.accessRights: Open access
dc.source: Web of Science
dc.title: Experience generalization for multi-agent reinforcement learning
dc.type: Conference paper
dcterms.license: http://www.ieee.org/publications_standards/publications/rights/rights_policies.html
dcterms.rightsHolder: IEEE Computer Soc
dspace.entity.type: Publication
unesp.author.lattes: 7114174203705251 [1]
unesp.author.orcid: 0000-0003-0314-8660 [1]
unesp.campus: Universidade Estadual Paulista (UNESP), Faculdade de Ciências, Bauru
unesp.department: Computação - FC

Files

License Bundle

Name:
license.txt
Size:
1.71 KB
Format:
Item-specific license agreed upon to submission
Description: