Publication:
GPI-tree search: algorithms for decision-time planning with the general policy improvement theorem
| cris.virtual.orcid | 0000-0002-9358-8565 | |
| cris.virtual.orcid | 0000-0003-0351-1714 | |
| cris.virtual.orcid | 0000-0001-6300-6993 | |
| cris.virtual.orcid | 0000-0002-4812-4841 | |
| cris.virtual.orcid | 0000-0002-2969-3133 | |
| dc.contributor.author | Bagot, Louis | |
| dc.contributor.author | D'eer, Lynn | |
| dc.contributor.author | Latré, Steven | |
| dc.contributor.author | De Schepper, Tom | |
| dc.contributor.author | Mets, Kevin | |
| dc.date.accessioned | 2026-01-08T13:17:49Z | |
| dc.date.available | 2026-01-08T13:17:49Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | In Reinforcement Learning, Unsupervised Skill Discovery addresses learning several policies for downstream task transfer. Once these skills are learnt, how best to use and combine them remains an open problem. The General Policy Improvement theorem (GPI) yields a policy stronger than any individual skill by selecting, at each state, the highest-valued policy, generally evaluated with Successor Features. However, the GPI policy cannot mix and combine the skills at decision time to formulate stronger plans. In this paper, we propose to adopt a model-based setting to make such planning possible, and formally show that a forward search improves on the GPI policy and on any shallower search, up to an approximation term. We argue for decision-time planning and design a family of algorithms, GPI-Tree Search algorithms, that combine Monte Carlo Tree Search (MCTS) with GPI. These algorithms leverage the skills and Q-value priors of the GPI framework to guide and improve the search, and we support the different design choices with visual intuition. Our experiments show that the resulting policies are much stronger than the GPI policy alone, even under approximation; they can also improve beyond the linear constraint of Successor Features. | |
| dc.identifier.doi | 10.1007/s00521-025-11304-4 | |
| dc.identifier.issn | 1433-3058 | |
| dc.identifier.uri | https://imec-publications.be/handle/20.500.12860/58627 | |
| dc.provenance.editstepuser | greet.vanhoof@imec.be | |
| dc.publisher | Springer | |
| dc.source.beginpage | 11404 | |
| dc.source.endpage | 11411 | |
| dc.source.issue | 23 | |
| dc.source.journal | Neural Computing and Applications | |
| dc.source.numberofpages | 8 | |
| dc.source.volume | 37 | |
| dc.title | GPI-tree search: algorithms for decision-time planning with the general policy improvement theorem | |
| dc.type | Journal article | |
| dspace.entity.type | Publication | |
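Note on the abstract: the GPI policy it describes acts, in each state, by taking the action whose value is highest across all skills, with each skill's Q-values typically computed from Successor Features as Q^{pi_i}(s, a) = psi^{pi_i}(s, a) · w. The following is a minimal sketch of that selection rule under these assumptions; the array names, shapes, and successor-feature layout are illustrative and not the authors' implementation.

```python
# Minimal sketch of the GPI action-selection rule described in the abstract,
# assuming skill Q-values come from successor features: Q^{pi_i}(s,a) = psi^{pi_i}(s,a) . w
# Array names and shapes below are illustrative assumptions.
import numpy as np

def gpi_action(psi_sa: np.ndarray, w: np.ndarray) -> int:
    """Return the GPI action for one state.

    psi_sa : successor features per skill and action,
             assumed shape (n_skills, n_actions, d_features).
    w      : task-reward weights, shape (d_features,).
    """
    q = psi_sa @ w                       # per-skill Q-values, shape (n_skills, n_actions)
    return int(q.max(axis=0).argmax())   # max over skills, then argmax over actions

# Toy usage: 3 skills, 4 actions, 5-dimensional features.
rng = np.random.default_rng(0)
psi = rng.normal(size=(3, 4, 5))
w = rng.normal(size=5)
print(gpi_action(psi, w))
```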