Data-selective Advantage-weighted Method for Offline Reinforcement Learning
- Authors
- 김한결
- Date Issued
- 2023
- Publisher
- Pohang University of Science and Technology (POSTECH)
- Abstract
- Behavior cloning (BC) has been considered a practical policy constraint for alleviating the value overestimation problem caused by out-of-distribution (OOD) actions in the offline reinforcement learning (RL) setting. However, it has been reported that BC often suffers from insignificant policy updates due to low-quality data. To overcome this problem, this paper proposes a data-selective approach that prescreens favorable data before learning a policy. Data with positive advantage is first selected using the advantage function, and an advantage-weighted method is then applied to further refine the policy. Finally, we present a new RL+BC algorithm that combines RL with the proposed method, together with practical implementation techniques to resolve the quality-quantity dilemma. The proposed algorithm outperforms state-of-the-art algorithms on continuous-control offline RL benchmarks.
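The abstract describes a two-step procedure: prescreen transitions with positive advantage, then apply advantage weighting to the retained data. The following is a minimal sketch of that idea under common assumptions (exponential advantage weights with a temperature `beta`, as in typical advantage-weighted regression); the function name, signature, and weighting scheme are illustrative and not taken from the thesis itself.

```python
import numpy as np

def select_and_weight(states, actions, advantages, beta=1.0):
    """Data-selective advantage weighting (illustrative sketch).

    1. Keep only transitions whose estimated advantage is positive
       (the prescreening step described in the abstract).
    2. Assign exponential advantage weights to the retained data,
       for use in a weighted behavior-cloning loss.
    """
    mask = advantages > 0.0                 # prescreen favorable data
    sel_states = states[mask]
    sel_actions = actions[mask]
    weights = np.exp(beta * advantages[mask])   # advantage weighting
    weights = weights / weights.sum()           # normalize over kept data
    return sel_states, sel_actions, weights
```

A higher `beta` concentrates the weighted BC loss on the highest-advantage transitions, while `beta → 0` recovers uniform weighting over the prescreened set.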
- URI
- http://postech.dcollection.net/common/orgView/200000690720
https://oasis.postech.ac.kr/handle/2014.oak/118424
- Article Type
- Thesis