Download ISI English Article No. 112675
English title
Policy invariance under reward transformations for multi-objective reinforcement learning
Article code: 112675
Publication year: 2017
English article: 42-page PDF
Source

Publisher: Elsevier - Science Direct

Journal: Neurocomputing, Volume 263, 8 November 2017, Pages 60-73

English keywords
Reinforcement learning; Multi-objective; Potential-based; Reward shaping; Multi-agent systems
Article preview

English abstract

Reinforcement Learning (RL) is a powerful and well-studied Machine Learning paradigm, where an agent learns to improve its performance in an environment by maximising a reward signal. In multi-objective Reinforcement Learning (MORL) the reward signal is a vector, where each component represents the performance on a different objective. Reward shaping is a well-established family of techniques that have been successfully used to improve the performance and learning speed of RL agents in single-objective problems. The basic premise of reward shaping is to add an additional shaping reward to the reward naturally received from the environment, to incorporate domain knowledge and guide an agent’s exploration. Potential-Based Reward Shaping (PBRS) is a specific form of reward shaping that offers additional guarantees. In this paper, we extend the theoretical guarantees of PBRS to MORL problems. Specifically, we provide theoretical proof that PBRS does not alter the true Pareto front in both single- and multi-agent MORL. We also contribute the first published empirical studies of the effect of PBRS in single- and multi-agent MORL problems.
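
For readers unfamiliar with PBRS, the sketch below illustrates the idea in a multi-objective setting: the standard shaping term F(s, s') = γΦ(s') − Φ(s) (Ng, Harada & Russell, 1999) is added componentwise to the environment's vector reward. This is a minimal sketch assuming one potential function per objective; the names and values here (shaped_reward, GAMMA, the placeholder potentials) are illustrative assumptions and are not taken from the paper itself.

import numpy as np

GAMMA = 0.99  # discount factor (assumed value)

def shaped_reward(reward_vec, phi_s, phi_s_next, gamma=GAMMA):
    """Add a potential-based shaping term to each component of a reward vector.

    reward_vec : np.ndarray, one entry per objective, from the environment.
    phi_s      : np.ndarray, potential of the current state, per objective.
    phi_s_next : np.ndarray, potential of the successor state, per objective.
    """
    # F(s, s') = gamma * phi(s') - phi(s), computed per objective
    shaping = gamma * phi_s_next - phi_s
    return reward_vec + shaping

# Example: a two-objective transition where the potential on objective 0 rises,
# nudging exploration toward that state without altering the Pareto front.
r = np.array([1.0, -0.5])        # environment reward vector (illustrative)
phi = np.array([0.2, 0.0])       # potential of current state (assumed)
phi_next = np.array([0.6, 0.0])  # potential of next state (assumed)

print(shaped_reward(r, phi, phi_next))  # shaped vector reward

Because the shaping term telescopes over any trajectory, it cancels out of long-run return comparisons; this is the mechanism behind the single-objective policy-invariance guarantee that the paper extends to the multi-objective (Pareto front) case.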