3. Hierarchical and Interpretable Skill Acquisition
in Multi-task Reinforcement Learning
• ICLR2018 Poster: https://openreview.net/forum?id=SJJQVZW0b
• arXiv: https://arxiv.org/abs/1712.07294
• The arXiv version has regressed (as of 2018/02/08), so the ICLR version is recommended
• Official blog article
• https://einstein.ai/research/hierarchical-reinforcement-learning
• Authors:
• Tianmin Shu
• Univ. of California, Los Angeles
• (Intern at Salesforce Research)
• Caiming Xiong
• Richard Socher
• Salesforce Research
28. References
• Shu, T., Xiong, C., & Socher, R. (2018). Hierarchical and Interpretable Skill
Acquisition in Multi-task Reinforcement Learning. ICLR.
• Tessler, C., Givony, S., Zahavy, T., Mankowitz, D. J., & Mannor, S. (2017). A
Deep Hierarchical Approach to Lifelong Learning in Minecraft. AAAI.
• Berseth, G., Xie, C., Cernek, P., & van de Panne, M. (2018). Progressive
Reinforcement Learning with Distillation for Multi-Skilled Motion Control.
ICLR.
• Hermann, K. M., Hill, F., Green, S., Wang, F., Faulkner, R., Soyer, H., et al.
(2017). Grounded Language Learning in a Simulated 3D World. arXiv preprint.
• Rusu, A. A., Colmenarejo, S. G., Gulcehre, C., Desjardins, G., Kirkpatrick, J.,
Pascanu, R., et al. (2016). Policy Distillation. ICLR.
• Su, P.-H., Budzianowski, P., Ultes, S., Gasic, M., & Young, S. (2017). Sample-
efficient Actor-Critic Reinforcement Learning with Supervised Data for
Dialogue Management. ICLR.