Dueling Network Architectures for
Deep Reinforcement Learning
2016-06-28
Taehoon Kim
Motivation
• Recent advances
• Design improved control and RL algorithms
• Incorporate existing NN into RL methods
• We,
• focus on innovating a NN that is better suited for model-free RL
• Separate
• the representation of state value
• (state-dependent) action advantages
2
Overview
3
[Architecture diagram] A shared convolutional feature-learning module feeds two streams, a state-value function and an advantage function, which an aggregating layer combines into the state-action value function.
Dueling network
• Single Q network with two streams
• Produces separate estimates of the state value function and the advantage function
• without any extra supervision
• which states are valuable?
• without having to learn the effect of each action for each state
4
Saliency map on the Atari game Enduro
5
Value stream: 1. focuses on the horizon, where new cars appear; 2. focuses on the score
Advantage stream: pays little attention when there are no cars in front; attends to the car immediately in front, which makes its choice of action very relevant
Definitions
• Value V(s): how good it is to be in a particular state s
• Advantage A(s, a)
• Policy π
• Return R_t = ∑_{τ=t}^{∞} γ^(τ−t) r_τ, where γ ∈ [0,1]
• Q function Q^π(s, a) = 𝔼[R_t | s_t = s, a_t = a, π]
• State-value function V^π(s) = 𝔼_{a∼π(s)}[Q^π(s, a)] (a small numeric sketch of the return follows below)
6
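As a concrete check of the return definition, here is a minimal Python sketch (illustrative only, not from the talk) that accumulates R_t = ∑_{τ≥t} γ^(τ−t) r_τ backwards over a reward list:

def discounted_return(rewards, gamma=0.99):
    """R_t = sum_{tau >= t} gamma**(tau - t) * r_tau, computed backwards from the end."""
    ret = 0.0
    for r in reversed(rewards):   # accumulate from the last reward towards t
        ret = r + gamma * ret
    return ret

# Three rewards of 1 with gamma = 0.9: 1 + 0.9 + 0.81 = 2.71
print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))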
Bellman equation
• Recursively, with dynamic programming
• Q^π(s, a) = 𝔼_{s'}[ r + γ 𝔼_{a'∼π(s')}[Q^π(s', a')] | s, a, π ]
• Optimal Q*(s, a) = max_π Q^π(s, a)
• Deterministic policy a = argmax_{a'∈𝒜} Q*(s, a')
• Optimal V*(s) = max_a Q*(s, a)
• Bellman equation: Q*(s, a) = 𝔼_{s'}[ r + γ max_{a'} Q*(s', a') | s, a ] (see the toy backup sketch below)
7
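To make the Bellman optimality backup concrete, the minimal sketch below iterates Q*(s, a) = 𝔼_{s'}[ r + γ max_{a'} Q*(s', a') ] to a fixed point on a tiny hypothetical MDP (the transition matrix P and reward matrix R are made up for illustration):

import numpy as np

# Hypothetical 2-state, 2-action MDP: P[s, a, s'] are transition probabilities,
# R[s, a] are expected immediate rewards (both invented for this example).
P = np.array([[[0.9, 0.1], [0.0, 1.0]],
              [[1.0, 0.0], [0.1, 0.9]]])
R = np.array([[0.0, 1.0],
              [2.0, 0.0]])
gamma = 0.9

Q = np.zeros((2, 2))
for _ in range(500):
    # Bellman optimality backup: Q*(s,a) = E_s'[ r + gamma * max_a' Q*(s',a') ]
    Q = R + gamma * P @ Q.max(axis=1)

print(Q)                   # converged optimal action values
print(Q.argmax(axis=1))    # deterministic greedy policy a = argmax_a' Q*(s, a')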
Advantage function
• Bellman equation: Q*(s, a) = 𝔼_{s'}[ r + γ max_{a'} Q*(s', a') | s, a ]
• Advantage function A^π(s, a) = Q^π(s, a) − V^π(s)
• 𝔼_{a∼π(s)}[A^π(s, a)] = 0
8
Advantage function
• Value V(s): how good it is to be in a particular state s
• Q(s, a): the value of choosing a particular action a when in state s
• A = Q − V gives a relative measure of the importance of each action
9
Deep Q-network (DQN)
• Model Free
• states and rewards are produced by the environment
• Off-policy
• states and rewards are obtained with a behavior policy (ε-greedy)
• which is different from the online policy that is being learned
10
Deep Q-network: 1) Target network
• Deep Q-network Q(s, a; θ)
• Target network Q(s, a; θ⁻)
• L_i(θ_i) = 𝔼_{s,a,r,s'}[ (y_i^DQN − Q(s, a; θ_i))² ]
• y_i^DQN = r + γ max_{a'} Q(s', a'; θ⁻)
• Freeze the target parameters θ⁻ for a fixed number of iterations (see the sketch below)
• ∇_{θ_i} L_i(θ_i) = 𝔼_{s,a,r,s'}[ (y_i^DQN − Q(s, a; θ_i)) ∇_{θ_i} Q(s, a; θ_i) ]
11
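A minimal PyTorch-style sketch of this target-network loss (function and variable names are illustrative, not from the paper's code; the `done` flag for terminal states is an added assumption):

import torch
import torch.nn as nn

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Regress Q(s, a; theta_i) towards y_i^DQN = r + gamma * max_a' Q(s', a'; theta-)."""
    s, a, r, s_next, done = batch                          # tensors sampled from replay memory
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; theta_i)
    with torch.no_grad():                                  # the target is treated as a constant
        y = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    return nn.functional.mse_loss(q_sa, y)

# Freezing: copy the online parameters into the target network every N updates, e.g.
#   target_net.load_state_dict(q_net.state_dict())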
Deep Q-network: 2) Experience memory
• Experience e_t = (s_t, a_t, r_t, s_{t+1})
• Accumulates a dataset 𝒟_t = {e_1, e_2, …, e_t} (a minimal buffer sketch follows below)
• L_i(θ_i) = 𝔼_{(s,a,r,s')∼𝒰(𝒟)}[ (y_i^DQN − Q(s, a; θ_i))² ]
12
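A minimal replay-memory sketch in Python (the capacity and interface are illustrative choices, not prescribed by the slides):

import random
from collections import deque

class ReplayMemory:
    """Uniform experience replay: store e_t = (s_t, a_t, r_t, s_{t+1}) and sample i.i.d."""
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)   # D_t; the oldest experiences drop out

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        # (s, a, r, s') ~ U(D): uniform sampling breaks temporal correlations in the updates
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)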
Double Deep Q-network (DDQN)
• In DQN
• the max operator uses the same values to both select and evaluate an action
• this can lead to overoptimistic value estimates
• y_i^DQN = r + γ max_{a'} Q(s', a'; θ⁻)
• To mitigate this problem, DDQN uses
• y_i^DDQN = r + γ Q(s', argmax_{a'} Q(s', a'; θ_i); θ⁻) (sketched below)
13
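A sketch of the DDQN target in PyTorch-style code (names and the `done` handling are illustrative): the online network selects the action and the target network evaluates it.

import torch

def ddqn_target(q_net, target_net, r, s_next, done, gamma=0.99):
    """y_i^DDQN = r + gamma * Q(s', argmax_a' Q(s', a'; theta_i); theta-)."""
    with torch.no_grad():
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)        # selection: online network
        q_eval = target_net(s_next).gather(1, a_star).squeeze(1)  # evaluation: target network
        return r + gamma * (1.0 - done) * q_eval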
Prioritized Replay (Schaul et al., 2016)
• To increase the replay probability of experience tuples
• that have a high expected learning progress, measured via the proxy of absolute TD-error
• transitions with high absolute TD-errors are sampled more often, with importance-sampling weights correcting the induced bias
• Led to faster learning and to a better final policy quality (a rough sampling sketch follows below)
14
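A rough sketch of proportional prioritization (the exponent `alpha` and the `eps` offset are illustrative hyperparameters; the full method in Schaul et al. also anneals the importance-sampling weights):

import numpy as np

def sample_prioritized(td_errors, batch_size, alpha=0.6, eps=1e-6):
    """Sample transition indices with probability proportional to |TD-error|**alpha."""
    priorities = (np.abs(np.asarray(td_errors)) + eps) ** alpha
    probs = priorities / priorities.sum()
    return np.random.choice(len(priorities), size=batch_size, p=probs)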
Dueling Network Architecture : Key insight
• For many states
• it is unnecessary to estimate the value of each action choice
• for example, moving left or right only matters when a collision is imminent
• in most states, the choice of action has no effect on what happens
• For bootstrapping-based algorithms
• the estimation of state value is of great importance for every state
• bootstrapping: updating estimates on the basis of other estimates
15
Formulation
• A^π(s, a) = Q^π(s, a) − V^π(s)
• V^π(s) = 𝔼_{a∼π(s)}[Q^π(s, a)]
• A^π(s, a) = Q^π(s, a) − 𝔼_{a∼π(s)}[Q^π(s, a)]
• 𝔼_{a∼π(s)}[A^π(s, a)] = 0
• For a deterministic policy, a* = argmax_{a'∈𝒜} Q(s, a')
• Q(s, a*) = V(s) and A(s, a*) = 0
16
Formulation
• Dueling network = CNN + fully-connected layers that output
• a scalar V(s; θ, β)
• an |𝒜|-dimensional vector A(s, a; θ, α)
• It is tempting to construct the aggregating module as
• Q(s, a; θ, α, β) = V(s; θ, β) + A(s, a; θ, α)
17
Aggregation module 1: simple add
• But Q(s, a; θ, α, β) is only a parameterized estimate of the true Q-function
• Unidentifiable
• given Q, V and A cannot be recovered uniquely
• Force A to be zero at the chosen action
• Q(s, a; θ, α, β) = V(s; θ, β) + (A(s, a; θ, α) − max_{a'∈𝒜} A(s, a'; θ, α))
18
Aggregation module 2: subtract max
• For a* = argmax_{a'∈𝒜} Q(s, a'; θ, α, β) = argmax_{a'∈𝒜} A(s, a'; θ, α)
• we obtain Q(s, a*; θ, α, β) = V(s; θ, β)
• i.e. Q(s, a*) = V(s)
• An alternative module replaces the max operator with an average
• Q(s, a; θ, α, β) = V(s; θ, β) + (A(s, a; θ, α) − (1/|𝒜|) ∑_{a'} A(s, a'; θ, α))
19
Aggregation module 3: subtract average
• An alternative module replaces the max operator with an average
• Q(s, a; θ, α, β) = V(s; θ, β) + (A(s, a; θ, α) − (1/|𝒜|) ∑_{a'} A(s, a'; θ, α))
• This loses the original semantics of V and A
• because they are now off-target by a constant, (1/|𝒜|) ∑_{a'} A(s, a'; θ, α)
• But it increases the stability of the optimization
• the advantages only need to change as fast as the mean
• instead of having to compensate for any change to the optimal action's advantage, max_{a'∈𝒜} A(s, a'; θ, α)
20
Aggregation module 3: subtract average
• Subtracting the mean works best
• it helps identifiability
• it does not change the relative rank of A (and hence of Q)
• The aggregation module is part of the network, not an algorithmic step
• training the dueling network requires only back-propagation (a sketch of the full dueling head follows below)
21
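Putting the aggregation together, a PyTorch-style sketch of the dueling head with mean subtraction (layer sizes and names are illustrative, not the paper's exact configuration):

import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Two streams on top of a shared feature module, combined by the aggregating layer."""
    def __init__(self, feature_dim, num_actions, hidden=512):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))               # V(s; theta, beta)
        self.advantage = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, num_actions))  # A(s, a; theta, alpha)

    def forward(self, features):
        v = self.value(features)        # (batch, 1)
        a = self.advantage(features)    # (batch, |A|)
        # Q = V + (A - mean_a' A): identifiable, and trained with plain back-propagation
        return v + a - a.mean(dim=1, keepdim=True)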
Compatibility
• Because the output of the dueling network is a Q function, it can be combined with
• DQN
• DDQN
• SARSA
• On-policy, off-policy, whatever
22
Definition: Generalized policy iteration
23
Experiments: Policy evaluation
• Useful for evaluating network architectures
• devoid of confounding factors such as the choice of exploration strategy and the interaction between policy improvement and policy evaluation
• The experiments employ temporal-difference learning
• optimizing y_i = r + γ 𝔼_{a'∼π(s')}[Q(s', a'; θ_i)] (a sketch of this target follows below)
• Corridor environment
• the exact Q^π(s, a) can be computed separately for all (s, a) ∈ 𝒮×𝒜
24
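A sketch of that evaluation target for an ε-greedy π (the value of ε, the `done`-free form, and the names are illustrative assumptions):

import torch

def policy_eval_target(q_net, r, s_next, epsilon=0.001, gamma=0.99):
    """y_i = r + gamma * E_{a'~pi(s')}[Q(s', a'; theta_i)] for an epsilon-greedy pi."""
    with torch.no_grad():
        q_next = q_net(s_next)                                   # (batch, |A|)
        # epsilon-greedy expectation: (1 - eps) on the greedy action, eps spread uniformly
        expected = (1 - epsilon) * q_next.max(dim=1).values + epsilon * q_next.mean(dim=1)
        return r + gamma * expected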
Experiments: Policy evaluation
• Test for 5, 10, and 20 actions (first tackled by DDQN)
• The stream V(s; θ, β) learns a general value that is shared across many similar actions at s
• hence leading to faster convergence
25
Performance gap increasing with the number of actions
Experiments: General Atari Game-Playing
• Similar to DQN (Mnih et al., 2015), with fully-connected layers added for the two streams
• Rescale the combined gradient entering the last convolutional layer by 1/√2, which mildly increases stability
• Clip gradients to a norm less than or equal to 10 (a clipping sketch follows below)
• clipping is not a standard practice in RL
26
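A sketch of the clipping step in PyTorch (the optimizer and network objects are illustrative; the comment shows one way the 1/√2 rescale could be wired in, not the paper's implementation):

import math
import torch

def clipped_update(loss, net, optimizer, max_norm=10.0):
    """One gradient step with the global gradient norm clipped to <= max_norm."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=max_norm)
    optimizer.step()

# The 1/sqrt(2) rescale of the combined gradient entering the last convolutional layer
# could be applied with a backward hook on that layer's output tensor, e.g.
#   last_conv_features.register_hook(lambda g: g / math.sqrt(2))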
Performance: Up to 30 no-op random starts
• Duel Clip > Single Clip > Single
• Good job Dueling network
27
Performance: Human start
28
• Agents do not necessarily have to generalize well to play the Atari games
• they can achieve good performance by simply remembering sequences of actions
• To obtain a more robust measure, use 100 starting points sampled from a human expert's trajectory
• from each starting point, evaluate up to 108,000 frames
• again, good job Dueling network
Combining with Prioritized Experience Replay
• Prioritization and the dueling architecture address very different
aspects of the learning process
• Although orthogonal in their objectives, these extensions
(prioritization, dueling and gradient clipping) interact in subtle ways
• Prioritization interacts with gradient clipping
• sampling transitions with high absolute TD-errors more often leads to gradients with higher norms, so the hyperparameters were re-tuned
29
References
1. [Wang, 2015] Wang, Z., de Freitas, N., & Lanctot, M. (2015). Dueling network architectures for
deep reinforcement learning. arXiv preprint arXiv:1511.06581.
2. [Van, 2015] Van Hasselt, H., Guez, A., & Silver, D. (2015). Deep reinforcement learning with
double Q-learning. CoRR, abs/1509.06461.
3. [Schaul, 2015] Schaul, T., Quan, J., Antonoglou, I., & Silver, D. (2015). Prioritized experience
replay. arXiv preprint arXiv:1511.05952.
4. [Sutton, 1998] Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction (Vol. 1, No. 1). Cambridge: MIT Press.
30
