Multi-armed Bandits
Dongmin Lee
RLI Study
Reference
- Reinforcement Learning : An Introduction,
Richard S. Sutton & Andrew G. Barto
(Link)
- 멀티 암드 밴딧(Multi-Armed Bandits), 송호연
(https://brunch.co.kr/@chris-song/62)
Index
1. Why do you need to know Multi-armed Bandits?
2. A k-armed Bandit Problem
3. Simple-average Action-value Methods
4. A simple Bandit Algorithm
5. Gradient Bandit Algorithm
6. Summary
1. Why do you need to know
Multi-armed Bandits(MAB)?
1. Why do you need to know MAB?
- Reinforcement Learning (RL) uses training information that
‘evaluates’ (rather than ‘instructs’) the actions taken
- Evaluative feedback indicates ‘how good the action taken was’
- For simplicity, we study a ‘nonassociative’ setting: only one situation
- Most prior work on evaluative feedback was done in this nonassociative setting
- ‘Nonassociative’ + ‘Evaluative feedback’ -> MAB
- MAB introduces the basic learning methods used in later chapters
1. Why do you need to know MAB?
In my opinion,
- We can’t really understand RL without understanding MAB.
- MAB deals with ‘Exploitation & Exploration’, one of the core ideas in RL.
- The ideas from MAB carry over to the full reinforcement learning problem.
- MAB is also very useful in many practical fields.
2. A k-armed Bandit Problem
2. A k-armed Bandit Problem
Do you know what MAB is?
Do you know what MAB is?
Source : Multi-Armed Bandit
Image source : https://brunch.co.kr/@chris-song/62
- Slot machine -> Bandit
- Slot machine’s lever -> Arm
- N slot machines -> Multi-armed Bandits
Do you know what MAB is?
Source : Multi-Armed Bandit
Image source : https://brunch.co.kr/@chris-song/62
Among the various slot machines,
which machine should I put my money in,
and which lever should I pull?
Do you know what MAB is?
Source : Multi-Armed Bandit
Image source : https://brunch.co.kr/@chris-song/62
How can you make the best return
on your investment?
Do you know what MAB is?
Source : Multi-Armed Bandit
Image source : https://brunch.co.kr/@chris-song/62
MAB is an algorithm created to
optimize the return on investment in slot machines
2. A k-armed Bandit Problem
A K-armed Bandit Problem
A K-armed Bandit Problem
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
t – discrete time step or play number
A_t – action at time t
R_t – reward at time t
q*(a) – true value (expected reward) of action a
A K-armed Bandit Problem
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
In our k-armed bandit problem, each of the k actions has
an expected or mean reward given that that action is selected;
let us call this the value of that action.
3. Simple-average Action-value Methods
3. Simple-average Action-value Methods
Simple-average Method
Simple-average Method
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
By the law of large numbers, Q_t(a) converges to q*(a) as action a is selected infinitely often
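
To make this concrete, here is a minimal sketch (mine, not from the slides) of the simple-average estimate; the function name and the example rewards are illustrative.

```python
import numpy as np

def sample_average(rewards_for_action):
    """Q_t(a): the mean of all rewards received when action a was selected."""
    if len(rewards_for_action) == 0:
        return 0.0  # default estimate before a has ever been selected
    return float(np.mean(rewards_for_action))

# Suppose action a was selected three times and yielded these rewards:
print(sample_average([1.0, 0.0, 2.0]))  # 1.0
```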
3. Simple-average Action-value Methods
Action-value Methods
Action-value Methods
Action-value Methods
- Greedy Action Selection Method
- 𝜀-greedy Action Selection Method
- Upper-Confidence-Bound(UCB) Action Selection Method
Action-value Methods
Action-value Methods
- Greedy Action Selection Method
- 𝜀-greedy Action Selection Method
- Upper-Confidence-Bound(UCB) Action Selection Method
Greedy Action Selection Method
argmax_a f(a) – a value of a at which f(a) takes its maximal value
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
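
As a minimal sketch (my own, not from the slides), greedy selection is just an argmax over the current estimates. Note that np.argmax breaks ties by lowest index, whereas the book breaks ties randomly.

```python
import numpy as np

def greedy_action(q_estimates):
    """A_t = argmax_a Q_t(a): always exploit the current estimates."""
    return int(np.argmax(q_estimates))

print(greedy_action(np.array([0.2, 1.5, -0.3])))  # 1
```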
Greedy Action Selection Method
Greedy action selection always exploits
current knowledge to maximize immediate reward
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Greedy Action Selection Method
Greedy Action Selection Method’s disadvantage
Is it always a good idea to select the greedy action,
exploiting that selection
to maximize the current immediate reward?
Action-value Methods
Action-value Methods
- Greedy Action Selection Method
- 𝜀-greedy Action Selection Method
- Upper-Confidence-Bound(UCB) Action Selection Method
𝜀-greedy Action Selection Method
Exploitation is the right thing to do to maximize
the expected reward on a single step,
but exploration may produce a greater total reward in the long run.
𝜀-greedy Action Selection Method
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
𝜀 – probability of taking a random action in an 𝜀-greedy policy
𝜀-greedy Action Selection Method
Exploitation
Exploration
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
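
A minimal sketch of ε-greedy selection, assuming NumPy’s random generator; the seed is an arbitrary choice for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed

def epsilon_greedy_action(q_estimates, epsilon):
    """With probability epsilon, take a uniformly random action;
    otherwise take the greedy action."""
    if rng.random() < epsilon:                       # Exploration
        return int(rng.integers(len(q_estimates)))
    return int(np.argmax(q_estimates))               # Exploitation

print(epsilon_greedy_action(np.array([0.2, 1.5, -0.3]), epsilon=0.1))
```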
Action-value Methods
Action-value Methods
- Greedy Action Selection Method
- 𝜀-greedy Action Selection Method
- Upper-Confidence-Bound(UCB) Action Selection Method
Upper-Confidence-Bound(UCB) Action Selection Method
ln t – natural logarithm of t
N_t(a) – the number of times that action a has been selected prior to time t
c – a number c > 0 that controls the degree of exploration
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
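
A minimal sketch of the UCB rule above; following the book’s convention, an action with N_t(a) = 0 is considered maximizing and is tried first. The default c = 2.0 is just the value the book uses in its example.

```python
import numpy as np

def ucb_action(q_estimates, action_counts, t, c=2.0):
    """A_t = argmax_a [ Q_t(a) + c * sqrt(ln t / N_t(a)) ]."""
    untried = np.flatnonzero(action_counts == 0)
    if untried.size > 0:
        return int(untried[0])  # never-selected actions are maximizing
    ucb_values = q_estimates + c * np.sqrt(np.log(t) / action_counts)
    return int(np.argmax(ucb_values))
```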
Upper-Confidence-Bound(UCB) Action Selection Method
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
The idea of UCB action selection is that
the square-root term measures the uncertainty (or potential)
in the estimate of a’s value:
the chance that this slot machine
may still be the optimal slot machine
Upper-Confidence-Bound(UCB) Action Selection Method
UCB Action Selection Method’s disadvantage
UCB is more difficult than 𝜀-greedy to extend beyond bandits
to the more general reinforcement learning settings
One difficulty is in dealing with nonstationary problems
Another difficulty is dealing with large state spaces
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
4. A simple Bandit Algorithm
4. A simple Bandit Algorithm
- Incremental Implementation
- Tracking a Nonstationary Problem
4. A simple Bandit Algorithm
- Incremental Implementation
- Tracking a Nonstationary Problem
Incremental Implementation
R_i denotes the reward received after the i-th selection of an action, and
Q_n denotes the estimate of that action’s value after it has been selected n − 1 times
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Incremental Implementation
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Incremental Implementation
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Q_{n+1} = Q_n + (1/n) [R_n − Q_n]
holds even for n = 1,
obtaining Q_2 = R_1 for arbitrary Q_1
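
In code, the recurrence is a one-liner; this small sketch (names are mine) shows that it reproduces the plain sample average incrementally.

```python
def incremental_update(q_n, reward_n, n):
    """NewEstimate = OldEstimate + (1/n) * (Target - OldEstimate)."""
    return q_n + (reward_n - q_n) / n

q = 0.0  # arbitrary Q_1; it is overwritten at n = 1 since Q_2 = R_1
for n, r in enumerate([1.0, 0.0, 2.0], start=1):
    q = incremental_update(q, r, n)
print(q)  # 1.0 -- identical to the plain sample average
```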
Incremental Implementation
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Incremental Implementation
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Incremental Implementation
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
The step size 1/n changes from step to step (↔ a constant step size)
It is suitable for stationary problems
Incremental Implementation
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
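
Putting the pieces together, here is a minimal sketch of the book’s “simple bandit algorithm” pseudocode shown in this section, under assumed testbed settings (k = 10 arms, Gaussian rewards, ε = 0.1, 1000 steps are all illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)
k, epsilon, num_steps = 10, 0.1, 1000
true_values = rng.normal(0.0, 1.0, k)  # q*(a), hidden from the learner

def bandit(a):
    """Reward for pulling arm a: q*(a) plus unit-variance Gaussian noise."""
    return rng.normal(true_values[a], 1.0)

Q = np.zeros(k)             # value estimates, Q_1(a) = 0
N = np.zeros(k, dtype=int)  # selection counts, N(a) = 0
for _ in range(num_steps):
    if rng.random() < epsilon:
        A = int(rng.integers(k))   # explore
    else:
        A = int(np.argmax(Q))      # exploit
    R = bandit(A)
    N[A] += 1
    Q[A] += (R - Q[A]) / N[A]      # incremental sample-average update
print("best arm:", int(np.argmax(true_values)), "| most-pulled arm:", int(np.argmax(N)))
```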
Incremental Implementation
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
The expression [Target − OldEstimate] is an error in the estimate.
The target is presumed to indicate a desirable direction
in which to move, though it may be noisy.
4. A simple Bandit Algorithm
- Incremental Implementation
- Tracking a Nonstationary Problem
Tracking a Nonstationary Problem
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Tracking a Nonstationary Problem
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Why do you think it should be changed from 1/n to α?
Tracking a Nonstationary Problem
Why do you think it should be changed from 1/n to α?
We often encounter RL problems that are effectively nonstationary.
In such cases it makes sense to give more weight
to recent rewards than to long-past rewards.
One of the most popular ways of doing this is to use
a constant step-size parameter.
The step-size parameter 𝛼 ∈ (0,1] is constant.
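
For the nonstationary case the only change is the step size; a minimal sketch (α = 0.1 is an arbitrary illustrative choice):

```python
def constant_alpha_update(q_n, reward_n, alpha=0.1):
    """Q_{n+1} = Q_n + alpha * (R_n - Q_n), with constant alpha in (0, 1].
    Recent rewards are weighted more heavily, so the estimate can
    track a q*(a) that drifts over time."""
    return q_n + alpha * (reward_n - q_n)
```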
Tracking a Nonstationary Problem
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Tracking a Nonstationary Problem
Q_{n+1} = Q_n + α [R_n − Q_n] = ?
= Q_n + αR_n − αQ_n
= αR_n + (1 − α) Q_n
Q_n = αR_{n−1} + (1 − α) Q_{n−1} ?
∴ Q_n = αR_{n−1} + (1 − α) Q_{n−1}
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Tracking a Nonstationary Problem
?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Tracking a Nonstationary Problem
∴ Q_{n−1} = α [R_{n−2} + (1 − α) R_{n−3} + ⋯ + (1 − α)^{n−3} R_1] + (1 − α)^{n−2} Q_1
(1 − α)^2 Q_{n−1} = (1 − α)^2 αR_{n−2} + (1 − α)^3 αR_{n−3} + ⋯ + (1 − α)^{n−1} αR_1 + (1 − α)^n Q_1 ?
= (1 − α)^2 {αR_{n−2} + (1 − α) αR_{n−3} + ⋯ + (1 − α)^{n−3} αR_1 + (1 − α)^{n−2} Q_1}
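
Unrolling the recurrence all the way down to Q_1 gives the book’s exponential recency-weighted average:

```latex
Q_{n+1} = (1-\alpha)^n Q_1 + \sum_{i=1}^{n} \alpha (1-\alpha)^{n-i} R_i
```

The weight on R_i decays exponentially with the number of intervening rewards, and (1 − α)^n plus the sum of the weights equals 1, so this is a proper weighted average.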
Tracking a Nonstationary Problem
Sequences of step-size parameters often converge
very slowly or need considerable tuning
in order to obtain a satisfactory convergence rate.
Thus, step-size parameters should be tuned effectively.
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
5. Gradient Bandit Algorithm
5. Gradient Bandit Algorithm
In addition to a simple bandit algorithm,
there is another way to use the gradient method as a bandit algorithm
5. Gradient Bandit Algorithm
We consider learning a numerical 𝑝𝑟𝑒𝑓𝑒𝑟𝑒𝑛𝑐𝑒
for each action 𝑎, which we denote 𝐻𝑡(𝑎).
The larger the preference, the more often that action is taken,
but the preference has no interpretation in terms of reward.
In other words, just because the preference H_t(a) is large,
the reward is not necessarily large.
However, if the reward is large, it can affect the preference H_t(a)
5. Gradient Bandit Algorithm
The action probabilities are determined according to
a soft-max distribution (i.e., Gibbs or Boltzmann distribution)
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
5. Gradient Bandit Algorithm
π_t(a) – probability of selecting action a at time t
Initially all action preferences are the same
so that all actions have an equal probability of being selected.
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
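
A minimal sketch of the soft-max distribution above; subtracting the maximum before exponentiating is a standard numerical-stability trick, not part of the slide.

```python
import numpy as np

def action_probabilities(H):
    """pi_t(a) = e^{H_t(a)} / sum_b e^{H_t(b)}  (soft-max over preferences)."""
    z = np.exp(H - np.max(H))  # shift for numerical stability; ratios unchanged
    return z / np.sum(z)

H = np.zeros(4)                 # initially all preferences are equal
print(action_probabilities(H))  # [0.25 0.25 0.25 0.25]
```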
5. Gradient Bandit Algorithm
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
There is a natural learning algorithm for this setting
based on the idea of stochastic gradient ascent.
On each step, after selecting action A_t and receiving the reward R_t,
the action preferences are updated.
Selected action A_t
Non-selected actions
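
A minimal sketch of the two update rules above in vectorized form (the names are mine); baseline stands for R̄_t, the running average of rewards.

```python
import numpy as np

def gradient_bandit_update(H, pi, chosen, reward, baseline, alpha=0.1):
    """H_{t+1}(A_t) = H_t(A_t) + alpha*(R_t - Rbar_t)*(1 - pi_t(A_t))
       H_{t+1}(a)   = H_t(a)   - alpha*(R_t - Rbar_t)*pi_t(a)   for a != A_t"""
    H = H.copy()
    advantage = reward - baseline
    H -= alpha * advantage * pi     # applies the 'non-selected' rule to every arm
    H[chosen] += alpha * advantage  # correction gives the chosen arm (1 - pi) net
    return H
```

The baseline itself can be maintained incrementally, e.g. baseline += (reward - baseline) / t.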
5. Gradient Bandit Algorithm
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
1) What does R̄_t mean?
2) What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
5. Gradient Bandit Algorithm
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
1) What does R̄_t mean?
2) What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
?
What does R̄_t mean?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
R̄_t ∈ ℝ is the average of all the rewards.
The R̄_t term serves as a baseline.
If the reward is higher than the baseline,
then the probability of taking A_t in the future is increased,
and if the reward is below the baseline, then the probability is decreased.
The non-selected actions move in the opposite direction.
5. Gradient Bandit Algorithm
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
1) What does R̄_t mean?
2) What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
?
What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
- Stochastic approximation to gradient ascent
in the gradient bandit algorithm
- Expected reward
- Expected reward by the law of total expectation: E[R_t] = E[ E[R_t | A_t] ]
What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
?
What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
?
The gradient sums to zero over all the actions: Σ_x ∂π_t(x)/∂H_t(a) = 0
– as H_t(a) is changed, some actions’ probabilities go up
and some go down, but the sum of the changes must be
zero because the sum of the probabilities is always one.
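
For reference, the standard soft-max derivative behind this identity (not spelled out on the slide):

```latex
\frac{\partial \pi_t(x)}{\partial H_t(a)} = \pi_t(x)\,(\mathbb{1}_{a=x} - \pi_t(a)),
\qquad
\sum_x \frac{\partial \pi_t(x)}{\partial H_t(a)} = \pi_t(a) - \pi_t(a) \sum_x \pi_t(x) = 0
```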
What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
?
E[R_t] = E[ E[R_t | A_t] ]
What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Please refer to page 40 of the slides linked on the reference slide
What does (R_t − R̄_t)(1 − π_t(A_t)) mean?
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
∴
We can substitute a sample of the expectation above
for the performance gradient in H_{t+1}(a) ≈ H_t(a) + α ∂E[R_t]/∂H_t(a)
6. Summary
6. Summary
In this chapter, ‘Exploitation & Exploration‘ is the core idea.
6. Summary
Action-value Methods
- Greedy Action Selection Method
- 𝜀-greedy Action Selection Method
- Upper-Confidence-Bound(UCB) Action Selection Method
6. Summary
A simple bandit algorithm : Incremental Implementation
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
6. Summary
A simple bandit algorithm : Incremental Implementation
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
6. Summary
A simple bandit algorithm : Tracking a Nonstationary Problem
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
6. Summary
A simple bandit algorithm : Tracking a Nonstationary Problem
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
6. Summary
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
Gradient Bandit Algorithm
6. Summary
Source : Reinforcement Learning : An Introduction, Richard S. Sutton & Andrew G. Barto
Image source : https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view
A parameter study of the various bandit algorithms
Reinforcement Learning is LOVE♥
Thank you