Introduction to behavior based recommendation system


Material presented at Tokyo Web Mining Meetup, March 26, 2016.

The source code is here:
https://github.com/hamukazu/tokyo.webmining.2016-03-26

These are the slides presented at the Tokyo Web Mining meetup (March 2016); all of them are in English.

Slide 1: Introduction to Algorithms for Behavior Based Recommendation

Tokyo Web Mining Meetup
March 26, 2016
Kimikazu Kato
Silver Egg Technology Co., Ltd.
Slide 2: About myself

加藤公一 Kimikazu Kato
Twitter: @hamukazu
LinkedIn: http://linkedin.com/in/kimikazukato

Chief Scientist at Silver Egg Technology
Ph.D. in computer science, master's degree in mathematics
Experience in numerical computation and mathematical algorithms, especially:
- geometric computation, computer graphics
- partial differential equations, parallel computation, GPGPU
- mathematical programming
Now specializes in machine learning, especially recommendation systems
Slide 3: About our company

Silver Egg Technology
Established: 1998
CEO: Tom Foley
Main services: recommendation systems, online advertisement
Major clients: QVC, Senshukai (Bellemaison), Tsutaya

We provide recommendation systems to Japan's leading web sites.
Slide 4: Table of Contents

Introduction
Types of recommendation
Evaluation metrics
Algorithms
Conclusion
Slide 5: Caution

This presentation includes:
- state-of-the-art algorithms for recommendation systems,

but does NOT include:
- any information about the core algorithm of Silver Egg Technology.
Slide 6: Recommendation System

"Recommender systems or recommendation systems (sometimes replacing 'system' with a synonym such as platform or engine) are a subclass of information filtering system that seek to predict the 'rating' or 'preference' that user would give to an item." — Wikipedia

In this talk, we focus on collaborative filtering methods, which only utilize users' behavior, activity, and preferences.

Other methods include:
- content-based methods
- methods using demographic data
- hybrids of the above
Slide 7: Rating Prediction Problem

user\movie   W   X   Y   Z
    A        5   4   1   4
    B            4
    C                2   3
    D        1   4       ?

Given rating information for some user/movie pairs,
we want to predict the rating for an unknown user/movie pair.
Slide 8: Item Prediction Problem

user\item   W   X   Y   Z
    A       1   1   1   1
    B           1
    C               1
    D       1   ?   1   ?

Given "who bought what" information (user/item pairs),
we want to predict which items are likely to be bought by each user.
Slide 9: Input/Output of the Systems

Rating prediction
- Input: a set of ratings for user/item pairs
- Output: a map from user/item pairs to predicted ratings

Item prediction
- Input: a set of user/item pairs as shopping data, and an integer k
- Output: the top k items for each user which are most likely to be bought by him/her
Slide 10: Evaluation Metrics for Recommendation Systems

Rating prediction
- Root Mean Squared Error (RMSE): the square root of the mean of the squared errors

Item prediction
- Precision: (# of recommended and purchased) / (# of recommended)
- Recall: (# of recommended and purchased) / (# of purchased)
Slide 11: RMSE of Rating Prediction

Some user/item pairs are randomly chosen to be hidden.

user\movie   W   X   Y   Z
    A        5   4   1   4
    B            4
    C                2   3
    D        1   4       ?

If a hidden rating is predicted as 3.1 but the actual value is 4, the squared error is $|3.1 - 4|^2 = 0.9^2$.
Take the mean of the squared errors over all the hidden pairs, and then take the square root:

$\sqrt{\frac{1}{|\mathrm{hidden}|} \sum_{(u,i) \in \mathrm{hidden}} (\mathrm{predicted}_{ui} - \mathrm{actual}_{ui})^2}$
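As a concrete illustration (my own sketch, not part of the original deck), the metric in NumPy; `predicted` is assumed to be a dict mapping (user, item) pairs to predicted ratings, and `hidden` the held-out triples:

    import numpy as np

    def rmse(predicted, hidden):
        # hidden: list of (user, item, actual_rating) triples chosen to be hidden
        errors = [(predicted[(u, i)] - actual) ** 2 for u, i, actual in hidden]
        return np.sqrt(np.mean(errors))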
Slide 12: Precision/Recall of Item Prediction

If three items are recommended:
- 2 out of 3 recommended items are actually bought: the precision is 2/3.
- 2 out of 4 bought items are recommended: the recall is 2/4.

These are denoted by prec@3 and recall@3.
Ex. prec@5 = 3/5, recall@5 = 3/4
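A minimal sketch of prec@k and recall@k (my own helper, not from the talk); `recommended` is a ranked list of item ids and `bought` the set of items the user actually purchased:

    def prec_recall_at_k(recommended, bought, k):
        # hits = recommended items that were actually bought
        hits = len(set(recommended[:k]) & bought)
        return hits / k, hits / len(bought)  # (prec@k, recall@k)

    # Slide example: 2 of the top 3 recommendations fall in a basket of 4 items,
    # so prec@3 = 2/3 and recall@3 = 2/4.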
Slide 13: ROC and AUC

# of recommendations   1   2   3   4   5   6   7   8   9   10
# of whites            1   1   1   2   2   3   4   5   5   6
# of blacks            0   1   2   2   3   3   3   3   4   4

Divide the white and black counts by the total numbers of whites and blacks respectively, and plot the values in the xy-plane.
Slide 14:

This curve is called the "ROC curve." The area under this curve is called the "AUC."
A higher AUC is better (max = 1).
The AUC is often used in academia, but for practical purposes...
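To make the construction concrete, a small NumPy sketch (mine, not from the deck) that turns the cumulative counts of slide 13 into ROC points and integrates the AUC with the trapezoidal rule, assuming the list exhausts all 6 whites and 4 blacks so the curve ends at (1, 1):

    import numpy as np

    whites = np.array([1, 1, 1, 2, 2, 3, 4, 5, 5, 6])  # cumulative relevant items
    blacks = np.array([0, 1, 2, 2, 3, 3, 3, 3, 4, 4])  # cumulative irrelevant items

    # Normalize by the totals and prepend the origin to get the ROC points.
    x = np.concatenate([[0.0], blacks / blacks[-1]])  # false positive rate
    y = np.concatenate([[0.0], whites / whites[-1]])  # true positive rate

    auc = np.trapz(y, x)  # area under the ROC curve
    print(auc)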
Slide 15: Netflix Prize

"The Netflix Prize was an open competition for the best collaborative filtering algorithm to predict user ratings for films, based on previous ratings without any other information about the users or films, i.e. without the users or the films being identified except by numbers assigned for the contest." — Wikipedia

In short, an open competition for preference prediction. Closed in 2009.
Slide 16: Outline of the Winner's Algorithm

Refer to the blog post by E. Chen:
http://blog.echen.me/2011/10/24/winning-the-netflix-prize-a-summary/

Digest of the methods:
- neighborhood methods
- matrix factorization
- restricted Boltzmann machines
- regression
- regularization
- ensemble methods
Slide 17: Notations

Number of users: $n$
Set of users: $U = \{1, 2, \ldots, n\}$
Number of items (movies): $m$
Set of items (movies): $I = \{1, 2, \ldots, m\}$
Input matrix: $A$ (an $n \times m$ matrix)
Slide 18: Matrix Factorization

Based on the assumption that each item is described by a small number of latent factors
Each rating is expressed as a linear combination of the latent factors
Achieved good performance in the Netflix Prize

Find matrices $X \in \mathrm{Mat}(f, n)$ and $Y \in \mathrm{Mat}(f, m)$ with $f \ll n, m$ such that $A \approx X^T Y$.
Slide 19:

Find $X$ and $Y$ that maximize $p(X, Y \mid A, \sigma)$, where

$p(A \mid X, Y, \sigma) = \prod_{A_{ui} \neq 0} N(A_{ui} \mid X_u^T Y_i, \sigma)$

$p(X \mid \sigma_X) = \prod_u N(X_u \mid 0, \sigma_X I)$

$p(Y \mid \sigma_Y) = \prod_i N(Y_i \mid 0, \sigma_Y I)$
Slide 20:

According to Bayes' theorem,

$p(X, Y \mid A, \sigma) = p(A \mid X, Y, \sigma)\, p(X \mid \sigma_X)\, p(Y \mid \sigma_Y) \times \mathrm{const.}$

Thus,

$-\log p(X, Y \mid A, \sigma, \sigma_X, \sigma_Y) = \sum_{A_{ui} \neq 0} (A_{ui} - X_u^T Y_i)^2 + \lambda_X \|X\|_{\mathrm{Fro}}^2 + \lambda_Y \|Y\|_{\mathrm{Fro}}^2 + \mathrm{const.}$

where $\|\cdot\|_{\mathrm{Fro}}$ means the Frobenius norm.

How can this be computed? Use MCMC. See [Salakhutdinov et al., 2008].

Once $X$ and $Y$ are determined, let $\tilde{A} := X^T Y$; the prediction for $A_{ui}$ is estimated by $\tilde{A}_{ui}$.
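The talk defers the full Bayesian computation to MCMC; as a much simpler baseline, here is a hedged NumPy sketch that optimizes the MAP objective above by stochastic gradient descent instead (the hyperparameter names `f`, `lr`, `lam` are mine):

    import numpy as np

    def mf_map_sgd(A, f=16, lr=0.01, lam=0.1, n_epochs=50, seed=0):
        # Minimize the sum over observed A_ui of (A_ui - X_u . Y_i)^2
        # plus Frobenius regularization (the objective on slide 20).
        rng = np.random.default_rng(seed)
        n, m = A.shape
        X = 0.1 * rng.standard_normal((n, f))  # user factors, rows are X_u
        Y = 0.1 * rng.standard_normal((m, f))  # item factors, rows are Y_i
        observed = list(zip(*np.nonzero(A)))   # zeros are treated as "unknown"
        for _ in range(n_epochs):
            rng.shuffle(observed)
            for u, i in observed:
                err = A[u, i] - X[u] @ Y[i]
                xu = X[u].copy()
                X[u] += lr * (err * Y[i] - lam * X[u])
                Y[i] += lr * (err * xu - lam * Y[i])
        return X, Y  # predicted rating for (u, i): X[u] @ Y[i]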
Slide 21: Difference between Rating and Shopping

Rating

user\movie   W   X   Y   Z
    A        5   4   1   4
    B            4
    C                2   3
    D        1   4       ?

- Includes negative feedback: "1" means "boring"
- Zero means "unknown"

Shopping (browsing)

user\item   W   X   Y   Z
    A       1   1   1   1
    B           1
    C               1
    D       1   ?   1   ?

- Includes no negative feedback
- Zero means "unknown" or "negative"
- More degrees of freedom

Consequently, an algorithm effective for the rating matrix is not necessarily effective for the shopping matrix.
Slide 22: Solutions

- Adding a constraint to the optimization problem
- Changing the objective function itself
Slide 23: Adding a Constraint

The problem has too many degrees of freedom.
A desirable characteristic is that many elements of the product should be zero.
Assume that a certain ratio of the zero elements of the input matrix remains zero after the optimization [Sindhwani et al., 2010].
This experimentally outperforms the "zero-as-negative" method.
Slide 24: One-class Matrix Completion

[Sindhwani et al., 2010]
Introduces variables $p_{ui}$ to relax the problem.

Minimize

$\sum_{A_{ui} \neq 0} (A_{ui} - X_u^T Y_i)^2 + \lambda_X \|X\|_{\mathrm{Fro}}^2 + \lambda_Y \|Y\|_{\mathrm{Fro}}^2$
$\quad + \sum_{A_{ui} = 0} \left[ p_{ui} (0 - X_u^T Y_i)^2 + (1 - p_{ui}) (1 - X_u^T Y_i)^2 \right]$
$\quad + T \sum_{A_{ui} = 0} \left[ -p_{ui} \log p_{ui} - (1 - p_{ui}) \log (1 - p_{ui}) \right]$

subject to

$\frac{1}{|\{ A_{ui} \mid A_{ui} = 0 \}|} \sum_{A_{ui} = 0} p_{ui} = r$
Slide 25:

(The same objective as on slide 24.)

Intuitive explanation:
- $p_{ui}$ means how likely the $(u, i)$-element is to be zero.
- The second term is the error of the estimation, weighted by the $p_{ui}$'s.
- The third term is the entropy of the distribution.
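To make the roles of the three terms concrete, here is a small NumPy function (my own sketch; the paper additionally optimizes over $X$, $Y$, and the $p_{ui}$'s under the mean constraint) that simply evaluates the relaxed objective:

    import numpy as np

    def one_class_objective(A, X, Y, P, lam_x, lam_y, T):
        # X: f x n user factors, Y: f x m item factors,
        # P: n x m matrix of p_ui in (0, 1) for the zero cells of A.
        S = X.T @ Y                      # current estimate of the matrix
        obs, zero = (A != 0), (A == 0)
        fit = np.sum((A[obs] - S[obs]) ** 2)
        reg = lam_x * np.sum(X ** 2) + lam_y * np.sum(Y ** 2)
        # error over unknown cells, weighted by how likely each is 0 or 1
        relax = np.sum(P[zero] * (0 - S[zero]) ** 2
                       + (1 - P[zero]) * (1 - S[zero]) ** 2)
        # entropy of the p_ui's, scaled by the temperature T
        ent = T * np.sum(-P[zero] * np.log(P[zero])
                         - (1 - P[zero]) * np.log(1 - P[zero]))
        return fit + reg + relax + ent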
Slide 26: Implicit Sparseness Constraint: SLIM (Elastic Net)

In the regression model, adding an L1 term makes the solution sparse:

$\min_w \left[ \frac{1}{2n} \|Xw - y\|_2^2 + \frac{\lambda(1 - \rho)}{2} \|w\|_2^2 + \lambda \rho |w|_1 \right]$

A similar idea is used for matrix factorization [Ning et al., 2011]:

Minimize
$\|A - AW\|_{\mathrm{Fro}}^2 + \frac{\lambda(1 - \rho)}{2} \|W\|_{\mathrm{Fro}}^2 + \lambda \rho |W|_1$
subject to
$\mathrm{diag}\, W = 0$
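A minimal SLIM-style sketch (my own, not from the paper) that reuses scikit-learn's ElasticNet to solve the column-wise subproblems; the original paper uses its own coordinate-descent solver and also constrains W to be non-negative, which `positive=True` mirrors here:

    import numpy as np
    from sklearn.linear_model import ElasticNet

    def fit_slim(A, alpha=0.1, l1_ratio=0.5):
        # Learn an item-item weight matrix W with diag(W) = 0 by solving
        # one elastic-net regression per item column.
        n_items = A.shape[1]
        W = np.zeros((n_items, n_items))
        for j in range(n_items):
            X = A.copy()
            X[:, j] = 0.0  # item j must not predict itself (diag W = 0)
            model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                               positive=True, fit_intercept=False)
            model.fit(X, A[:, j])
            W[:, j] = model.coef_
        return W

    # Scores for user u are A[u] @ W; recommend the highest-scoring unbought items.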
Slide 27: Ranking Prediction

Another strategy for shopping prediction:
a "learn from the order" approach.
Predict whether X is more likely to be bought than Y, rather than the probability for X or Y.
Slide 28: Bayesian Personalized Ranking

[Rendle et al., 2009]

Considers the matrix factorization model, but the elements are updated according to observations of "orders".
The parameters are the same as in usual matrix factorization, but the objective function is different.

Consider a total order $>_u$ for each $u \in U$; $i >_u j$ ($i, j \in I$) means "the user $u$ is more likely to buy $i$ than $j$."
The objective is to calculate $p(i >_u j)$ for pairs with $A_{ui} = A_{uj} = 0$ (which means $i$ and $j$ are not bought by $u$).
Slide 29:

Let

$D_A = \{ (u, i, j) \in U \times I \times I \mid A_{ui} = 1, A_{uj} = 0 \}$

and define

$\prod_{u \in U} p(>_u \mid X, Y) := \prod_{(u,i,j) \in D_A} p(i >_u j \mid X, Y)$

where we assume

$p(i >_u j \mid X, Y) = \sigma(X_u^T Y_i - X_u^T Y_j), \quad \sigma(x) = \frac{1}{1 + e^{-x}}$

According to Bayes' theorem, the function to be optimized becomes:

$\prod_u p(X, Y \mid >_u) = \prod_u p(>_u \mid X, Y) \times p(X)\, p(Y) \times \mathrm{const.}$
Slide 30:

Taking the log of this,

$L := \log \left[ \prod_u p(>_u \mid X, Y) \times p(X)\, p(Y) \right]$
$\;\;= \log \prod_{(u,i,j) \in D_A} p(i >_u j \mid X, Y) - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2$
$\;\;= \sum_{(u,i,j) \in D_A} \log \sigma(X_u^T Y_i - X_u^T Y_j) - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2$

Now consider the following problem:

$\max_{X,Y} \left[ \sum_{(u,i,j) \in D_A} \log \sigma(X_u^T Y_i - X_u^T Y_j) - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2 \right]$

This means: find a pair of matrices $X, Y$ which preserves the order of the elements of the input matrix for each $u$.
Slide 31: Computation

The function we want to optimize:

$\sum_{(u,i,j) \in D_A} \log \sigma(X_u^T Y_i - X_u^T Y_j) - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2$

$U \times I \times I$ is huge, so in practice a stochastic method is necessary.

Let the parameters be $\Theta = (X, Y)$. The algorithm is the following:

Repeat:
- Choose $(u, i, j) \in D_A$ randomly.
- Update $\Theta$ with
  $\Theta \leftarrow \Theta + \alpha \frac{\partial}{\partial \Theta} \left( \log \sigma(X_u^T Y_i - X_u^T Y_j) - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2 \right)$

This method is called Stochastic Gradient Descent (SGD); here it is an ascent, since the objective is maximized.
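Putting slides 29-31 together, a hedged NumPy sketch of the BPR update loop (the sampling scheme and hyperparameter names are my choices, not the paper's reference implementation):

    import numpy as np

    def bpr_sgd(A, f=16, lr=0.05, lam=0.01, n_iters=100000, seed=0):
        # A: binary user-item matrix (1 = bought, 0 = unknown).
        rng = np.random.default_rng(seed)
        n_users, n_items = A.shape
        X = 0.1 * rng.standard_normal((n_users, f))  # user factors X_u
        Y = 0.1 * rng.standard_normal((n_items, f))  # item factors Y_i
        for _ in range(n_iters):
            # Sample (u, i, j) in D_A: i bought by u, j not bought by u.
            u = rng.integers(n_users)
            bought = np.flatnonzero(A[u])
            if len(bought) == 0 or len(bought) == n_items:
                continue
            i = rng.choice(bought)
            j = rng.integers(n_items)
            while A[u, j] == 1:
                j = rng.integers(n_items)
            # Ascent on log sigma(x_uij) with x_uij = X_u . (Y_i - Y_j);
            # d/dx log sigma(x) = 1 - sigma(x) = 1 / (1 + e^x).
            g = 1.0 / (1.0 + np.exp(X[u] @ (Y[i] - Y[j])))
            xu = X[u].copy()
            X[u] += lr * (g * (Y[i] - Y[j]) - lam * X[u])
            Y[i] += lr * (g * xu - lam * Y[i])
            Y[j] += lr * (-g * xu - lam * Y[j])
        return X, Y  # rank items for user u by X[u] @ Y[i]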
Slide 32: MyMediaLite

http://www.mymedialite.net/
Open-source implementation of recommendation algorithms
Written in C#
Reasonable computation time
Supports both rating and item prediction
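For a flavor of how this looks in practice, a usage sketch of MyMediaLite's command-line tools; the tool names `rating_prediction` and `item_recommendation` exist, but the exact flag spellings may vary between versions, so check each tool's --help first:

    # item prediction with BPR matrix factorization (flags from memory; verify with --help)
    item_recommendation --training-file=train.txt --test-file=test.txt --recommender=BPRMF

    # rating prediction with (biased) matrix factorization
    rating_prediction --training-file=ratings.txt --test-file=test.txt --recommender=BiasedMatrixFactorization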
Slide 33: Practical Aspects of Recommendation

Problems:
- computational time
- memory consumption
- how many services can be integrated into a single server rack?

Super-high accuracy achieved with a supercomputer is useless for real business.
Slide 34: Concluding Remarks: What is Important for Good Prediction?

Theory
- machine learning
- mathematical optimization

Implementation
- algorithms
- computer architecture
- mathematics

Human factors!
- hand tuning of parameters
- domain-specific knowledge
Slide 35: References (1/2)

For beginners:
- 比戸ら. データサイエンティスト養成読本 機械学習入門編. 技術評論社, 2016.
- T. Segaran. Programming Collective Intelligence. O'Reilly Media, 2007.
- E. Chen. Winning the Netflix Prize: A Summary.
- A. Gunawardana and G. Shani. A Survey of Accuracy Evaluation Metrics of Recommendation Tasks. The Journal of Machine Learning Research, Volume 10, 2009.
Slide 36: References (2/2)

Papers:
- Salakhutdinov, Ruslan, and Andriy Mnih. "Bayesian probabilistic matrix factorization using Markov chain Monte Carlo." Proceedings of the 25th International Conference on Machine Learning. ACM, 2008.
- Sindhwani, Vikas, et al. "One-class matrix completion with low-density factorizations." Data Mining (ICDM), 2010 IEEE 10th International Conference on. IEEE, 2010.
- Rendle, Steffen, et al. "BPR: Bayesian personalized ranking from implicit feedback." Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009.
- Zou, Hui, and Trevor Hastie. "Regularization and variable selection via the elastic net." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67.2 (2005): 301-320.
- Ning, Xia, and George Karypis. "SLIM: Sparse linear methods for top-N recommender systems." Data Mining (ICDM), 2011 IEEE 11th International Conference on. IEEE, 2011.
