Talk at KTH, 14 May 2014, about matrix factorization, latent-factor and neighborhood models, graphs and energy diffusion for recommender systems, as well as what makes good/bad recommendations.
2. About me
Interests in:
• IR, RecSys, Big Data, ML, NLP, SNA,
Graphs, CV, Data Visualization, Discourse
Analysis
History:
• 2002-2006: almost-BA Computer Science @
Amsterdam Tech Uni (dropped out in 2006)
• 2006-2010: BA Cultural Anthropology @
Leiden & Amsterdam Uni’s
• 2010-2012: MA Social Anthropology @
Stockholm Uni
• 2011-Current: Working @ Vionlabs
se.linkedin.com/in/roelofpieters/
roelof@vionlabs.com
3. Say Hello!
S:t Eriksgatan 63
112 33 Stockholm - Sweden
Email: hello@vionlabs.com
Tech company here in Stockholm with Geeks
and Movie lovers…
Since 2009:
• Digital ecosystems for network operators,
cable TV companies, and film distributors
such as Tele2/Comviq, Cyberia, and
Warner Bros
• Various software and hardware hacks for
different companies: Webbstory, Excito,
Spotify, Samsung
Focus since 2012:
• Movie and TV recommendation
service
FoorSee
9. Information Retrieval
• Recommender
Systems as part of
Information Retrieval
[diagram: a USER poses a Query; Retrieval returns relevant Document(s) from a collection]
• Information Retrieval is
the activity of obtaining
information resources
relevant to an
information need from a
collection of information
resources.
10. IR: Measure Success
• Recall: success in retrieving all correct documents
• Precision: success in retrieving the most relevant
documents
• Given a set of query terms and a set of documents,
select only the most relevant documents
(precision), and preferably all the relevant ones
(recall)
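The two measures are easy to sanity-check in code; a minimal sketch with hypothetical document ids:

```python
# precision = |relevant ∩ retrieved| / |retrieved|
# recall    = |relevant ∩ retrieved| / |relevant|
relevant = {1, 2, 3, 4}    # hypothetical relevant document ids
retrieved = {2, 3, 5}      # hypothetical retrieved document ids

hits = len(relevant & retrieved)
precision = hits / len(retrieved)   # 2 of 3 retrieved are relevant
recall = hits / len(relevant)       # 2 of 4 relevant were found

print(round(precision, 2), recall)
```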
14. Taxonomy of RS
• Collaborative Filtering (CF)
• Content Based Filtering (CBF)
• Knowledge Based Filtering (KBF)
• Hybrid
15. Taxonomy of RS
• Collaborative Filtering (CF)!
• Content Based Filtering (CBF)
• Knowledge Based Filtering (KBF)
• Hybrid
16. Collaborative Filtering:
• relies on past user behavior
• Implicit feedback
• Explicit feedback
• requires no gathering of external data
• sparse data
• domain free
• cold start problem
33. 2000+: Commercial CFs
• 2001: Amazon starts using item-based collaborative
filtering (patent filed in 1998)
• 2000: Pandora starts the Music Genome
Project, where each song "is analyzed using up to 450
distinct musical characteristics by a trained music analyst."
(http://www.pandora.com/about/mgp)
• 2006-2009: Netflix Prize contest: 2 of the many algorithms
put in use by Netflix replacing "Cinematch" are Matrix
Factorization (SVD) and Restricted Boltzmann
Machines (RBM)
(http://www.netflixprize.com)
52. Example: Item to Query

Title          Price  Genre   Rating
The Avengers   5      Action  3.7
Spiderman II   10     Action  4.5

user query q:
"price(6) AND genre(Adventure) AND rating(4)"
feature weights: price 0.22, genre 0.33, rating 0.45

Weighted Sum:
Sim(q, "The Avengers") =
0.22 × (1 − 1/25) + 0.33 × 0 + 0.45 × (1 − 0.3/5) = 0.6342
(price diff of 1 over the 1–25 range; no genre match; rating diff of 0.3 over the 0–5 range)
Sim(q, "Spiderman II") = 0.5898
(0.6348 if we count rating 4.5 > 4 as a match)
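The weighted sum above translates directly to code; a minimal sketch, with the slide's weights and value ranges hard-coded:

```python
# Weighted-sum similarity between a query and an item: numeric features
# contribute 1 - |diff|/range, the categorical genre contributes 1 on an
# exact match and 0 otherwise.

def sim(query, item, weights, ranges):
    total = 0.0
    for f in ("price", "rating"):                 # numeric features
        diff = abs(query[f] - item[f])
        total += weights[f] * (1 - diff / ranges[f])
    # categorical feature: match or no match
    total += weights["genre"] * (1.0 if query["genre"] == item["genre"] else 0.0)
    return total

weights = {"price": 0.22, "genre": 0.33, "rating": 0.45}
ranges = {"price": 25, "rating": 5}               # price 1-25, rating 0-5
q = {"price": 6, "genre": "Adventure", "rating": 4}
avengers = {"price": 5, "genre": "Action", "rating": 3.7}
spiderman = {"price": 10, "genre": "Action", "rating": 4.5}

print(round(sim(q, avengers, weights, ranges), 4))   # 0.6342
print(round(sim(q, spiderman, weights, ranges), 4))  # 0.5898
```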
54. Example: Item to Item Similarity

Title  ReleaseTime           Genres                   Actors  Rating
TA     90s, start 90s, 1993  Action, Comedy, Romance  X,Y,Z   3.7
S2     90s, start 90s, 1991  Action                   W,X,Z   4.5

feature types: ReleaseTime is a set of hierarchically related symbols;
Genres and Actors are arrays of Booleans; Rating is numeric

Sim(X,Y) = 1 − d(X,Y)
or
Sim(X,Y) = exp(−d(X,Y))
where 0 ≤ wi ≤ 1, and i = 1..n (the number of features).

55. Example: Item to Item Similarity

feature vectors:
TA: x1 = (90s, S90s, 1993), x2 = (1,1,1), x3 = (0,1,1,1), x4 = 3.7
S2: x1 = (90s, S90s, 1991), x2 = (1,0,0), x3 = (1,1,0,1), x4 = 4.5
• the weights of the features are all the same
• the weights of the categories within "ReleaseTime" differ: W = (0.5, 0.3, 0.2)
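A sketch of how such heterogeneous features might be combined, assuming equal feature weights, the (0.5, 0.3, 0.2) release-time category weights from the slide, and a simple match-fraction similarity for the Boolean arrays:

```python
# Heterogeneous item-item similarity: each feature type gets its own local
# similarity, then the per-feature scores are averaged (equal weights).

def sim_release(a, b, level_w=(0.5, 0.3, 0.2)):
    # hierarchical symbols: decade, half-decade, exact year
    return sum(w for w, x, y in zip(level_w, a, b) if x == y)

def sim_bool(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def sim_numeric(a, b, rng=5.0):
    return 1 - abs(a - b) / rng

def item_sim(i1, i2):
    parts = [
        sim_release(i1["release"], i2["release"]),
        sim_bool(i1["genres"], i2["genres"]),
        sim_bool(i1["actors"], i2["actors"]),
        sim_numeric(i1["rating"], i2["rating"]),
    ]
    return sum(parts) / len(parts)     # equal feature weights

TA = {"release": ("90s", "S90s", 1993), "genres": (1, 1, 1),
      "actors": (0, 1, 1, 1), "rating": 3.7}
S2 = {"release": ("90s", "S90s", 1991), "genres": (1, 0, 0),
      "actors": (1, 1, 0, 1), "rating": 4.5}

print(round(item_sim(TA, S2), 3))
```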
59. Example: Item to User

Title          Roelof  Klas  Mo  Max  x(Action)  x(…)
The Avengers   5       1     2   5    0.8        0.1
Spiderman II   ?       2     1   ?    0.9        0.2
American Pie   2       5     ?   1    0.05       0.9

• x(1) = (1, 0.8, 0.1)ᵀ (intercept term first)
• For each user u, learn a parameter vector θ(u) ∈ ℝ^(n+1).
• Predict user u's rating of movie i as (θ(u))ᵀ x(i).

60.-62. Example: Item to User

• users: θ(1) Roelof, θ(2) Klas, θ(3) Mo, θ(4) Max; movies x(1), x(2), x(3)
• predict Mo's (θ(3)) rating of American Pie (x(3)):
  x(3) = (1, 0.05, 0.9)ᵀ
  θ(3) = (0, 0, 5)ᵀ
• dot product: (θ(3))ᵀ x(3) = 0·1 + 0·0.05 + 5·0.9 ≈ 4.5
• fill in 4.5 for Mo's rating of American Pie

63. Example: Item to User

Title          Roelof  Klas  Mo   Max  x(Action)  x(…)
The Avengers   5       1     2    5    0.8        0.1
Spiderman II   ≈4      2     1    ≈4   0.9        0.2
American Pie   2       5     4.5  1    0.05       0.9

How do we learn these user factor parameters?
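The prediction step is just a dot product; a minimal sketch using the slide's numbers:

```python
# Predicted rating = theta . x, with per-user parameters theta and
# per-movie features x (intercept term first).

def predict(theta, x):
    return sum(t * xi for t, xi in zip(theta, x))

american_pie = [1, 0.05, 0.9]   # [intercept, x_action, x_2]
theta_mo = [0, 0, 5]            # Mo's learned parameters, from the slide

print(predict(theta_mo, american_pie))   # ≈ 4.5
```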
64. Example: Item to User

problem formulation:
• r(i,u) = 1 if user u has rated movie i, otherwise 0
• y(i,u) = rating by user u on movie i (if defined)
• θ(u) = parameter vector for user u
• x(i) = feature vector for movie i
• For user u, movie i, predicted rating: (θ(u))ᵀ x(i)
• m(u) = number of movies rated by user u

• learning θ(u):

min over θ(u):
1/(2m(u)) · Σ_{i: r(i,u)=1} ( (θ(u))ᵀ x(i) − y(i,u) )² + λ/(2m(u)) · Σ_{k=1}^{n} (θk(u))²

Say what?
(A. Ng. 2013)

65. Example: Item to User

problem formulation:
• learning θ(u) (the constant m(u) can be dropped without changing the minimizer):

min over θ(u):
1/2 · Σ_{i: r(i,u)=1} ( (θ(u))ᵀ x(i) − y(i,u) )²   ← squared error term (predicted − actual)
+ λ/2 · Σ_{k=1}^{n} (θk(u))²                        ← regularization term

• learning θ(1), θ(2), …, θ(nu) for "all" users:

min over θ(1),…,θ(nu):
1/2 · Σ_{u=1}^{nu} Σ_{i: r(i,u)=1} ( (θ(u))ᵀ x(i) − y(i,u) )² + λ/2 · Σ_{u=1}^{nu} Σ_{k=1}^{n} (θk(u))²

remember: y = rating; θ(u) = parameter vector for a user; x = feature vector for a movie
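The per-user objective can be written out directly; a minimal sketch, using Mo's two observed ratings from the earlier table and (as is conventional) not regularizing the intercept:

```python
# Regularized squared-error cost for one user:
# J(theta) = 1/2 * sum over rated movies of (theta.x - y)^2
#          + lambda/2 * sum over k>=1 of theta_k^2

def cost(theta, X, y, rated, lam):
    """X: movie feature vectors (intercept first); y: this user's ratings;
    rated: indices of movies the user rated; lam: regularization strength."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    err = sum((dot(theta, X[i]) - y[i]) ** 2 for i in rated)
    reg = sum(t ** 2 for t in theta[1:])   # skip the intercept term
    return 0.5 * err + (lam / 2) * reg

X = [[1, 0.8, 0.1], [1, 0.9, 0.2]]   # the two movies Mo rated
y = [2, 1]
theta = [0, 0, 5]

print(cost(theta, X, y, rated=[0, 1], lam=0.0))   # 1.125
```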
67. Collaborative Filtering:
• User-based approach!
• Find a set of users Si, most similar to ui, who rated item j
• compute the predicted score Vij as a function of the ratings of
item j given by Si (usually a weighted linear combination)
• Item-based approach!
• Find a set Sj of the items most similar to item j which were
rated by ui
• compute the predicted score Vij as a function of ui's ratings for Sj
68. Collaborative Filtering:
• Two primary models:
• Neighborhood models!
• focus on relationships between movies or users
• Latent Factor models
• focus on factors inferred from (rating) patterns
• computerized alternative to naive content creation
• predicts rating by dot product of user and movie locations
on known dimensions
(Sarwar, B. et al. 2001)
70. Neighbourhood Methods
• Problems:
• Ratings biased per user
• Ratings biased towards certain items
• Ratings change over time
• Ratings can rapidly change through real time
events (Oscar nomination, etc)
• Bias correction needed
71. Latent Factors
• latent factor models map users and items into a
latent feature space
• user's feature vector denotes the user's affinity to
each of the features
• item's feature vector represents how much the
item itself is related to the features.
• rating is approximated by the dot product of the
user feature vector and the item feature vector.
74. Latent Factor models
• Matrix Factorization:
• characterizes items + users by vectors of
factors inferred from (ratings or other user-
item related) patterns
• Given a list of users and items, and user-item
interactions, predict user behavior
• can deal with sparse data (matrix)
• can incorporate additional information
75. Matrix Factorization
• Dimensionality reduction
• Principal Components Analysis, PCA
• Singular Value Decomposition, SVD
• Non Negative Matrix Factorization, NNMF
76. Matrix Factorization: SVD
SVD, Singular Value Decomposition
• transforms correlated variables into a set of
uncorrelated ones that better expose the various
relationships among the original data items.
• identifies and orders the dimensions along
which data points exhibit the most variation.
• allows us to find the best approximation of the
original data points using fewer dimensions.
77. SVD: Matrix Decomposition

A = U Λ Vᵀ
• U: document-to-concept similarity matrix
• V: term-to-concept similarity matrix
• Λ: its diagonal elements give the 'strength' of each concept
(pic by Xavier Amatriain 2013)
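A tiny numpy demo of the idea: factor a small ratings-like matrix (values made up for illustration) and rebuild a rank-2 approximation that keeps only the strongest "concepts":

```python
import numpy as np

# rows = documents/items, columns = terms/users
A = np.array([[5, 1, 1, 4],
              [4, 2, 1, 5],
              [3, 5, 4, 1],
              [1, 4, 5, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2                                   # keep the 2 strongest concepts
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(s, 2))                   # concept 'strengths', largest first
print(np.round(A_k, 1))                 # rank-2 approximation of A
```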
78. SVD for Collaborative Filtering

• each item i is associated with a vector qi ∈ ℝᶠ
• each user u is associated with a vector pu ∈ ℝᶠ
• qi measures the extent to which the item possesses the factors
• pu measures the user's interest in items that score high on those factors
• user-item interactions are modeled as dot products in the factor space, measured by qiᵀpu
• user u's rating of item i is approximated as r̂ui = qiᵀpu
79. SVD for Collaborative Filtering

• compute the mappings qi, pu ∈ ℝᶠ
• factor the user-item matrix
• imputation (Sarwar et al. 2000)
• model only the observed ratings + regularization (Funk 2006; Koren 2008)
• learn the factor vectors qi and pu by minimizing the (regularized) squared
error on the set of known ratings r̂ui, leading to the learning objective:

min over q,p: Σ_{(u,i) known} ( rui − qiᵀpu )² + λ·( ‖qi‖² + ‖pu‖² )
83. Stochastic Gradient Descent

• optimizable by Stochastic Gradient Descent
(SGD) (Funk 2006)
• incremental learning
• loops through the ratings and computes the prediction
error for the predicted rating on rui:
  eui = rui − qiᵀpu
• modify the parameters by a magnitude proportional
to γ in the opposite direction of the gradient, giving the
learning rules:
  qi ← qi + γ·(eui·pu − λ·qi)
and
  pu ← pu + γ·(eui·qi − λ·pu)
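A minimal Funk-style SGD sketch; the toy ratings and the hyperparameters (2 factors, γ = 0.05, λ = 0.02, 500 passes) are assumptions for illustration:

```python
import random

random.seed(0)
# toy (user, item, rating) triples: 4 users, 2 items
ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 4), (1, 1, 2),
           (2, 0, 1), (2, 1, 5), (3, 0, 2), (3, 1, 4)]
n_users, n_items, f = 4, 2, 2
gamma, lam = 0.05, 0.02            # learning rate and regularization

p = [[random.uniform(-0.1, 0.1) for _ in range(f)] for _ in range(n_users)]
q = [[random.uniform(-0.1, 0.1) for _ in range(f)] for _ in range(n_items)]

def predict(u, i):
    return sum(pk * qk for pk, qk in zip(p[u], q[i]))

for _ in range(500):
    for u, i, r in ratings:
        err = r - predict(u, i)            # e_ui = r_ui - q_i . p_u
        for k in range(f):                 # step against the gradient
            pk, qk = p[u][k], q[i][k]
            p[u][k] += gamma * (err * qk - lam * pk)
            q[i][k] += gamma * (err * pk - lam * qk)

print(round(predict(0, 0), 2))   # close to the observed rating of 5
```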
85. Alternating Least Squares
• optimizable by Alternating Least Squares (ALS) (2006)
• with both qi and pu unknown, the minimization is not convex
→ cannot be solved directly for a minimum
• ALS rotates between fixing the qi's and the pu's
• fixing qi or pu makes the optimization problem quadratic
→ the one left free can now be solved optimally
• each qi and pu is computed independently of the other item/user factors:
parallelization
• best for implicit data (dense matrix)
86. Alternating Least Squares
• rotates between fixing the qi's and the pu's
• when all pu's are fixed, recompute the qi's by solving a least
squares problem (and vice versa):
• fix the user-factor matrix P, so that each qi becomes a ridge regression:
  qi = (PᵀP + λI)⁻¹ Pᵀ ri   where ri holds the known ratings of item i
• or fix Q similarly:
  pu = (QᵀQ + λI)⁻¹ Qᵀ ru
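A compact ALS sketch with numpy on a fully observed toy matrix (values and hyperparameters are made up); each half-step is a ridge-regression solve with the other factor matrix held fixed:

```python
import numpy as np

R = np.array([[5, 1, 1, 4],
              [4, 2, 1, 5],
              [3, 5, 4, 1],
              [1, 4, 5, 2]], dtype=float)    # items x users
f, lam = 2, 0.1
rng = np.random.default_rng(0)
P = rng.standard_normal((R.shape[1], f))     # user factors, users x f

for _ in range(20):
    # fix P, solve every item's q_i from (P^T P + lam I) q_i = P^T r_i
    Q = np.linalg.solve(P.T @ P + lam * np.eye(f), P.T @ R.T).T
    # fix Q, solve every user's p_u symmetrically
    P = np.linalg.solve(Q.T @ Q + lam * np.eye(f), Q.T @ R).T

print(np.round(Q @ P.T, 1))                  # approximates R
```

Because each `q_i` (and each `p_u`) is solved independently, the inner solves can be parallelized across items and users, as the slide notes.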
87. Develop Further…
• Add Biases:
  r̂ui = μ + bi + bu + qiᵀpu
• Add Input Sources, e.g. implicit feedback:
  pu in r̂ui becomes (pu + …)
• Add a Temporal Aspect / time-varying parameters
• Vary Confidence Levels of Inputs
(pic: Lei Guo 2012)
(Salakhutdinov & Mnih 2008; Koren 2010)
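The biased prediction is a one-liner; all the example values below are made up for illustration:

```python
# r̂_ui = mu + b_i + b_u + q_i . p_u
mu = 3.6            # global mean rating
b_i = 0.5           # this item is rated above average
b_u = -0.3          # this user rates below average
q_i = [1.2, -0.4]   # item factors
p_u = [0.8, 0.1]    # user factors

r_hat = mu + b_i + b_u + sum(a * b for a, b in zip(q_i, p_u))
print(round(r_hat, 2))   # 4.72
```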
90. • So what if we don’t have any content factors
known?
• Probabilistic Matrix Factorization to the rescue!
• describe each user and each movie by a
small set of attributes
91. Probabilistic Matrix
Factorization
• Imagine we have the following rating data:
we could say that Roelof and Klas like Action
movies but don't like Comedies, while it's the
opposite for Mo and Max

Title          Roelof  Klas  Mo  Max
The Avengers   5       1     1   4
Spiderman II   4       2     1   5
American Pie   3       5     4   1
Shrek          1       4     5   2
92. Probabilistic Matrix
Factorization
• This could be represented by the PMF model by using low-dimensional
vectors to describe each user and each movie.
• example latent vectors:
  AV: [0, 0.3]   SPII: [1, 0.3]   AP: [1, 0.3]   SH: [1, 0.3]
  Roelof: [0, 3]   Klas: [8, 3]   Mo: [10, 3]   Max: [10, 3]
• predict a rating by the dot product
of the user vector with the item vector
• So predicting Klas' rating for
Spiderman II = 8·1 + 3·0.3 = 8.9
• But descriptions of users
and movies are not known
ahead of time.
• PMF discovers such latent
characteristics
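Using the slide's example vectors as printed (some values may have been garbled in extraction), prediction is again a dot product:

```python
# Dot-product prediction from example latent vectors.
movies = {"AV": [0, 0.3], "SPII": [1, 0.3], "AP": [1, 0.3], "SH": [1, 0.3]}
users = {"Roelof": [0, 3], "Klas": [8, 3], "Mo": [10, 3], "Max": [10, 3]}

def predict(user, movie):
    return sum(a * b for a, b in zip(users[user], movies[movie]))

print(predict("Klas", "SPII"))   # 8*1 + 3*0.3 = 8.9 (up to float rounding)
```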
97. k-Nearest Neighbors
• non-parametric, lazy learning algorithm
• data as a feature space
• simple and fast
• k-NN classification
• k-NN regression: density estimation
98. kNN: Classification
• Classify Y from several features Xi
• compare points p = (X1p, X2p) and q = (X1q, X2q) by squared
Euclidean distance:
  d²pq = (X1p − X1q)² + (X2p − X2q)²
• find the k nearest neighbors
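A minimal k-NN classifier over a 2-D feature space; the training points and labels below are made up for illustration:

```python
from collections import Counter

def d2(p, q):
    # squared Euclidean distance in 2-D
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def knn_classify(x, data, k=3):
    """data: list of ((x1, x2), label) pairs; majority vote of k nearest."""
    nearest = sorted(data, key=lambda item: d2(x, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.1, 0.2), "calm"), ((0.2, 0.1), "calm"),
         ((0.9, 0.8), "tense"), ((0.8, 0.9), "tense"), ((0.7, 0.7), "tense")]

print(knn_classify((0.15, 0.15), train))   # calm
```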
99. kNN: Classification
• input: content-extracted emotional values of 561
movies (thanks: Johannes Östling :)
• e.g. the dimensions of the movie "Hamlet" [chart on slide]
104. Rating predictions:
• Pos — Neg
• Average
• Bayesian (Weighted) Estimates
• Lower bound of Wilson score confidence interval
for a Bernoulli parameter
105. Rating predictions:
• Pos — Neg!
• Average
• Bayesian (Weighted) Estimates
• Lower bound of Wilson score confidence interval
for a Bernoulli parameter
106. P — N
• (Positive ratings) - (Negative ratings)
• Problematic:
(http://www.urbandictionary.com/define.php?term=movies)
107. Rating predictions:
• Pos — Neg
• Average!
• Bayesian (Weighted) Estimates
• Lower bound of Wilson score confidence interval
for a Bernoulli parameter
109. Rating predictions:
• Pos — Neg
• Average
• Bayesian (Weighted) Estimates!
• Lower bound of Wilson score confidence interval
for a Bernoulli parameter
110. Ratings
• Top Ranking at IMDB (a Bayesian estimate):
• Weighted Rating (WR) =
(v / (v+m)) × R + (m / (v+m)) × C!
• Where:
R = average rating for the movie (mean)
v = number of votes for the movie
m = minimum votes required to be listed in the Top 250
(currently 25000)
C = the mean vote across the whole report (currently 7.0)
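The formula translates directly to code; the example movie below is hypothetical:

```python
# IMDB-style weighted rating: WR = v/(v+m) * R + m/(v+m) * C

def weighted_rating(R, v, m=25000, C=7.0):
    return (v / (v + m)) * R + (m / (v + m)) * C

# a movie rated 9.0 by only 5000 voters is pulled toward the global mean:
print(round(weighted_rating(9.0, 5000), 2))   # 7.33
```

With few votes the global mean C dominates; as v grows, WR approaches the movie's own average R.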
112. Bayesian (Weighted)
Estimates @ IMDB
[chart: specific vs. global weight as a function of the number of votes, for m = 1250]
• the specific part applies to
individual items
• the global part is
constant over all
items
• and can be
precalculated
114. Rating predictions:
• Pos — Neg
• Average
• Bayesian (Weighted) Estimates
• Lower bound of Wilson score confidence
interval for a Bernoulli parameter
115. Wilson Score interval
• introduced in 1927 by Edwin B. Wilson
• Given the ratings I have, there is a 95% chance
that the "real" fraction of positive ratings is at
least what?
116. Wilson Score interval
• used by Reddit for comments ranking
• “rank the best comments highest
regardless of their submission time”
• algorithm introduced to Reddit by
Randall Munroe (the author of xkcd).
• treats the vote count as a statistical sampling of a
hypothetical full vote by everyone, much as in an
opinion poll.
117. Wilson Score interval
• Endpoints for the Wilson score interval,
as used in Reddit's comment-ranking function:
(phat + z*z/(2*n) - z*sqrt((phat*(1-phat) + z*z/(4*n))/n)) / (1 + z*z/n)
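The same expression as a function; z = 1.96 corresponds to the 95% confidence level mentioned earlier:

```python
from math import sqrt

def wilson_lower(pos, n, z=1.96):
    """Lower bound of the Wilson score interval: pos positive out of n votes."""
    if n == 0:
        return 0.0
    phat = pos / n
    return ((phat + z*z/(2*n)
             - z * sqrt((phat*(1 - phat) + z*z/(4*n)) / n))
            / (1 + z*z/n))

# 60/100 positive beats 6/10 positive despite the same ratio,
# because more votes mean a tighter interval:
print(wilson_lower(60, 100) > wilson_lower(6, 10))   # True
```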
125. Graph Based Approaches
• What's a Graph?
• Why Graphs?!
• Who uses Graphs?
• Talking with Graphs
• Graph example: Recommendations
• Graph example: Data Analysis
126. Why Graphs?
It's the nature of today's data, which is getting:
• more complex (social networks…)
• more connected (wikis, pingbacks, RDF, collaborative
tagging)
• more semi-structured (wikis, RSS)
• more decentralized: democratization of content production
(blogs, twitter*, social media*)
and just: MORE
127. Data Trend
“Every 2 days we
create as much
information as we did
up to 2003”
— Eric Schmidt, Google
136. Graph Based Approaches
• What's a Graph?
• Why Graphs?
• Who uses Graphs?
• Talking with Graphs!
• Graph example: Recommendations
• Graph example: Data Analysis
137. Talking with Graphs
• Graphs can be queried!
• no unions for comparison, but traversals!
• many different graph traversal patterns
(xkcd)
138. graph traversal patterns
• traversals can be seen as a diffusion
process over a graph!
• "Energy" moves over a graph and spreads
out through the network!
(Ghahramani 2012)
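A tiny sketch of the diffusion idea on a hypothetical graph: start with all energy on one node and repeatedly spread each node's energy equally over its neighbours:

```python
# Energy diffusion over an undirected graph given as adjacency lists.
graph = {
    "me": ["A", "B"],
    "A": ["me", "B", "C"],
    "B": ["me", "A"],
    "C": ["A"],
}

def diffuse(energy, steps=2):
    for _ in range(steps):
        nxt = {n: 0.0 for n in graph}
        for node, e in energy.items():
            share = e / len(graph[node])   # split equally among neighbours
            for nb in graph[node]:
                nxt[nb] += share
        energy = nxt
    return energy

e = diffuse({"me": 1.0, "A": 0.0, "B": 0.0, "C": 0.0})
# energy is conserved and reaches indirectly connected nodes like C:
print(round(sum(e.values()), 6), round(e["C"], 3))
```

Nodes that accumulate the most energy after a few steps are the strongest recommendation candidates.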
144. Graph Based Approaches
• What's a Graph?
• Why Graphs?
• Who uses Graphs?
• Talking with Graphs
• Graph example: Recommendations!
• Graph example: Data Analysis
145. Diffusion Example:
Recommendations
• Energy diffusion is an easy algorithm for
making recommendations!
• different paths make different
recommendations!
• different problems can be solved with
different paths over the same graph/domain!
• recommendation = "jumps" through the
data
147. Friend
Recommendation
• Who are my friends' friends?
  G.V('me').outE[knows].inV.outE.inV
• Who are my friends' friends that are not
me or my friends?
  G.V('me').outE[knows].inV.aggregate(x).outE.inV{!x.contains(it)}
148. Product
Recommendation
• Who likes what I like —> of these things, what
do they like which I don't already like
(pic by Marko A. Rodriguez, 2011)
149. Product
Recommendation
• Who likes what I like:
  G.V('me').outE[likes].inV.inE[likes].outV
• —> of these things, what do they like:
  G.V('me').outE[likes].inV.inE[likes].outV.outE[likes].inV
• —> which I don't already like:
  G.V('me').outE[likes].inV.aggregate(x).inE[likes].outV.outE[likes].inV{!x.contains(it)}
159. References
• D. Jannach, G. Friedrich and M. Zanker (2011) "Recommender Systems",
International Joint Conference on Artificial Intelligence, Barcelona
• Z. Ghahramani (2012) "Graph-based Semi-supervised Learning", MLSS,
La Palma
• D. Goldberg, D. Nichols, B.M. Oki and D. Terry (1992) "Using
collaborative filtering to weave an information tapestry", Communications
of the ACM 35 (12)
• M. Hunger (2013) "Data Modeling with Neo4j",
http://www.slideshare.net/neo4j/data-modeling-with-neo4j-25767444
• S. Funk (2006) "Netflix Update: Try This at Home",
sifter.org/~simon/journal/20061211.html
160. References
• Y. Koren (2008) "Factorization meets the Neighborhood: A
Multifaceted Collaborative Filtering Model", SIGKDD,
http://public.research.att.com/~volinsky/netflix/kdd08koren.pdf
• R. Bell & Y. Koren (2007) "Scalable Collaborative Filtering with
Jointly Derived Neighborhood Interpolation Weights"
• Y. Koren (2010) "Collaborative filtering with temporal dynamics"
• A. Ng (2013) Machine Learning, ml-004 @ Coursera
• A. Paterek (2007) "Improving Regularized Singular Value
Decomposition for Collaborative Filtering", KDD
161. References
• P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom and J. Riedl (1994)
"GroupLens: An Open Architecture for Collaborative Filtering of
Netnews", Proceedings of ACM CSCW
• B. Sarwar et al. (2000) "Application of Dimensionality Reduction in
Recommender System — A Case Study", WebKDD
• B. Sarwar, G. Karypis, J. Konstan, J. Riedl (2001) "Item-Based
Collaborative Filtering Recommendation Algorithms"
• R. Salakhutdinov & A. Mnih (2008) "Probabilistic Matrix
Factorization"
• xkcd.com
162. Take Away Points
• Focus on the best Question, not just the Answer…!
• Best Match (most similar) vs Most Popular!
• Personalized vs Global Factors!
• Less is More ?!
• What is relevant?