From Word Embeddings To Document Distances
We present the Word Mover’s Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local co-occurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to “travel” to reach the embedded words of another document. We show that this distance metric can be cast as an instance of the Earth Mover’s Distance, a well-studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straightforward to implement. Further, we demonstrate on eight real-world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedentedly low k-nearest neighbor document classification error rates.
3. The Question
How can we check whether two statements/documents
convey the same meaning?
“Obama speaks to the media in Illinois”
and
“The President greets the press in Chicago.”
Same content. Different words.
4. Existing Ways
Documents are commonly represented as:
● Bag of Words (BOW)
● Term frequency-inverse document frequency (TF-IDF) of the
words
Both are based on counts of common words:
● #Dimensions = #Vocabulary (thousands)
● Stuck if the documents share no words:
“Obama” != “President”
5. Existing Ways
BOW
● A model in which a text (such as a sentence or a document) is
represented as the bag (multiset) of its words, disregarding grammar
and even word order but keeping multiplicity.
● Commonly used in document classification, where the (frequency of)
occurrence of each word is used as a feature for training a classifier.
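As a minimal sketch (not from the slides), a BOW vector over a fixed vocabulary can be built in plain Python:

    from collections import Counter

    def bow_vector(tokens, vocabulary):
        # Map a token list to raw word counts over a fixed vocabulary.
        counts = Counter(tokens)
        return [counts[word] for word in vocabulary]

    docs = [["obama", "speaks", "to", "the", "media", "in", "illinois"],
            ["the", "president", "greets", "the", "press", "in", "chicago"]]
    vocabulary = sorted({w for d in docs for w in d})
    vectors = [bow_vector(d, vocabulary) for d in docs]
    # The two vectors overlap only on filler words like "the" and "in",
    # even though the sentences mean nearly the same thing.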
6. Existing Ways
Tf-idf Model
A statistical measure used to evaluate how important a word is
to a document in a collection or corpus. The importance increases
proportionally with the number of times the word appears in the
document but is offset by the frequency of the word in the corpus.
Tf (term frequency) score: how often the term occurs in the document
Idf (inverse document frequency) score: how rare the term is across the corpus
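An illustrative tf-idf sketch in plain Python (using the common log-scaled idf variant; not code from the slides):

    import math
    from collections import Counter

    def tf_idf(docs):
        # Return one {word: tf-idf weight} dict per tokenized document.
        n_docs = len(docs)
        doc_freq = Counter(w for doc in docs for w in set(doc))
        weighted = []
        for doc in docs:
            counts = Counter(doc)
            weighted.append({w: (c / len(doc)) * math.log(n_docs / doc_freq[w])
                             for w, c in counts.items()})
        return weighted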
8. Existing Ways
BM25 Okapi
● A bag-of-words retrieval function that ranks a set of documents based
on the query terms appearing in each document, regardless of the
inter-relationships between the query terms within a document.
● A ranking function that extends TF-IDF with term-frequency saturation
and document-length normalization for each word w in a document D.
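A minimal sketch of one common BM25 variant (the parameters k1 and b are the usual defaults; not code from the slides):

    import math
    from collections import Counter

    def bm25_score(query, doc, docs, k1=1.5, b=0.75):
        # Score one tokenized document against a query with Okapi BM25.
        n_docs = len(docs)
        avgdl = sum(len(d) for d in docs) / n_docs
        counts = Counter(doc)
        score = 0.0
        for term in query:
            df = sum(1 for d in docs if term in d)  # document frequency
            idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
            tf = counts[term]
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        return score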
10. Existing Ways
Drawbacks of previous models
● BOW and TF-IDF representations are often not suitable for measuring
document distances: in such high-dimensional spaces, the vectors of
two documents are frequently near-orthogonal.
● Another significant drawback is that these representations do not
capture the distance between individual words.
11. Proposed Solutions
Low-dimensional latent features models
● Latent Semantic Indexing (LSI)
○ Assumes that words that are close in meaning will occur in
similar pieces of text.
○ Uses singular value decomposition (SVD) on the BOW representation
to arrive at a semantic feature space.
● Latent Dirichlet Allocation (LDA)
○ A generative model for text documents that learns representations
for documents as distributions over word topics.
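As an illustrative sketch of the LSI idea (assuming scikit-learn; not from the slides), truncated SVD over tf-idf vectors yields a low-dimensional semantic space:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    docs = ["Obama speaks to the media in Illinois",
            "The President greets the press in Chicago"]

    tfidf = TfidfVectorizer().fit_transform(docs)  # high-dimensional sparse vectors
    lsi = TruncatedSVD(n_components=2)             # project onto 2 latent dimensions
    latent = lsi.fit_transform(tfidf)              # one dense row per document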
12. Proposed Solutions
More Models...
● mSDA: Marginalized Stacked Denoising Autoencoder
○ A representation learned from stacked denoising autoencoders (SDAs),
marginalized for fast training.
○ Generally, SDAs have been shown to achieve state-of-the-art performance
on document sentiment analysis tasks.
13. Need for an Improved Solution
● While these low-dimensional approaches produce a more coherent
document representation than BOW, they often do not improve the
empirical performance of BOW on distance-based tasks.
Hence we need something better...
14. Improved Solution
Word Mover’s Distance (WMD)
● A novel distance function between text documents.
● The WMD distance measures the dissimilarity between two text
documents as the minimum amount of distance that the embedded
words of one document need to “travel” to reach the embedded words
of another document.
● Based on recent results in word embeddings that learn semantically
meaningful representations for words from local co-occurrences in
sentences.
● Based on the well-known concept of the Earth Mover’s Distance
15. Word Mover’s Distance (WMD)
● Built on top of Google’s word2vec and based on its embeddings
● Scales to very large datasets because word2vec embeddings can be
trained efficiently
● Semantic relationships are often preserved in vector operations on word
vectors, e.g., vec(Berlin) - vec(Germany) + vec(France) is close to
vec(Paris), as the sketch below illustrates
● Represents text documents as weighted point clouds of embedded
words
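A minimal sketch of that analogy property, assuming gensim and its downloadable pretrained Google News vectors (not part of the slides):

    import gensim.downloader

    # Downloads ~1.6 GB of pretrained word2vec vectors on first use.
    vectors = gensim.downloader.load("word2vec-google-news-300")

    # vec(Berlin) - vec(Germany) + vec(France) should land near vec(Paris).
    print(vectors.most_similar(positive=["Berlin", "France"],
                               negative=["Germany"], topn=1))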
Let's get familiar with the terms used in WMD…
16. Word Mover’s Distance (WMD)
Word Embedding
● The collective name for a set of language modeling and feature
learning techniques in natural language processing (NLP) where words
or phrases from the vocabulary are mapped to vectors of real numbers.
● Stores each word as a point in space, where it is represented by a
vector with a fixed number of dimensions (commonly 300).
● For example, “Hello” might be represented as: [0.4, -0.11, 0.55, 0.3 . . .
0.1, 0.02]
● Dimensions are basically projections along different axes, more of a
mathematical concept.
17. Word Mover’s Distance (WMD)
nBOW representation
● If word i appears ci times in a document, its normalized bag-of-words
(nBOW) weight is di = ci / Σj=1..n cj
● The vector d is a point on the (n−1)-dimensional simplex of word
distributions.
● Two documents with different unique words will lie in different regions of
this simplex; however, these documents may still be semantically close.
● The vector is very sparse, as most words of the vocabulary do not appear
in any given document.
Word travel cost
● A natural measure of word dissimilarity is the Euclidean distance between
words in the word2vec embedding space.
● The cost associated with “traveling” from word i to word j:
c(i, j) = ||xi − xj||2
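A small sketch of the nBOW weights and the pairwise travel-cost matrix, assuming the embeddings are available as a {word: numpy vector} dict (illustrative helper names, not from the paper):

    import numpy as np
    from collections import Counter

    def nbow(tokens, vocabulary):
        # Normalized bag of words: counts divided by document length.
        counts = Counter(tokens)
        total = sum(counts.values())
        return np.array([counts[w] / total for w in vocabulary])

    def cost_matrix(vocabulary, embeddings):
        # c(i, j) = Euclidean distance between embeddings of words i and j.
        X = np.stack([embeddings[w] for w in vocabulary])
        return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)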
18. Word Mover’s Distance (WMD)
Document distance
● Uses the “travel cost” between two words to create a distance between two
documents.
● Each word i in d may be transformed into any word j in d’, in total or in parts:
Tij ≥ 0 denotes how much of word i in d travels to word j in d’ (T ∈ Rn×n)
● To transform d entirely into d’, we ensure that the entire outgoing flow
from word i equals di, i.e. Σj Tij = di, and that the entire incoming flow
into word j equals d’j, i.e. Σi Tij = d’j
● The distance between the two documents is then the minimum (weighted)
cumulative cost required to move all words from d to d’
19. Word Mover’s Distance (WMD)
Transportation problem
● The minimum cumulative cost of moving d to d’ given the constraints is
provided by the solution to the following linear program:
minimize over T ≥ 0: Σi,j Tij c(i, j)
subject to: Σj Tij = di for all i, and Σi Tij = d’j for all j
● This optimization is a special case of the earth mover’s distance metric, a
well-studied transportation problem for which specialized solvers have
been developed; hence we can use those solvers to calculate the WMD.
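A minimal sketch of this linear program with scipy (a direct generic solve for illustration, not the specialized EMD solvers the slide refers to):

    import numpy as np
    from scipy.optimize import linprog

    def wmd(d1, d2, C):
        # min sum_ij T_ij * C_ij  s.t.  row sums = d1, column sums = d2, T >= 0
        n1, n2 = C.shape
        A_eq = np.vstack([np.kron(np.eye(n1), np.ones((1, n2))),   # outgoing flow
                          np.kron(np.ones((1, n1)), np.eye(n2))])  # incoming flow
        b_eq = np.concatenate([d1, d2])
        result = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        return result.fun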
20. Word Mover’s Distance (WMD)
(Figure) The components of the WMD metric between a query D0 and two
sentences D1, D2 (with equal BOW distance). The arrows represent flow
between two words and are labeled with their distance contribution.
(Figure) The flow between two sentences D3 and D0 with different numbers
of words. This mismatch causes the WMD to move words to multiple
similar words.
22. Word Mover’s Distance (WMD)
Complexity
● Best average time complexity of solving the WMD optimization problem:
O(p³ log p), where p is the number of unique words in the documents.
● For datasets with many unique words (i.e., high-dimensional) and/or a
large number of documents, solving the WMD optimal transport problem
can become prohibitive.
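In practice a ready-made implementation can be used; a sketch assuming gensim and its pretrained vectors (not from the slides; newer gensim versions require the POT package for this call):

    import gensim.downloader

    vectors = gensim.downloader.load("word2vec-google-news-300")

    doc1 = "obama speaks to the media in illinois".split()
    doc2 = "the president greets the press in chicago".split()

    # KeyedVectors.wmdistance solves the WMD transport problem internally.
    print(vectors.wmdistance(doc1, doc2))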
23. Word Mover’s Distance (WMD)
Datasets Used
● Twitter
○ A collection of tweets labeled with sentiment polarity
● BBC Sport
○ Documents in 5 categories: athletics, cricket, football, rugby, tennis
● Classic
○ Documents from academic collections such as CACM, MED, etc.
● 20 Newsgroups
○ Documents divided into 20 categories such as space, medical, etc.
26. Word Mover’s Distance (WMD)
GitHub link:
● https://github.com/buggynap/Word-Movers-Distance
Reference:
● Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, Kilian Q. Weinberger. “From Word
Embeddings to Document Distances.” ICML 2015.