Word Embeddings: Why
the Hype ?
Hady Elsahar
Hady.elsahar@univ-st-etienne.fr
slides available at :
Introduction
● Why vectors for natural language?
● Conventional representations for words and documents
● Methods of dimensionality reduction
Deep learning models:
● Continuous Bag of Words model
● Other models (Skip-Gram model, GloVe)
● Evaluation of Word Vectors
● Readings and references
Introduction: Why Vectors
Document Classification or Clustering :
● Documents composed of words
● Similar documents will contain similar words
● Machine learning algorithms love vectors
● A machine learning algorithm needs to know
which words are significant for which category
Bag of Words Model
“Represent each document by the bag of words it contains”
d1 : Mary loves Movies, Cinema and Art Class 1 : Arts
d2 : John went to the Football game Class 2 : Sports
d3 : Robert went for the Movie Delicatessen Class 1 : Arts
Mary Loves Movies Cinema Art John Went to the Delicatessen Robert Football Game and for
d1 1 1 1 1 1 1
d2 1 1 1 1 1 1
d3 1 1 1 1 1
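A minimal sketch of the bag-of-words representation above in Python, assuming scikit-learn is available; the three documents are the ones from this slide, and the exact vocabulary order may differ from the table.

# Bag-of-words: each document becomes a row of word counts over the vocabulary.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "Mary loves Movies, Cinema and Art",        # d1: Arts
    "John went to the Football game",           # d2: Sports
    "Robert went for the Movie Delicatessen",   # d3: Arts
]

vectorizer = CountVectorizer()                  # one column per vocabulary word
X = vectorizer.fit_transform(docs)              # sparse |D| x |V| count matrix

print(vectorizer.get_feature_names_out())       # the vocabulary (column labels)
print(X.toarray())                              # each row is one document's word counts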
Bag of Words Model
Can a machine learning algorithm learn that “the” and “for” are unimportant
words?
● Yes, but it will need lots of labeled training data
What to do ?
● Use hand crafted features (weighting features for words)
● Make lots of them
● Keep doing this for 50 years
● Regret later .. cry hard
Bag of Words Model + Weighting Features
Weighting features example TF-IDF
● TF-IDF ~= Term Frequency / Document frequency
● Motivation: words that appear in a large number of documents are not
significant
Mary Loves Movies Cinema Art John Went to the Delicatessen Robert Football Game and for
d1 0.3779 0.3779 0.3779 0.3779 0.3779 0.0001
d2 0.4402 0.001 0.02 0.4558 0.458
d3 0.001 0.01 0.01 0.458 0.0001
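A hedged sketch of TF-IDF weighting on the same three documents with scikit-learn's TfidfVectorizer; the numbers in the table above are illustrative, so the values printed here will differ.

# TF-IDF: words that appear in many documents (e.g. "the") receive low weights.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Mary loves Movies, Cinema and Art",
    "John went to the Football game",
    "Robert went for the Movie Delicatessen",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)                       # |D| x |V| matrix of TF-IDF weights
print(dict(zip(tfidf.get_feature_names_out(), X.toarray()[0])))   # weights for d1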
Word Vector Representations
Documents can be represented by words, but how do we represent the words
themselves?
“You shall know a word by the
company it keeps” (J.R. Firth, 1957)
Word Vector Representations
Slide a window over a big corpus of text and count word co-occurrences within
it.
1. I enjoy flying.
2. I like NLP.
3. I like deep learning.
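A small sketch in plain Python of counting co-occurrences over the three sentences above, with an assumed symmetric window of size 1.

# Word-word co-occurrence counts inside a sliding window.
from collections import defaultdict

sentences = [
    "i enjoy flying".split(),
    "i like nlp".split(),
    "i like deep learning".split(),
]

window = 1
cooc = defaultdict(int)
for sent in sentences:
    for i, word in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                cooc[(word, sent[j])] += 1

print(cooc[("i", "like")])   # 2: "i" occurs next to "like" in two sentences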
Bag of words Representations: Drawbacks
● High dimensionality and very sparse!
● Unable to capture word order
○ “good but expensive” and “expensive but good” will have the same representation.
● Unable to capture semantic similarities (mostly because of sparsity)
○ “boy”, “girl” and “car”
○ “Human”, “Person” and “Giraffe”
Bag of words Representations: Drawbacks
How to overcome this?
● Keep using hand crafted features
● Make lots of them
● Keep doing this for 50 years
● Regret later .. cry hard
Or … Dimensionality reduction
Dimensionality Reduction using Matrix
factorization
Singular value decomposition
where : σ1
> σ2
.. > σn
> 0
Singular value decomposition
● Lower dimensionality k << |V|
● Keep only the most significant projections of your vector space
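A sketch of this reduction with NumPy: take the truncated SVD of a (here toy, random) word-word co-occurrence matrix and keep only the k most significant directions as dense word vectors.

# Truncated SVD: X ≈ U_k Σ_k V_kᵀ; rows of U_k Σ_k are k-dimensional dense word vectors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(7, 7)).astype(float)    # toy |V| x |V| co-occurrence counts

U, S, Vt = np.linalg.svd(X, full_matrices=False)     # singular values come sorted, largest first
k = 2                                                # keep only the top-k directions
word_vectors = U[:, :k] * S[:k]                      # dense k-dimensional vector per word

print(word_vectors.shape)                            # (7, 2)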
Latent semantic Indexing / Analysis (1994)
U : dense word vector representations
V : dense document vector representations
LSA / LSI and HAL methods made huge advances in document retrieval and
semantic similarity.
Deep learning Word Embeddings (2003)
“A Neural Probabilistic Language Model” Bengio et al. 2003
Original task, “Language Modeling”:
- Predict the next word given a sequence of previous words.
- Useful in speech recognition, autocompletion, machine translation.
“The cat chills on a mat”: calculate P( mat | the, cat, chills, on, a )
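For intuition only: a count-based estimate of the same kind of conditional probability on an assumed toy corpus. Such raw counts break for unseen histories, which is exactly the curse-of-dimensionality problem the neural model addresses.

# Count-based trigram estimate of P(word | previous two words).
from collections import Counter

corpus = "the cat chills on a mat . the cat sits on a mat .".split()
n = 3

history_counts = Counter(tuple(corpus[i:i + n - 1]) for i in range(len(corpus) - n + 1))
ngram_counts = Counter(tuple(corpus[i:i + n]) for i in range(len(corpus) - n + 1))

def prob(word, history):
    # P(word | history) = count(history + word) / count(history)
    h = tuple(history)
    return ngram_counts[h + (word,)] / history_counts[h] if history_counts[h] else 0.0

print(prob("mat", ["on", "a"]))   # 1.0 on this toy corpus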
Deep learning Word Embeddings (2003)
“A Neural Probabilistic Language Model” Bengio et al. 2003
Quoting from the paper:
“This is intrinsically difficult because of the curse of dimensionality: a word
sequence on which the model will be tested is likely to be different from all the
word sequences seen during training.”
“We propose to fight the curse of dimensionality by learning a distributed
representation for words”
Continuous Bag of Words model (CBOW)
Tomas Mikolov et al. (2013)
The model predicts the current word given its context.
Scan text in a large corpus with a sliding window:
Input: x0, x1, x3, x4    Output: x2
“The Cat Chills on a mat” → x0 x1 x2 x3 x4 x5
(so the window predicts x2 = “Chills” from its surrounding words)
Continuous Bag of Words model (CBOW)(2013)
|V| : vocabulary size
x_i ∈ R^(1 × |V|) : one-hot vector representation of each context word
y_i ∈ R^(|V| × 1) : one-hot representation of the correct middle word (expected output)
[Figure: the one-hot context vectors x0, x1, x3, x4 enter a black box, which outputs y_i]
Continuous Bag of Words model (CBOW)(2013)
|V| : vocabulary size
x_i ∈ R^(1 × |V|) : one-hot vector representation of each context word
y_i ∈ R^(|V| × 1) : one-hot representation of the correct middle word (expected output)
[Architecture: x0, x1, x3, x4 → W(1) → Average → W(2) → softmax → y_i]
Continuous Bag of Words model (CBOW)(2013)
n : arbitrary length of our word embeddings
W(1) ∈ R^(n × |V|) : input word vector matrix
u_i ∈ R^(n × 1) : representation of x_i after multiplication with the input matrix
[Figure: multiplying each one-hot context vector x0, x1, x3, x4 with W(1) picks out that word's dense n-dimensional vector u0, u1, u3, u4]
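A tiny NumPy check of this step, under the layout drawn in the figure (embeddings stored as the rows of a |V| × n matrix): multiplying a one-hot vector by W(1) simply selects that word's dense vector u_i.

# One-hot multiplication is just a row lookup: x (1 x |V|) @ W1 (|V| x n) -> u (1 x n).
import numpy as np

V, n = 7, 3
rng = np.random.default_rng(0)
W1 = rng.standard_normal((V, n))     # one n-dimensional vector per vocabulary word

x = np.zeros(V)
x[4] = 1.0                           # one-hot vector for the word with index 4

u = x @ W1                           # dense representation of that word
assert np.allclose(u, W1[4])         # identical to simply indexing row 4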
Continuous Bag of Words model (CBOW)(2013)
h_i ∈ R^(n × 1)
h_i = average of u0, u1, u3, u4
[Figure: the four dense vectors u0, u1, u3, u4 are averaged element-wise into the hidden vector h_i]
Continuous Bag of Words model (CBOW)(2013)
W(2) ∈ R^(n × |V|) : output word vector matrix
Z ∈ R^(|V| × 1) : output vector representation of x_i (one score per vocabulary word)
Z = h_i W(2)
[Figure: multiplying h_i with W(2) produces the score vector Z]
Continuous Bag of Words model (CBOW)(2013)
How do we compare Z to y_i?
Just take the largest value as the correct class? … no: Softmax.
Softmax: squashes a K-dimensional vector of arbitrary real values into
a K-dimensional vector of real values in the range (0, 1).
[Figure: the one-hot target y_i next to the raw score vector Z]
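A minimal softmax matching the description above (a numerically stabilized sketch; subtracting the maximum does not change the result).

# Softmax: squash arbitrary real scores into values in (0, 1) that sum to 1.
import numpy as np

def softmax(z):
    z = z - np.max(z)        # numerical stability only
    e = np.exp(z)
    return e / e.sum()

Z = np.array([32, 14, 23, 0.22, 2, 14, 55, 19], dtype=float)
y_hat = softmax(Z)
print(y_hat.sum())           # 1.0; the largest score gets the largest probability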
Continuous Bag of Words model (CBOW)(2013)
ŷ = softmax( Z )
y_i ∈ R^(|V| × 1) : one-hot representation of the correct middle word
[Figure: the scores Z are squashed into the probability vector ŷ, which is compared against the one-hot y_i]
Continuous Bag of Words model (CBOW)(2013)
● We need the estimated distribution ŷ to be as close as possible to the true answer y.
● One common error function is the cross-entropy H(ŷ, y) (why?):
H(ŷ, y) = − Σ_j y_j log( ŷ_j )
● Since y is a one-hot vector, this reduces to:
H(ŷ, y) = − log( ŷ_c ), where c is the index of the correct middle word.
Continuous Bag of Words model (CBOW)(2013)
A perfect language model would assign probability ŷ_c = 1 to the correct word,
so the loss would be 0.
Optimization task:
● Learn W(1) and W(2) to minimize the cost function over the whole dataset.
● Using back-propagation, update the weights in W(1) and W(2) (a small end-to-end sketch follows below).
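A compact end-to-end sketch (NumPy) of one CBOW training example, tying the previous slides together: lookup, average, scores, softmax, cross-entropy, and a plain SGD update. It illustrates the idea only, not word2vec's optimized implementation (which adds tricks such as hierarchical softmax or negative sampling); the toy sizes and learning rate are assumptions.

# One CBOW training step: forward pass, cross-entropy loss, back-propagation, SGD update.
import numpy as np

rng = np.random.default_rng(0)
V, n = 7, 3                                    # vocabulary size, embedding length
W1 = rng.standard_normal((V, n)) * 0.1         # input embeddings, one row per word
W2 = rng.standard_normal((n, V)) * 0.1         # output embeddings
lr = 0.1                                       # learning rate

context_ids = [0, 1, 3, 4]                     # x0, x1, x3, x4
target_id = 2                                  # the middle word x2

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Forward pass
h = W1[context_ids].mean(axis=0)               # average of u0, u1, u3, u4   (shape: n)
Z = h @ W2                                     # one score per vocabulary word (shape: |V|)
y_hat = softmax(Z)                             # predicted distribution over the vocabulary
loss = -np.log(y_hat[target_id])               # cross-entropy against the one-hot target

# Backward pass and update
dZ = y_hat.copy()
dZ[target_id] -= 1.0                           # dL/dZ = y_hat - y
dW2 = np.outer(h, dZ)                          # gradient for the output matrix
dh = W2 @ dZ                                   # gradient reaching the averaged hidden vector
W2 -= lr * dW2
for i in context_ids:
    W1[i] -= lr * dh / len(context_ids)        # each context row gets an equal share

print(float(loss))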
Continuous Bag of Words model (CBOW)(2013)
[Figure: the trained input matrix W(1), with |V| rows and n columns]
W(1):
● After training over a large corpus,
● each row is a dense vector for one word in the vocabulary.
● These word vectors capture better semantic and syntactic
information than other dense vectors (shown later).
● These word vectors perform better across a wide range of NLP tasks (shown
later).
Skip Gram model (2013)
The Skip-Gram model inverts CBOW: it predicts the surrounding context words given the current word.
GloVe: Global Vectors for Word
Representation, Pennington et al. (2014)
Motivation:
ice - steam = ( solid, gas, water, fashion ) ?
● A distributional model should capture words that
appear with “ice” but not with “steam”.
● Hence, it should do well on the semantic analogy task (explained
later).
GloVe: Global Vectors for Word
Representation, Pennington et al. (2014)
Starts from a co-occurrence matrix X:
P( solid | ice ) = X_solid,ice / X_ice
GloVe: Global Vectors for Word
Representation, Pennington et al. (2014)
Optimize the objective function:
w_i : word vector of word i
P_ik : probability that word k occurs in the context of word i
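For reference, the published GloVe objective is a weighted least-squares fit of word-vector dot products to log co-occurrence counts, J = Σ_{i,k} f(X_ik) (w_i · w̃_k + b_i + b̃_k − log X_ik)², where f caps the influence of very frequent pairs. A toy NumPy sketch of evaluating this loss on assumed random parameters:

# Evaluate the GloVe weighted least-squares loss for toy parameters.
import numpy as np

rng = np.random.default_rng(0)
V, n = 5, 3
X = rng.integers(1, 50, size=(V, V)).astype(float)   # toy co-occurrence counts (all > 0 here)

W = rng.standard_normal((V, n)) * 0.1                 # word vectors w_i
W_tilde = rng.standard_normal((V, n)) * 0.1           # context vectors w~_k
b = np.zeros(V)                                       # word biases
b_tilde = np.zeros(V)                                 # context biases

def f(x, x_max=100.0, alpha=0.75):
    # GloVe weighting: grows with the count but saturates at 1 for counts >= x_max
    return np.minimum((x / x_max) ** alpha, 1.0)

err = W @ W_tilde.T + b[:, None] + b_tilde[None, :] - np.log(X)
loss = np.sum(f(X) * err ** 2)
print(loss)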
OK, but are word vectors really good?!
Evaluation of word vectors:
1. Intrinsic evaluation: check that they encode semantic
information.
2. Extrinsic evaluation: check that they are useful for other NLP
tasks (the hype).
Intrinsic Evaluation of Word Vectors
Word similarity task
Intrinsic Evaluation of Word Vectors
Results from : GloVe: Global Vectors for Word Representation, Pennington et al 2014.
Word similarity dataset “WS353”: http://www.cs.technion.ac.il/~gabr/resources/data/wordsim353/
Word similarity task
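In practice the word similarity task scores each word pair by the cosine similarity of its vectors and correlates those scores with the human judgments (e.g. Spearman correlation on WS353). A sketch with assumed toy 3-dimensional vectors:

# Cosine similarity between word vectors, the score used in word similarity evaluation.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {                       # toy stand-ins for trained embeddings
    "boy":  np.array([0.9, 0.1, 0.0]),
    "girl": np.array([0.8, 0.2, 0.1]),
    "car":  np.array([0.0, 0.1, 0.9]),
}

print(cosine(vectors["boy"], vectors["girl"]))   # high: similar words
print(cosine(vectors["boy"], vectors["car"]))    # low: unrelated words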
Intrinsic Evaluation of Word Vectors
Word Analogy task
Intrinsic Evaluation of Word Vectors
Word Analogy task
Evaluation data : https://word2vec.googlecode.com/svn/trunk/questions-words.txt
: capital-world
Abuja Nigeria Accra Ghana
: gram3-comparative
bad worse big bigger
: gram2-opposite
acceptable unacceptable aware unaware
: gram1-adjective-to-adverb
amazing amazingly apparent apparently
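Analogy questions like these are usually answered with vector arithmetic: for “a is to b as c is to ?”, return the vocabulary word (excluding a, b and c) whose vector is closest, by cosine similarity, to vec(b) − vec(a) + vec(c). A sketch assuming a dict of trained vectors called `vectors`:

# 3CosAdd analogy: argmax over the vocabulary of cosine(x, b - a + c), excluding a, b, c.
import numpy as np

def analogy(a, b, c, vectors):
    target = vectors[b] - vectors[a] + vectors[c]
    best, best_sim = None, -np.inf
    for word, vec in vectors.items():
        if word in (a, b, c):
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# e.g. analogy("bad", "worse", "big", vectors) should return "bigger"
# given good embeddings and a large enough vocabulary.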
Intrinsic Evaluation of Word Vectors
Word Analogy task
Extrinsic Evaluation of Word Vectors
Part of Speech Tagging :
input : Word Embeddings are cool
output: Noun Noun Verb Adjective
Named Entity recognition :
input : Nous sommes charlie hebdo
output: Out Out Person Person
Extrinsic Evaluation of Word Vectors
* systems: POS: (Toutanova et al. 2003), NER: (Ando & Zhang 2005)
** 130,000-word embedding trained on Wikipedia and Reuters with 11 word window, 100 unit hidden
layer – for 7 weeks! – then supervised task training
*** Features are character suffixes for POS and a gazetteer for NER
“Unsupervised Pretraining”
(the secret sauce)
Problem:
1. Task T1: few training data (D1).
2. Hand-crafted feature representation of the inputs, R1.
3. A machine learning algorithm M1 on T1 using R1 performs badly.
Solution:
1. Create a task T2 with lots of available training data (D2)
(unsupervised), but it has to have the same input as T1.
2. Solve T2 using D2 and learn a representation of the inputs, R2.
3. R2 + M1 beats R1 + M1 on task T1.
“Unsupervised Pretraining”
(the secret sauce)
But what if we also:
learn D3 while doing T1, using R2 and M1?
Even better results !!
* Same architecture as C&W 2011, but word embeddings are kept constant during the supervised
training phase
** C&W is unsupervised pre-training + a supervised NN + the feature model from the previous slide
Pretrained word vectors ready for use
word2vec:
https://code.google.com/p/word2vec/
GloVe:
http://nlp.stanford.edu/projects/glove/
Dependency based:
https://levyomer.wordpress.com/...
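A sketch of loading and querying pretrained vectors with the gensim library (assumed installed); the file name below is only a placeholder for whichever pretrained file you download from the links above.

# Load word2vec-format vectors with gensim and query them.
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

print(wv.most_similar("movie", topn=5))                                        # nearest neighbours
print(wv.most_similar(positive=["woman", "king"], negative=["man"], topn=1))   # analogy query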
Other types of embeddings:
Other word embeddings:
● Dependency-Based Word Embeddings: Levy et al. 2014 : http://www.aclweb.org.....
● Sentiment Analysis Word Embeddings: http://ai.stanford.edu/~ang/pap.....
Knowledge base embeddings:
● Structured Embeddings (SE) (Bordes et al. '11)
● Collective Matrix Factorization (RESCAL) (Nickel et al. '11)
● Neural Tensor Networks (Socher et al. '13)
● TATEC (Garcia-Duran et al. '14)
Other types of embeddings:
Joint embeddings (text + knowledge bases):
● Joint Learning of Words and Meaning Representations (Bordes et al. '12)
● Knowledge Graph and Text Jointly Embedding (Wang et al. '14)
References:
Before Word2Vec:
Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. "Learning representations by back-propagating errors." Cognitive modeling 5 (1988): 3.
http://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf
Bengio, Yoshua, et al. "A neural probabilistic language model." The Journal of Machine Learning Research 3 (2003): 1137-1155.
http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf
References:
Word2vec (CBOW and Skip Gram):
Mikolov, Tomas, et al. "Efficient Estimation of Word Representations in Vector Space." arXiv preprint arXiv:1301.3781 (2013).
Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. "Distributed Representations of Words and Phrases and their Compositionality." In Proceedings of NIPS, 2013.
Mikolov, Tomas, Wen-tau Yih, and Geoffrey Zweig. "Linguistic Regularities in Continuous Space Word Representations." In Proceedings of NAACL HLT, 2013.
Pennington, Jeffrey, et al. "GloVe: Global Vectors for Word Representation." (2014). http://www-nlp.stanford.edu/pubs/glove.pdf
Further Readings:
Negative sampling: http://papers.nips.cc/paper/....
Energy based learning : http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf
Joint learning (learning tasks simultaneously): http://ronan.collobert.com/pub...
Learning Resources
Deep Learning for NLP ( Stanford Course )
http://cs224d.stanford.edu/
Deep Learning for Natural Language Processing (Without Magic): NAACL 2013 Tutorial
http://nlp.stanford.edu/courses/NAACL2013/
