Komachi Lab
M1 Ryosuke Miyazaki
2015/10/16
Cross-Lingual Sentiment Analysis using modified BRAE
Sarthak Jain and Shashank Batra
EMNLP 2015
EMNLP 2015 reading group
※ All figures in these slides are cited from the original paper
Abstract
✤ To perform cross-lingual sentiment analysis, they use a parallel corpus pairing a resource-rich language (English) with a resource-poor language (Hindi)
✤ They create a new Hindi movie review dataset for evaluation
✤ Their model significantly outperforms the state of the art, especially when labeled data is scarce
Model and Training
BRAE Model
Bilingually Constrained Recursive Auto-encoder
First, consider a standard recursive auto-encoder (RAE) for each language separately: it constructs a parent vector from two child vectors, then reconstructs the child vectors from the parent.
Training minimizes the reconstruction error (Euclidean distance between the original and reconstructed children).
(Notation in the figures: c denotes a child vector; y and p denote parent vectors.)
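The equations themselves appear only as figures in the slides; for reference, the standard RAE composition and reconstruction they follow can be written as below (the parameter names W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)} are the conventional ones, not necessarily the paper's):

p = f(W^{(1)} [c_1; c_2] + b^{(1)})
[c_1'; c_2'] = f(W^{(2)} p + b^{(2)})
E_{rec}([c_1; c_2]) = \frac{1}{2} \| [c_1; c_2] - [c_1'; c_2'] \|^2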
BRAE Model
Loss Function
They also produce a representation of each phrase from the other language (via a transformation of its translation's vector).
Assumption: a phrase and its correct translation should share the same semantic meaning.
The slide shows (as figures) the loss function for the source language, the transformation loss, the analogous loss for the target language, and the overall objective function.
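The concrete formulas are again only in the figures; in the BRAE formulation of Zhang et al. (2014) that this paper modifies, the semantic (transformation) loss and the joint objective take roughly the form below, where W^{t \to s}, b^{t \to s} are illustrative names for the transformation parameters and \alpha is the joint-error weight tuned later in these slides:

E_{sem}(s|t) = \frac{1}{2} \| p_s - f(W^{t \to s} p_t + b^{t \to s}) \|^2
E(s, t) = \alpha \big( E_{rec}(s) + E_{rec}(t) \big) + (1 - \alpha) \big( E_{sem}(s|t) + E_{sem}(t|s) \big)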
Training (Unsupervised)
✤ Word embeddings are pre-trained with Word2Vec (a minimal sketch follows this list)
✤ 1st: Pre-train p_s and p_t separately with RAEs
✤ 2nd: Fix p_t and train p_s with the BRAE objective
- And vice versa (fix p_s and train p_t)
- Set p_s = p'_s and p_t = p'_t when training reaches a local minimum
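A minimal sketch of the Word2Vec pre-training step mentioned above, using gensim (version 4 or later) with the 80-dimensional vectors reported later in the slides; the toy corpus and all parameter choices here are illustrative, not the authors' setup:

from gensim.models import Word2Vec

# Toy monolingual "corpus"; in the paper this would be HindMonoCorp / English Gigaword.
sentences = [
    ["this", "movie", "was", "great"],
    ["the", "acting", "was", "poor"],
]

# Train 80-dimensional skip-gram embeddings (vector_size matches the slide's setting).
model = Word2Vec(sentences, vector_size=80, window=5, min_count=1, sg=1, workers=1)

vec = model.wv["movie"]   # an 80-dimensional embedding for "movie"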
Training (Supervised)
✤ Modification for classifying sentiment
✤ Softmax and cross-entropy error functions are added for the source language only (the resource-rich language); see the sketch after this list
✤ In this phase, a penalty term is included in the reconstruction error
✤ The transformation weights (θ^t_s, θ^s_t) are not updated in this phase
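The modified objective itself is shown only as a figure; the softmax / cross-entropy component added on the source side has the standard form below, where W_{label} and b_{label} are illustrative names for the sentiment classifier parameters and y is the one-hot gold label:

\hat{y} = \mathrm{softmax}(W_{label} \, p_s + b_{label})
E_{ce}(p_s, y) = - \sum_{k} y_k \log \hat{y}_k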
Training (Supervised)
✤ 1st: Update only the parameters related to the resource-rich language
(ce: cross-entropy)
✤ 2nd: Update only the parameters related to the resource-poor language
- Since the gold labels are associated only with the resource-rich side, they use the transformation to obtain a sentiment distribution
✤ Predict the overall sentiment associated with the resource-poor text
- Concatenate p_t and p'_s, then train a softmax regression with a weight matrix over the concatenation (a minimal sketch follows)
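A minimal, self-contained sketch of this cross-lingual prediction step, assuming p_t is the target-language (resource-poor) phrase vector and p'_s its transformed image in the source space; all names, dimensions, and the random vectors are illustrative placeholders, not the authors' code:

import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 80, 4                          # 80-dim phrase vectors, 4 rating classes {1, 2, 3, 4}

p_t = rng.normal(size=d)                      # target-language (resource-poor) phrase vector
p_s_prime = rng.normal(size=d)                # its transformation into the source-language space

W_label = rng.normal(scale=0.01, size=(n_classes, 2 * d))   # softmax-regression weight matrix
b_label = np.zeros(n_classes)

def softmax(z):
    z = z - z.max()                           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

features = np.concatenate([p_t, p_s_prime])               # concat p_t and p'_s
sentiment_dist = softmax(W_label @ features + b_label)    # distribution over rating classes
predicted_rating = int(np.argmax(sentiment_dist)) + 1     # map class index to a rating in {1, ..., 4}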
Experiments
Experimental Settings

For the unsupervised phase
✤ HindMonoCorp 0.5 (44.49M sentences) and the English Gigaword corpus for word embeddings
✤ Bilingual sentence-aligned data from HindEnCorp (273.9k sentence pairs)

For the supervised phase (MOSES is used to obtain bilingual phrase pairs)
✤ IMDB11 dataset (25,000 positive, 25,000 negative reviews)
✤ Rotten Tomatoes review dataset (4 rating classes, {0, 1, 2, 3})
Experimental Settings

Evaluation datasets
✤ Rating-based Hindi movie review dataset (2,945 movie reviews, ratings in {1, 2, 3, 4}) — they create this new dataset for evaluation
✤ Standard movie reviews dataset (125 positive, 125 negative)

Hyperparameters
✤ learning rate: 0.05
✤ word vector dimension: 80
✤ weight α in the joint error of the BRAE: 0.2
✤ λ_L: 0.001
✤ λ_BRAE: 0.0001
Tuned by grid search with cross-validation:
✤ κ: 0.2, η: 0.35
✤ λ_p: 0.01
✤ λ_S: 0.1
✤ λ_T: 0.04
Results
✤ BRAE-U: neither includes the penalty term nor fixes the transformation weights
✤ BRAE-P: only includes the penalty term
✤ BRAE-F: includes both
(Tables: monolingual and cross-lingual results for each dataset; confusion matrix for BRAE-F)
Results
(Figures: accuracy vs. amount of labeled training data used; accuracy vs. amount of unlabeled training data used)
✤ Their model achieves the best performance even when using 50% less data than the other methods
Analysis
✤ Since movement in the semantic vector space is restricted, their model has an advantage on unknown words
Example: “Her acting of a schizophrenic mother made our hearts weep”
- The baseline classifies it as negative because of “weep”, but their model correctly predicts positive
✤ Their model was also able to correctly infer the word sense of polysemous words
Error Analysis

Difficult situations
✤ Conflicting sentiments about two different aspects of the same object
✤ Presence of subtle contextual references

Examples of the latter case
✤ “His poor acting generally destroys a movie, but this time it didn’t”
- The correct label is positive, but the predicted rating is 2
✤ “This movie made his last one look good”
- Wrongly predicted as rating 3