Affective Analysis and Modeling of Spoken
Dialogue Transcripts
Thesis presentation
Elisavet Palogiannidi
Committee
Alexandros Potamianos (supervisor)
Polychronis Koutsakis (co-supervisor)
Aikaterini Mania
School of Electronic and Computer Engineering
Technical University of Crete
Chania, Crete
11 July 2016
What if there was no emotion?
What if there were no computers?
What is the relationship between computers and emotions?
What is it all about?
Outline
1 Introduction
Motivation
Emotion
Contributions
2 Affective models
Semantic Affective Model
Compositional Affective Model
Sentence level Affective Models
3 Experiments and Results
Semantic - Affective model
Compositional Affective Model
Sentence level affective models
4 Q&A
5 Conclusions
Motivation
Emotion detection from text
“Emotion is perceived in text and it can be elicited by its
content and form”
Goal: Assign continuous, high-quality affective scores to lexical tokens of various granularity, using semantic and affective features, for multiple languages
Motivation: “Semantic similarity implies affective similarity”
Affective text labelling at the core of many applications
Motivation
Applications
Affective text applications
Sentiment analysis of Social Media, news, product reviews
Emotion detection on spoken dialogue
Multimodal applications
Semantic affective model (SAM) [Malandrakis et al. 2013]
Has been applied to tweets, SMS and news headlines
Is applicable to words or n-grams and to numerous dimensions:
Valence, Arousal, Dominance, Concreteness, Imageability,
Familiarity, Gender Ladenness
We focus on the prediction of Valence, Arousal, Dominance
Emotion
Continuous Affective space
Introduction
• Goals: 1) Create an emotional resource for the Greek language
2) Use it to automatically estimate affective ratings of words
• Manually created resources have low language coverage (about 1K words)
• Computational models are used to expand manually created affective lexica
Affective (Emotional) Dimensions
Valence: negative to positive
Arousal: calming to exciting
Dominance: controlled to controller
Valence-Arousal Distributions Across Languages
• Valence-Arousal distributions of the affective lexica of different languages
[Figure: Greek affective lexicon ratings in the Valence-Arousal plane (e.g., happy, laugh, victory vs. anger, failure, commit suicide) and the V-shape observed across languages]
Contributions
Annotated Resources: Greek ANEW
We created the first Greek Affective Lexicon
Contributions
Models for multiple languages
We extended SAM to multiple languages
We improved the mapping from semantic to affective space
We tried various contextual features and weighting schemes
Contributions
Compositional Affective models
The meaning of a complex lexical structure p is composed from the meaning of its constituent words α, β
Compositional approaches in vector-based semantics:
Composition of the semantic representations of the phrase's constituent words
Combine by addition and multiplication [Mitchell and Lapata, 2008; Mitchell and Lapata, 2010]
[Baroni and Zamparelli, 2010]: compositional approach based on POS tags
We assume that composition occurs in the affective space:
Combine affective ratings rather than the semantic representations of the constituent words
Contributions
Sentiment Analysis in Twitter
We achieved state-of-the-art performance, winning a worldwide competition: Subtask B of SemEval 2016 Task 4, "Sentiment Analysis in Twitter", using Semantic-Affective Model Adaptation
Overview of the submitted system:
Semantic-Affective system (Baseline)
Tools: POS-tagging, multiword expressions, hashtag expansion
Semantic similarity implies affective similarity: SAM ["Distributional Semantic Models for Affective Text Analysis", Malandrakis et al. 2013]
υ̂(tj) = a0 + Σ_{i=1..N} ai υ(wi) S(tj, wi), where υ̂(tj) is the affective rating of the unknown token tj, w1..N are the seeds, υ(wi) and ai are the affective rating and the weight of wi, a0 is the bias, and S is the semantic similarity between tokens; applied on words and word pairs
Two-step feature selection, Naive Bayes (NB) tree classifier
Compositional model
Goal: estimate the affect of word pairs more accurately than the non-compositional models
Compositionality: the meaning of the whole is constructed from the meaning of the parts; novelty: applied on the affective space
Adopt a modifier-head structure: p = m.h, e.g., p = "green parrot" and p = "dead parrot" (m: green/dead, h: parrot); m modifies the affect of h, and each modifier has a unique behavior
Continuous affective spaces: Valence - Arousal - Dominance
Topic Modeling-based System (TM)
Adapt the semantic space on each tweet
LDA → detect topics (16) → split corpus
Contributions
Publications
1 Elisavet Palogiannidi, Elias Iosif, Polychronis Koutsakis and Alexandros Potamianos, “Valence, Arousal
and Dominance Estimation for English, German, Greek, Portuguese and Spanish Lexica using Semantic
Models”, in Proceedings of Interspeech, September 2015.
2 Elisavet Palogiannidi, Elias Iosif, Polychronis Koutsakis and Alexandros Potamianos “Affective lexicon
creation for the Greek language”, in Proceedings of the 10th edition of the Language Resources and
Evaluation Conference (LREC) 2016.
3 Elisavet Palogiannidi, Polychronis Koutsakis and Alexandros Potamianos, “A semantic-affective
compositional approach for the affective labelling of adjective-noun and noun-noun pairs”, in Proceedings
of WASSA 2016.
4 Elisavet Palogiannidi, Athanasia Kolovou, Fenia Christopoulou, Filippos Kokkinos, Elias Iosif, Nikolaos
Malandrakis, Harris Papageorgiou, Shrikanth Narayanan and Alexandros Potamianos, “Tweester:
Sentiment analysis in twitter using semantic-affective model adaptation”, in Proceedings of the 10th
International Workshop on Semantic Evaluation (SemEval) 2016.
5 Jose Lopes, Arodami Chorianopoulou, Elisavet Palogiannidi, Helena Moniz, Alberto Abad, Katerina Louka,
Elias Iosif and Alexandros Potamianos, “The SpeDial Datasets: Datasets for Spoken Dialogue Systems
Analytics”, in Proceedings of the 10th edition of the Language Resources and Evaluation Conference
(LREC) 2016.
6 Spiros Georgiladakis, Georgia Athanasopoulou, Raveesh Meena, Jose Lopes, Arodami Chorianopoulou,
Elisavet Palogiannidi, Elias Iosif, Gabriel Skantze and Alexandros Potamianos “Root Cause Analysis of
Miscommunication Hotspots in Spoken Dialogue Systems”, in Proceedings of Interspeech 2016 (to appear).
Semantic Affective Model
Semantic models
Building block for machine learning in NLP
Corpus based approach: Distributional Semantic Models
(DSM)
Semantic information extracted from word frequencies
(co-occurrence counts, context vectors)
Context based semantic similarities
“Similarity of context implies similarity of meaning” [Harris ’54]
Contextual windows that contain words or character n-grams
Binary or PPMI weighting scheme
Semantic similarity between two words: cosine of their
contextual feature vectors
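To make this concrete, here is a minimal sketch (not the thesis implementation) of building contextual co-occurrence vectors, applying a PPMI weighting, and taking the cosine of two vectors; the tokenized-corpus input, window size, and helper names are illustrative assumptions:

```python
import numpy as np

def context_vectors(sentences, window=2):
    """Count co-occurrences of each word with context words in a +/-window."""
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    counts[idx[w], idx[s[j]]] += 1
    return vocab, counts

def ppmi(counts):
    """Positive pointwise mutual information weighting of a co-occurrence matrix."""
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total
    p_c = counts.sum(axis=0, keepdims=True) / total
    p_wc = counts / total
    pmi = np.where(p_wc > 0, np.log2(p_wc / (p_w * p_c + 1e-12) + 1e-12), 0.0)
    return np.maximum(pmi, 0.0)

def cosine_similarity(u, v):
    """Semantic similarity S(w1, w2): cosine of the two contextual feature vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
```

A binary weighting scheme would simply replace ppmi(counts) with (counts > 0).astype(float).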
Semantic Affective Model
From Semantic to Affective Space
Affective model: Extension of [Turney and Littman, 2002],
proposed by [Malandrakis et al. 2013b]
The semantic model is built based on the corpus
Training phase for the semantic-to-affective mapping
Affective lexica are used for the
training, e.g., ANEW [Bradley
and Lang 1999]
[Malandrakis et al. 2014]
Semantic Affective Model
Affective model [Malandrakis et al. ’13]
Requires a small, manually annotated affective lexicon
Assumption: The affective score of a word can be expressed
as a linear combination of the affective ratings of seed words
weighted by semantic similarity and trainable weights αi
υ̂(wj) = α0 + Σ_{i=1..N} αi υ(wi) S(wj, wi)    (1)
υ̂(wj): estimated affective rating of the unknown word wj
w1..N: seed words
υ(wi): affective rating of wi (valence, arousal or dominance)
αi: weight assigned to wi (α0: bias)
S(·): semantic similarity between wj and wi
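Read as code, Eq. (1) is a bias plus a similarity-weighted dot product. The sketch below is illustrative; the weights are assumed to come from the training step described on the next slide:

```python
import numpy as np

def sam_rating(similarities, seed_ratings, weights, bias):
    """Eq. (1): v_hat(w_j) = a_0 + sum_i a_i * v(w_i) * S(w_j, w_i).
    similarities: S(w_j, w_i) between the unknown word and each seed
    seed_ratings: v(w_i), e.g. valence of each seed in [-1, 1]
    weights, bias: a_1..a_N and a_0, learned as in Eq. (2)"""
    similarities = np.asarray(similarities, dtype=float)
    seed_ratings = np.asarray(seed_ratings, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(bias + np.sum(weights * seed_ratings * similarities))
```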
Semantic Affective Model
Semantic - affective mapping
Not all seeds are equally salient
Weights estimation (α0, ..., αN) through supervised learning

⎡ 1  S(w1, w1)υ(w1)  ⋯  S(w1, wN)υ(wN) ⎤   ⎡ α0 ⎤   ⎡ υ(w1) ⎤
⎢ ⋮        ⋮                  ⋮        ⎥ · ⎢ ⋮  ⎥ = ⎢   ⋮   ⎥    (2)
⎣ 1  S(wK, w1)υ(w1)  ⋯  S(wK, wN)υ(wN) ⎦   ⎣ αN ⎦   ⎣ υ(wK) ⎦
A system of K linear equations with N + 1 (N < K) unknown
variables is solved using
Least Squares Estimation (LSE)
Ridge Regression (RR)
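A compact way to solve Eq. (2) for the weights, with both the LSE and the RR variants; the matrix layout follows the slide, while the argument and ridge-parameter names are assumptions:

```python
import numpy as np

def train_sam_weights(similarities, seed_ratings, target_ratings, ridge_lambda=0.0):
    """Solve Eq. (2) for (a_0, ..., a_N).
    similarities: K x N matrix with S(w_k, w_i) for lexicon word k and seed i
    seed_ratings: length-N vector v(w_1), ..., v(w_N)
    target_ratings: length-K vector of known lexicon ratings"""
    K, N = similarities.shape
    # Design matrix of Eq. (2): a bias column of ones plus S(w_k, w_i) * v(w_i) columns
    A = np.hstack([np.ones((K, 1)), similarities * seed_ratings])
    if ridge_lambda > 0.0:                                    # Ridge Regression (RR)
        return np.linalg.solve(A.T @ A + ridge_lambda * np.eye(N + 1), A.T @ target_ratings)
    return np.linalg.lstsq(A, target_ratings, rcond=None)[0]  # Least Squares Estimation (LSE)
```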
Compositional Affective Model
Compositionality
The meaning of the whole is constructed by the meaning of
the parts
New idea: Applied on affective instead of semantic space
Adopt a modifier-head (m − h) structure for word pairs
Assumption: each modifier has unique behavior that can be
learnt in a distributional approach
e.g., green parrot vs. dead parrot
the modifier m modifies the affective content of the head h
Compositional Affective Model
Compositional model (1/2)
The meaning of more complex lexical structures is composed from the meaning of the constituent words
Compositional Affective Model
Compositional model (2/2)
The affective content of the word pair is the modified affective
content of the head
υ̂c(p) = β + W υ̂(h)
β, W capture the modifier's behavior
υ̂(h) is the affective content of the head
Applied on 1D (W, β are scalars) and 3D (W ∈ ℝ³ˣ³, β ∈ ℝ³) affective spaces
Compositionality measure: Mean Squared Error over training
pairs
Measured between compositional and bigram SAM
High MSE → low compositional model appropriateness
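In code, the affine transform above and one plausible way to fit a modifier's (W, β) against bigram-SAM targets might look like the sketch below; the ridge-style fitting is an assumption, not the exact estimator of the thesis:

```python
import numpy as np

def compositional_rating(head_vad, W, beta):
    """v_hat_c(p) = beta + W @ v_hat(h).
    1D case: W and beta are scalars; 3D case: W is 3x3 and beta has length 3 (V, A, D)."""
    return beta + W @ np.asarray(head_vad, dtype=float)

def fit_modifier(head_vads, pair_vads, ridge_lambda=1e-3):
    """Learn one modifier's behaviour (W, beta) from the training pairs that share it.
    head_vads: (n_pairs, 3) affective ratings of the heads
    pair_vads: (n_pairs, 3) target ratings of the pairs (e.g., from the bigram SAM)"""
    H = np.hstack([np.ones((len(head_vads), 1)), np.asarray(head_vads, dtype=float)])
    theta = np.linalg.solve(H.T @ H + ridge_lambda * np.eye(4), H.T @ np.asarray(pair_vads))
    beta, W = theta[0], theta[1:].T          # beta: (3,), W: (3, 3)
    return W, beta
```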
Compositional Affective Model
Fusion of compositional and non-compositional models
Each word pair has a different degree of compositionality
Non-compositional models
1 Unigram SAM (U-SAM): average of words’ affective ratings
2 Bigram (B-SAM): apply SAM directly on word pair
Fusion schemes
Average (Avg) and Weighted average
MSE-based:
Estimate λ(pj) = 0.5 / (1 + e^(−MSE(pj))) for each training pair
Average all λ(pj) to learn the parameter λ(p) of the test pair
Weight the compositional (C) and non-compositional (nC) models based on λ(p), i.e., υf(p) = λ(p)·nC + (1 − λ(p))·C
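A sketch of the MSE-based fusion, with the training-pair MSEs assumed to be computed elsewhere against the bigram SAM:

```python
import numpy as np

def mse_based_fusion(comp_rating, noncomp_rating, train_mses):
    """lambda(p_j) = 0.5 / (1 + exp(-MSE(p_j))) for each training pair,
    averaged into lambda(p); then v_f(p) = lambda(p) * nC + (1 - lambda(p)) * C."""
    lambdas = 0.5 / (1.0 + np.exp(-np.asarray(train_mses, dtype=float)))
    lam = float(lambdas.mean())
    return lam * noncomp_rating + (1.0 - lam) * comp_rating
```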
Sentence level Affective Models
Fusion of words’ affective ratings
Sentence level affective rating approaches
1 Aggregation of the constituent words’ affective ratings
Average
Weighted Average
Maximum absolute affective rating
2 Classification based on affective features
Statistics of words’ affective ratings
POS-tag grouping
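The aggregation schemes in the first group can be written directly; the weighting used for the weighted average below is an assumption (each word weighted by the magnitude of its own rating), not necessarily the thesis' choice:

```python
import numpy as np

def sentence_rating(word_ratings, scheme="average"):
    """Aggregate word-level affective ratings (e.g., SAM valence) into a sentence score."""
    r = np.asarray(word_ratings, dtype=float)
    if scheme == "average":
        return float(r.mean())
    if scheme == "weighted_average":            # assumed weighting: |rating| of each word
        w = np.abs(r)
        return float((w * r).sum() / (w.sum() + 1e-12))
    if scheme == "max_abs":                     # keep the single most extreme word
        return float(r[np.argmax(np.abs(r))])
    raise ValueError(f"unknown scheme: {scheme}")
```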
Sentence level Affective Models
Tweester: Semantic affective model system
Two-step feature selection
Naive Bayes tree classifier
Semantic - Affective model
Experimental Procedure
Goal
Estimate Valence, Arousal and Dominance scores of words in
multiple languages (English, German, Greek, Portuguese, Spanish)
Semantic similarity computation
Words (W) and character n-grams contextual features
Binary (B) and PPMI weighting schemes
Fusion: combine different types of contextual feature vectors
Evaluation datasets
The affective lexica of each language
10-fold cross validation: 90% train and 10% test
Evaluation Metrics: Pearson Correlation, Binary classification
accuracy (positive vs. negative values)
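The evaluation loop can be sketched as follows; train_fn and predict_fn are hypothetical placeholders standing in for fitting the semantic-affective mapping and applying Eq. (1):

```python
import numpy as np
from scipy.stats import pearsonr

def evaluate_10fold(words, ratings, train_fn, predict_fn, seed=0):
    """10-fold cross validation over an affective lexicon (90% train / 10% test),
    reporting Pearson correlation and binary (positive vs. negative) accuracy."""
    ratings = np.asarray(ratings, dtype=float)
    folds = np.array_split(np.random.default_rng(seed).permutation(len(words)), 10)
    corrs, accs = [], []
    for k, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        model = train_fn([words[i] for i in train], ratings[train])
        pred = np.array([predict_fn(model, words[i]) for i in test])
        corrs.append(pearsonr(pred, ratings[test])[0])
        accs.append(np.mean((pred > 0) == (ratings[test] > 0)))
    return float(np.mean(corrs)), float(np.mean(accs))
```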
Semantic - Affective model
Valence performance as a function of the seeds
Valence correlation and classification accuracy
Performance as a function of the seeds
Valence evaluation of five languages
[Figure: Valence correlation (left) and classification accuracy (right) as a function of the number of seeds (0-600) for English, Greek, German, Portuguese and Spanish]
Semantic - Affective model
Comparison of affective dimensions
Valence (a), Arousal (b), Dominance (c) classification accuracy
[Figure: classification accuracy as a function of the number of seeds (0-600) for English, Greek, German, Portuguese and Spanish; panels: (a) Valence, (b) Arousal, (c) Dominance]
Semantic - Affective model
Comparison of RR and LSE
[Figure: Arousal correlation (left) and classification accuracy (right) as a function of the number of seeds (10-900) for Spanish and Greek, comparing RR and LSE]
Using RR with the appropriate λ
Performance stays robust for a large number of seeds
RR improves performance of Greek and Spanish on Arousal
Semantic - Affective model
Valence classification accuracy for 600 seeds
PPMI works better than binary
Character n-grams work equally well with words
Concatenating different contextual vectors does not improve
the performance
Sem. Similarity English Greek Spanish Portuguese German
W-B 86.9 84.3 85.9 89.3 77.1
W-PPMI 90.9 87.6 85.3 90.8 85.2
4gram-PPMI 89.8 87.5 87.7 87.4 82.6
W/4gram-PPMI 90.5 87.2 87.9 89.3 83.0
Weighting scheme is the most important parameter
English achieves highest performance
German achieves highest performance increase
Char. 4-gram-PPMI works almost always better than W-B
Compositional Affective Model
Experimental procedure
Goal
Estimate Valence scores of word pairs employing compositional
phenomena
Movie domain word pairs
1009 Adjective-Noun (AN) and 357 Noun-Noun (NN) pairs
Training corpus: 116M web snippets
Extra training on fusion schemes for weights estimation
Compositional Affective Model
Classification Accuracy for AN and NN word pairs
[Figure: classification accuracy (%) of U-SAM, B-SAM, 1D, 3D, Avg, W.Avg and MSE-based models on NN and AN word pairs, with the corresponding chance levels]
Compositional models work better than B-SAMs but worse
than U-SAMs
Highest performance achieved for fusion of compositional and
non-compositional models
Small differences between 1D and 3D models
Sentence level affective models
Evaluation on News Headlines
Valence estimation of 1000 news headlines aggregating
affective ratings
Affective Model: Classification Accuracy (%), Content Words / All Words
Chance: 52.6
Average: 72.4 / 70.9
Weighted Average: 71.6 / 73.1
Maximum absolute valence: 67.0 / 66.4
Sentence level affective models
Evaluation on Movie Subtitles
Valence estimation of movie subtitles from 12 movies
Annotate subtitles on Valence through Crowdsourcing
Leave-one-movie-out scheme
Average performance for all the movies as a function of the
seeds
[Figure: classification accuracy (left) and correlation (right) on the movie subtitles dataset as a function of the number of seeds (10-800)]
How do the sentence level models perform on real data?
Twitter (written text)
Polarity detection task (positive vs. negative tweets)
Classifier with affective features trained on tweets
Evaluation metric: average recall of positive, negative class ρ
System ρ
Baseline 0.821
LYS (Spain) 0.791
Amazon 0.784
Spoken Dialogue (transcriptions of speech)
The same utterance can be expressed with different emotion
Affective text models usually don’t work for short utterances
Moderate performance is reached for larger utterances of real
dialogues
Performance improves when fusing with speech system
Can SAM be applied on a language with no affective lexicon?
1 Create a new affective lexicon
2 Use cross-language modeling
Translate the words of an already existing affective lexicon
Use the other language’s affective ratings
[Figure: Valence classification accuracy as a function of the number of seeds (0-600) for Portuguese, comparing the monolingual model with cross-language models using Greek, English, Spanish, or all three as source languages (S) and Portuguese as target (T)]
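Option 2 amounts to building the seed lexicon by translation; a minimal sketch, where translate is a hypothetical bilingual-dictionary lookup:

```python
def cross_language_seeds(source_lexicon, translate):
    """Translate the words of an existing affective lexicon and reuse their
    ratings as seeds for the target language (cross-language SAM)."""
    target_seeds = {}
    for word, rating in source_lexicon.items():
        translated = translate(word)           # hypothetical dictionary lookup; may return None
        if translated is not None:
            target_seeds[translated] = rating  # keep the source-language rating
    return target_seeds
```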
Conclusions
Affective models for emotion detection of various granularity
lexical units
We showed that SAM for words
Is language and affective dimension independent
Performance depends on the weights estimation method
We showed that cross-language SAM performs equally well
Compositional models can be applied on affective space
The nature of the written data determines the performance of
the sentence level model
Future work
Identify parameters that define compositionality
Employ compositional semantics in the compositional model
Ambiguous interactions between the words of the word pair
Incorporate morphological information in different languages’
SAMs
Compositional models for sentences
References
Malandrakis et al. 2013: N. Malandrakis, A. Potamianos, E. Iosif and S. Narayanan, “Distributional Semantic Models for Affective Text Analysis”, IEEE Transactions on Audio, Speech and Language Processing, 2013.
Malandrakis et al. 2014: N. Malandrakis, A. Potamianos, K. J. Hsu, K. N. Babeva, M. C. Feng, G. C. Davison and S. Narayanan, “Affective Language Model Adaptation Via Corpus Selection”, in Proceedings of ICASSP, 2014.
Turney and Littman 2002: P. Turney and M. L. Littman, “Unsupervised Learning of Semantic Orientation from a Hundred-Billion-Word Corpus”, Technical report ERC-1094 (NRC 44929), National Research Council of Canada, 2002.
Mitchell and Lapata 2008: J. Mitchell and M. Lapata, “Vector-based models of semantic composition”, in Proceedings of ACL, pages 236-244, 2008.
Mitchell and Lapata 2010: J. Mitchell and M. Lapata, “Composition in distributional models of semantics”, Cognitive Science, 34(8), 2010.
Baroni and Zamparelli 2010: M. Baroni and R. Zamparelli, “Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space”, in Proceedings of EMNLP, 2010.
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 49/49

Contenu connexe

Tendances

32. group presentation project
32. group presentation project32. group presentation project
32. group presentation projectIECP
 
Partial Models: Towards Modeling and Reasoning with Uncertainty
Partial Models: Towards Modeling and Reasoning with UncertaintyPartial Models: Towards Modeling and Reasoning with Uncertainty
Partial Models: Towards Modeling and Reasoning with UncertaintyMichalis Famelis
 
Test adaptation by sumiran khatri
Test adaptation by sumiran khatriTest adaptation by sumiran khatri
Test adaptation by sumiran khatriSumiran Khatri
 
NLP Bootcamp 2018 : Representation Learning of text for NLP
NLP Bootcamp 2018 : Representation Learning of text for NLPNLP Bootcamp 2018 : Representation Learning of text for NLP
NLP Bootcamp 2018 : Representation Learning of text for NLPAnuj Gupta
 
Sentiment Analysis in Twitter with Lightweight Discourse Analysis
Sentiment Analysis in Twitter with Lightweight Discourse AnalysisSentiment Analysis in Twitter with Lightweight Discourse Analysis
Sentiment Analysis in Twitter with Lightweight Discourse AnalysisSubhabrata Mukherjee
 
Psychological test adaptation
Psychological test adaptationPsychological test adaptation
Psychological test adaptationCarlo Magno
 
Guideline test adaptation
Guideline test adaptationGuideline test adaptation
Guideline test adaptationjuliew83
 
Recent Advances in NLP
  Recent Advances in NLP  Recent Advances in NLP
Recent Advances in NLPAnuj Gupta
 
PRONOUN DISAMBIGUATION: WITH APPLICATION TO THE WINOGRAD SCHEMA CHALLENGE
PRONOUN DISAMBIGUATION: WITH APPLICATION TO THE WINOGRAD SCHEMA CHALLENGEPRONOUN DISAMBIGUATION: WITH APPLICATION TO THE WINOGRAD SCHEMA CHALLENGE
PRONOUN DISAMBIGUATION: WITH APPLICATION TO THE WINOGRAD SCHEMA CHALLENGEkevig
 
Anthiil Inside workshop on NLP
Anthiil Inside workshop on NLPAnthiil Inside workshop on NLP
Anthiil Inside workshop on NLPSatyam Saxena
 
Dialogue based Meaning Negotiation
Dialogue based Meaning NegotiationDialogue based Meaning Negotiation
Dialogue based Meaning NegotiationTerry Payne
 

Tendances (14)

32. group presentation project
32. group presentation project32. group presentation project
32. group presentation project
 
Partial Models: Towards Modeling and Reasoning with Uncertainty
Partial Models: Towards Modeling and Reasoning with UncertaintyPartial Models: Towards Modeling and Reasoning with Uncertainty
Partial Models: Towards Modeling and Reasoning with Uncertainty
 
Test adaptation by sumiran khatri
Test adaptation by sumiran khatriTest adaptation by sumiran khatri
Test adaptation by sumiran khatri
 
NLP Bootcamp 2018 : Representation Learning of text for NLP
NLP Bootcamp 2018 : Representation Learning of text for NLPNLP Bootcamp 2018 : Representation Learning of text for NLP
NLP Bootcamp 2018 : Representation Learning of text for NLP
 
Sentiment Analysis in Twitter with Lightweight Discourse Analysis
Sentiment Analysis in Twitter with Lightweight Discourse AnalysisSentiment Analysis in Twitter with Lightweight Discourse Analysis
Sentiment Analysis in Twitter with Lightweight Discourse Analysis
 
MSc Presentation
MSc PresentationMSc Presentation
MSc Presentation
 
Psychological test adaptation
Psychological test adaptationPsychological test adaptation
Psychological test adaptation
 
Guideline test adaptation
Guideline test adaptationGuideline test adaptation
Guideline test adaptation
 
Recent Advances in NLP
  Recent Advances in NLP  Recent Advances in NLP
Recent Advances in NLP
 
PRONOUN DISAMBIGUATION: WITH APPLICATION TO THE WINOGRAD SCHEMA CHALLENGE
PRONOUN DISAMBIGUATION: WITH APPLICATION TO THE WINOGRAD SCHEMA CHALLENGEPRONOUN DISAMBIGUATION: WITH APPLICATION TO THE WINOGRAD SCHEMA CHALLENGE
PRONOUN DISAMBIGUATION: WITH APPLICATION TO THE WINOGRAD SCHEMA CHALLENGE
 
NLP Bootcamp
NLP BootcampNLP Bootcamp
NLP Bootcamp
 
Anthiil Inside workshop on NLP
Anthiil Inside workshop on NLPAnthiil Inside workshop on NLP
Anthiil Inside workshop on NLP
 
Dialogue based Meaning Negotiation
Dialogue based Meaning NegotiationDialogue based Meaning Negotiation
Dialogue based Meaning Negotiation
 
Test adaptation
Test adaptationTest adaptation
Test adaptation
 

En vedette

青云虚拟机部署私有Docker Registry
青云虚拟机部署私有Docker Registry青云虚拟机部署私有Docker Registry
青云虚拟机部署私有Docker RegistryZhichao Liang
 
微软Bot framework简介
微软Bot framework简介微软Bot framework简介
微软Bot framework简介Zhichao Liang
 
An Intelligent Assistant for High-Level Task Understanding
An Intelligent Assistant for High-Level Task UnderstandingAn Intelligent Assistant for High-Level Task Understanding
An Intelligent Assistant for High-Level Task UnderstandingYun-Nung (Vivian) Chen
 
End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Lan...
End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Lan...End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Lan...
End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Lan...Yun-Nung (Vivian) Chen
 
Statistical Learning from Dialogues for Intelligent Assistants
Statistical Learning from Dialogues for Intelligent AssistantsStatistical Learning from Dialogues for Intelligent Assistants
Statistical Learning from Dialogues for Intelligent AssistantsYun-Nung (Vivian) Chen
 
Cascon 2016 Keynote: Disrupting Developer Productivity One Bot at a Time
Cascon 2016 Keynote: Disrupting Developer Productivity One Bot at a TimeCascon 2016 Keynote: Disrupting Developer Productivity One Bot at a Time
Cascon 2016 Keynote: Disrupting Developer Productivity One Bot at a TimeMargaret-Anne Storey
 
Harm van Seijen, Research Scientist, Maluuba at MLconf SF 2016
Harm van Seijen, Research Scientist, Maluuba at MLconf SF 2016Harm van Seijen, Research Scientist, Maluuba at MLconf SF 2016
Harm van Seijen, Research Scientist, Maluuba at MLconf SF 2016MLconf
 
End-to-End Joint Learning of Natural Language Understanding and Dialogue Manager
End-to-End Joint Learning of Natural Language Understanding and Dialogue ManagerEnd-to-End Joint Learning of Natural Language Understanding and Dialogue Manager
End-to-End Joint Learning of Natural Language Understanding and Dialogue ManagerYun-Nung (Vivian) Chen
 

En vedette (8)

青云虚拟机部署私有Docker Registry
青云虚拟机部署私有Docker Registry青云虚拟机部署私有Docker Registry
青云虚拟机部署私有Docker Registry
 
微软Bot framework简介
微软Bot framework简介微软Bot framework简介
微软Bot framework简介
 
An Intelligent Assistant for High-Level Task Understanding
An Intelligent Assistant for High-Level Task UnderstandingAn Intelligent Assistant for High-Level Task Understanding
An Intelligent Assistant for High-Level Task Understanding
 
End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Lan...
End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Lan...End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Lan...
End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Lan...
 
Statistical Learning from Dialogues for Intelligent Assistants
Statistical Learning from Dialogues for Intelligent AssistantsStatistical Learning from Dialogues for Intelligent Assistants
Statistical Learning from Dialogues for Intelligent Assistants
 
Cascon 2016 Keynote: Disrupting Developer Productivity One Bot at a Time
Cascon 2016 Keynote: Disrupting Developer Productivity One Bot at a TimeCascon 2016 Keynote: Disrupting Developer Productivity One Bot at a Time
Cascon 2016 Keynote: Disrupting Developer Productivity One Bot at a Time
 
Harm van Seijen, Research Scientist, Maluuba at MLconf SF 2016
Harm van Seijen, Research Scientist, Maluuba at MLconf SF 2016Harm van Seijen, Research Scientist, Maluuba at MLconf SF 2016
Harm van Seijen, Research Scientist, Maluuba at MLconf SF 2016
 
End-to-End Joint Learning of Natural Language Understanding and Dialogue Manager
End-to-End Joint Learning of Natural Language Understanding and Dialogue ManagerEnd-to-End Joint Learning of Natural Language Understanding and Dialogue Manager
End-to-End Joint Learning of Natural Language Understanding and Dialogue Manager
 

Similaire à thesis_palogiannidi

Interpersonal speaking presentation v4.8 redacted
Interpersonal speaking presentation v4.8 redactedInterpersonal speaking presentation v4.8 redacted
Interpersonal speaking presentation v4.8 redactedBellevue School District
 
Step by step stylistic analysis
Step by step stylistic analysisStep by step stylistic analysis
Step by step stylistic analysisWaldorf Oberberg
 
Tracking Learning: Using Corpus Linguistics to Assess Language Development
Tracking Learning: Using Corpus Linguistics to Assess Language DevelopmentTracking Learning: Using Corpus Linguistics to Assess Language Development
Tracking Learning: Using Corpus Linguistics to Assess Language DevelopmentCALPER
 
Vl3.lab presentation
Vl3.lab presentationVl3.lab presentation
Vl3.lab presentationCameliaN
 
Using ICT to Analyse Language
Using ICT to Analyse LanguageUsing ICT to Analyse Language
Using ICT to Analyse LanguageEka Nathiqo
 
Vl3.culture plex presentation
Vl3.culture plex presentationVl3.culture plex presentation
Vl3.culture plex presentationCameliaN
 
Vl3.cultureplex presentation
Vl3.cultureplex presentationVl3.cultureplex presentation
Vl3.cultureplex presentationCameliaN
 
Vl3.culture plex presentation
Vl3.culture plex presentationVl3.culture plex presentation
Vl3.culture plex presentationCameliaN
 
5810 oral lang anly transcr wkshp (fall 2014) pdf
5810 oral lang anly transcr wkshp (fall 2014) pdf  5810 oral lang anly transcr wkshp (fall 2014) pdf
5810 oral lang anly transcr wkshp (fall 2014) pdf SVTaylor123
 
Improving Communications With Soft Skill And Dialogue Simulations
Improving Communications With Soft Skill And Dialogue SimulationsImproving Communications With Soft Skill And Dialogue Simulations
Improving Communications With Soft Skill And Dialogue SimulationsEnspire Learning
 
Liberty university coms 101 quiz 4 complete solutions correct answers key
Liberty university coms 101 quiz 4 complete solutions correct answers keyLiberty university coms 101 quiz 4 complete solutions correct answers key
Liberty university coms 101 quiz 4 complete solutions correct answers keySong Love
 
Gadgets pwn us? A pattern language for CALL
Gadgets pwn us? A pattern language for CALLGadgets pwn us? A pattern language for CALL
Gadgets pwn us? A pattern language for CALLLawrie Hunter
 
VCE English Exam Section C Prep
VCE English Exam Section C PrepVCE English Exam Section C Prep
VCE English Exam Section C PrepAmy Gallacher
 
Experiments on Pattern-based Ontology Design
Experiments on Pattern-based Ontology DesignExperiments on Pattern-based Ontology Design
Experiments on Pattern-based Ontology Designevabl444
 
Assessment of oral skills roleplay
Assessment of oral skills roleplayAssessment of oral skills roleplay
Assessment of oral skills roleplayLourdes Pomposo
 
5 generations a lx
5 generations a lx5 generations a lx
5 generations a lxedac4co
 
5810 day 3 sept 20 2014
5810 day 3 sept 20 2014 5810 day 3 sept 20 2014
5810 day 3 sept 20 2014 SVTaylor123
 

Similaire à thesis_palogiannidi (20)

Interpersonal speaking presentation v4.8 redacted
Interpersonal speaking presentation v4.8 redactedInterpersonal speaking presentation v4.8 redacted
Interpersonal speaking presentation v4.8 redacted
 
Presentation ...prose
Presentation ...prosePresentation ...prose
Presentation ...prose
 
Step by step stylistic analysis
Step by step stylistic analysisStep by step stylistic analysis
Step by step stylistic analysis
 
Tracking Learning: Using Corpus Linguistics to Assess Language Development
Tracking Learning: Using Corpus Linguistics to Assess Language DevelopmentTracking Learning: Using Corpus Linguistics to Assess Language Development
Tracking Learning: Using Corpus Linguistics to Assess Language Development
 
Vl3.lab presentation
Vl3.lab presentationVl3.lab presentation
Vl3.lab presentation
 
Using ICT to Analyse Language
Using ICT to Analyse LanguageUsing ICT to Analyse Language
Using ICT to Analyse Language
 
Vl3.culture plex presentation
Vl3.culture plex presentationVl3.culture plex presentation
Vl3.culture plex presentation
 
Vl3.cultureplex presentation
Vl3.cultureplex presentationVl3.cultureplex presentation
Vl3.cultureplex presentation
 
Vl3.culture plex presentation
Vl3.culture plex presentationVl3.culture plex presentation
Vl3.culture plex presentation
 
5810 oral lang anly transcr wkshp (fall 2014) pdf
5810 oral lang anly transcr wkshp (fall 2014) pdf  5810 oral lang anly transcr wkshp (fall 2014) pdf
5810 oral lang anly transcr wkshp (fall 2014) pdf
 
NLP
NLPNLP
NLP
 
Improving Communications With Soft Skill And Dialogue Simulations
Improving Communications With Soft Skill And Dialogue SimulationsImproving Communications With Soft Skill And Dialogue Simulations
Improving Communications With Soft Skill And Dialogue Simulations
 
Liberty university coms 101 quiz 4 complete solutions correct answers key
Liberty university coms 101 quiz 4 complete solutions correct answers keyLiberty university coms 101 quiz 4 complete solutions correct answers key
Liberty university coms 101 quiz 4 complete solutions correct answers key
 
Gadgets pwn us? A pattern language for CALL
Gadgets pwn us? A pattern language for CALLGadgets pwn us? A pattern language for CALL
Gadgets pwn us? A pattern language for CALL
 
NLP
NLPNLP
NLP
 
VCE English Exam Section C Prep
VCE English Exam Section C PrepVCE English Exam Section C Prep
VCE English Exam Section C Prep
 
Experiments on Pattern-based Ontology Design
Experiments on Pattern-based Ontology DesignExperiments on Pattern-based Ontology Design
Experiments on Pattern-based Ontology Design
 
Assessment of oral skills roleplay
Assessment of oral skills roleplayAssessment of oral skills roleplay
Assessment of oral skills roleplay
 
5 generations a lx
5 generations a lx5 generations a lx
5 generations a lx
 
5810 day 3 sept 20 2014
5810 day 3 sept 20 2014 5810 day 3 sept 20 2014
5810 day 3 sept 20 2014
 

thesis_palogiannidi

  • 1. Affective Analysis and Modeling of Spoken Dialogue Transcripts Thesis presentation Elisavet Palogiannidi Committee Alexandros Potamianos (supervisor) Polychronis Koutsakis (co-supervisor) Aikaterini Mania School of Electronic and Computer Engineering Technical University of Crete Chania, Crete 11 July 2016
  • 2. Introduction Affective models Experiments and Results Q&A Conclusions What if there was no emotion? Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 2/49
  • 3. Introduction Affective models Experiments and Results Q&A Conclusions What if there was no emotion? Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 3/49
  • 4. Introduction Affective models Experiments and Results Q&A Conclusions What if there was no emotion? Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 4/49
  • 5. Introduction Affective models Experiments and Results Q&A Conclusions What if there was no emotion? Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 5/49
  • 6. Introduction Affective models Experiments and Results Q&A Conclusions What if there was no emotion? Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 6/49
  • 7. Introduction Affective models Experiments and Results Q&A Conclusions What if there were no computers? Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 7/49
  • 8. Introduction Affective models Experiments and Results Q&A Conclusions What if there were no computers? Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 8/49
  • 9. Introduction Affective models Experiments and Results Q&A Conclusions What is the relationship between computers and emotions? Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 9/49
  • 10. Introduction Affective models Experiments and Results Q&A Conclusions What is all about? Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 10/49
  • 11. Introduction Affective models Experiments and Results Q&A Conclusions Outline 1 Introduction Motivation Emotion Contributions 2 Affective models Semantic Affective Model Compositional Affective Model Sentence level Affective Models 3 Experiments and Results Semantic - Affective model Compositional Affective Model Sentence level affective models 4 Q&A 5 Conclusions Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 11/49
  • 12. Introduction Affective models Experiments and Results Q&A Conclusions Outline 1 Introduction Motivation Emotion Contributions 2 Affective models Semantic Affective Model Compositional Affective Model Sentence level Affective Models 3 Experiments and Results Semantic - Affective model Compositional Affective Model Sentence level affective models 4 Q&A 5 Conclusions Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 12/49
  • 13. Introduction Affective models Experiments and Results Q&A Conclusions Motivation Emotion detection from text “Emotion is perceived in text and it can be elicited by its content and form” Goal:Assign continuous high quatlity affective scores on various granularity lexical tokens, using semantic and affective features, for multiple languages Motivation: “Semantic similarity implies affective similarity” Affective text labelling at the core of many applications Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 13/49
  • 14. Introduction Affective models Experiments and Results Q&A Conclusions Motivation Applications Affective text applications Sentiment analysis of Social Media, news, product reviews Emotion detection on spoken dialogue Multimodal applications Semantic affective model (SAM) [Malandrakis et al. 2013] Has been applied to tweets, sms and news headlines Is applicable to words or n-grams and numerous dimensions Valence, Arousal, Dominance, Concreteness, Imagability, Familiarity, Gender Ladenness We focus on the prediction of Valence, Arousal, Dominance Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 14/49
  • 15. Introduction Affective models Experiments and Results Q&A Conclusions Emotion Continuous Affective space Introduction • Goals: 1) Create an emotional resource for the Greek language 2) Use it to automatically estimate affective ratings of words • Manually created resources have low language coverage (about 1K words) • Computational models are used to expand manually created affective lexica Affective (Emotional) Dimensions Valence Arousal Dominance Negative to positive Calming to exciting Controlled to controller Valence-Arousal Distributions Across Languages • Valence-Arousal distributions for different languages affective lexica Greek affective lexicon ratings V-shape across languages 0.25 0.5 0.75 1 Arousal flirtation treasure friend happy laugh victory poster slave sadness pillow syphilis anger commit suicide failure −1 −0.5 0 0.5 1 0.25 0.5 0.75 1 0.5 0.75 1 usal L Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 15/49
  • 16. Introduction Affective models Experiments and Results Q&A Conclusions Contributions Annotated Resources: Greek ANEW We created the first Greek Affective Lexicon Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 16/49
  • 17. Introduction Affective models Experiments and Results Q&A Conclusions Contributions Models for multiple languages We extended SAM to multiple languages We improved the mapping from semantic to affective space We tried various contextual features and weighting schemes Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 17/49
  • 18. Introduction Affective models Experiments and Results Q&A Conclusions Contributions Compositional Affective models The meaning of complex lexical structures p is composed by the meaning of the constituent words α, β Compositional approaches in vector-based semantics: Composition of semantic representation of the phrase’s constituent words Combine by addition and multiplication [Mitchell and Lapata., 2008; Mitchell and Lapata, 2010] .[Baroni and Zamparelli.,2010] compositional approach based on POS tags We assume that composition occurs in the affective space, Combine affective ratings and not semantic representation of constituent words Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 18/49
  • 19. Introduction Affective models Experiments and Results Q&A Conclusions Contributions Sentiment Analysis in Twitter We achieved state of the art performance winning a word wide competition ................... Semantic Affective system (Baseline) . • Tools: POS-tagging, multiword expression, hashtag expan – Semantic similarity implies affective similarity: SAM “Distr Semantic Models for Affective Text Analysis, Malandrakis et al. 2013” • Goal:estimate the affect of word pairs mor curately than the non-compositional models • Compositionality: the meaning of the w is constructed form the meaning of the part • Novelty: Applied on affective space • Adopt modifier-head structure: p = m.h • E.g., : p=“green parrot” and p=“dead par – m : green/dead & h : parrot – m modifies the affect of h Continuous Affective spaces • Valence - Arousal - Dominance Semantic Affective Model (SAM Semantic similarity implies affective similarity tributional Semantic Models for Affective Text Analysis, Malandrakis et a ˆυ(tj) = a0 + N∑ i=1 aiυ(wi)S(tj, wi) • ˆυ(tj): the affective rating of the unknown t tj, w1..N: the seeds, υ(wi) and ai: the affe rating and the weight of wi, a0: the bias, semantic similarity between tokens Each modifie unique bahavi Applied on words & word pairs! number of seedsaffective rating of the unknown token bias weights assigned to seeds Semantic similarity between tokens affective ratings of seeds • Two step feature selection, Naive Bayes (NB) tree classifi .... Topic Modeling - based System (TM) . • Adapt semantic space on each tweet • LDA → detect topics (16)→ split corpus → .............................. In Subtask is used as f . Subtask B at SemEval 2016 Task 4 Sentiment Analysis in Twitter using Semantic-Affective Model Ad Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 19/49
  • 20. Introduction Affective models Experiments and Results Q&A Conclusions Contributions Publications 1 Elisavet Palogiannidi, Elias Iosif, Polychronis Koutsakis and Alexandros Potamianos, “Valence, Arousal and Dominance Estimation for English, German, Greek, Portuguese and Spanish Lexica using Semantic Models”, in Proceedings of Interspeech, September 2015. 2 Elisavet Palogiannidi, Elias Iosif, Polychronis Koutsakis and Alexandros Potamianos “Affective lexicon creation for the Greek language”, in Proceedings of the 10th edition of the Language Resources and Evaluation Conference (LREC) 2016. 3 Elisavet Palogiannidi, Polychronis Koutsakis and Alexandros Potamianos, “A semantic-affective compositional approach for the affective labelling of adjective-noun and noun-noun pairs”, in Proceedings of WASSA 2016. 4 Elisavet Palogiannidi, Athanasia Kolovou, Fenia Christopoulou, Filippos Kokkinos, Elias Iosif, Nikolaos Malandrakis, Harris Papageorgiou , Shrikanth Narayanan and Alexandros Potamianos, “Tweester: Sentiment analysis in twitter using semantic-affective model adaptation”, in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval) 2016. 5 Jose Lopes, Arodami Chorianopoulou, Elisavet Palogiannidi, Helena Moniz, Alberto Abad, Katerina Louka, Elias Iosif and Aleandros Potamianos “The SpeDial Datasets: Datasets for Spoken Dialogue Systems Analytics”, in Proceedings of the 10th edition of the Language Resources and Evaluation Conference (LREC) 2016. 6 Spiros Georgiladakis, Georgia Athanasopoulou, Raveesh Meena, Jose Lopes, Arodami Chorianopoulou, Elisavet Palogiannidi, Elias Iosif, Gabriel Skantze and Alexandros Potamianos “Root Cause Analysis of Miscommunication Hotspots in Spoken Dialogue Systems”, in Proceedings of Interspeech 2016 (to appear). Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 20/49
  • 21. Introduction Affective models Experiments and Results Q&A Conclusions Outline 1 Introduction Motivation Emotion Contributions 2 Affective models Semantic Affective Model Compositional Affective Model Sentence level Affective Models 3 Experiments and Results Semantic - Affective model Compositional Affective Model Sentence level affective models 4 Q&A 5 Conclusions Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 21/49
  • 22. Introduction Affective models Experiments and Results Q&A Conclusions Semantic Affective Model Semantic models Building block for machine learning in NLP Corpus based approach: Distributional Semantic Models (DSM) Semantic information extracted from word frequencies (co-occurence counts, context vectors) Context based semantic similarities “Similarity of context implies similarity of meaning” [Harris ’54] Contextual windows that contain words or character n-grams Binary or PPMI weighting scheme Semantic similarity between two words: cosine of their contextual feature vectors Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 22/49
  • 23. Introduction Affective models Experiments and Results Q&A Conclusions Semantic Affective Model From Semantic to Affective Space Affective model: Extension of [Turney and Littman, 2002], proposed by [Malandrakis et al. 2013b] The semantic model is built, based on the corpus Training phase for the semantic to the affective mapping Affective lexica are used for the training, e.g., ANEW [Bradley and Lang 1999] [Malandrakis et al. 2014] Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 23/49
  • 24. Introduction Affective models Experiments and Results Q&A Conclusions Semantic Affective Model Affective model [Malandrakis et al. ’13] Requires a small, manually annotated affective lexicon Assumption: The affective score of a word can be expressed as a linear combination of the affective ratings of seed words weighted by semantic similarity and trainable weights $\alpha_i$
$\hat{v}(w_j) = \alpha_0 + \sum_{i=1}^{N} \alpha_i\, v(w_i)\, S(w_j, w_i)$   (1)
$\hat{v}(w_j)$: estimated affective rating of the unknown word $w_j$
$w_{1..N}$: seed words
$v(w_i)$: affective rating of $w_i$ (valence, arousal or dominance)
$\alpha_i$: weight assigned to $w_i$ ($\alpha_0$: bias)
$S(\cdot)$: semantic similarity between $w_j$ and $w_i$
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 24/49
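A minimal sketch of Eq. (1): the rating of an unseen word is a bias plus similarity-weighted seed contributions. The seed words, their valence ratings, the learned weights and the placeholder similarity function are illustrative assumptions.

```python
# Sketch of Eq. (1): affective rating of an unseen word from seed words.
import numpy as np

seeds = ["love", "death", "sunshine"]
v_seed = np.array([0.9, -0.8, 0.7])     # valence ratings of the seeds
alpha = np.array([0.1, 0.5, 0.4, 0.3])  # a0 (bias) followed by a1..aN

def similarity(word, seed):
    # Placeholder for the DSM cosine similarity of the previous slide.
    return 0.5

def affective_score(word):
    sims = np.array([similarity(word, s) for s in seeds])
    return alpha[0] + np.sum(alpha[1:] * v_seed * sims)

print(affective_score("holiday"))
```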
  • 25. Introduction Affective models Experiments and Results Q&A Conclusions Semantic Affective Model Semantic - affective mapping Not all seeds are equally salient Weights estimation ($\alpha_0 \cdots \alpha_N$) through supervised learning
$$\begin{bmatrix} 1 & S(w_1, w_1)v(w_1) & \cdots & S(w_1, w_N)v(w_N) \\ \vdots & \vdots & & \vdots \\ 1 & S(w_K, w_1)v(w_1) & \cdots & S(w_K, w_N)v(w_N) \end{bmatrix} \cdot \begin{bmatrix} \alpha_0 \\ \vdots \\ \alpha_N \end{bmatrix} = \begin{bmatrix} v(w_1) \\ \vdots \\ v(w_K) \end{bmatrix} \quad (2)$$
A system of K linear equations with N + 1 (N < K) unknown variables is solved using
Least Squares Estimation (LSE)
Ridge Regression (RR)
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 25/49
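A minimal sketch of solving the system in Eq. (2) with both LSE and Ridge Regression; the similarity matrix and the lexicon ratings below are random stand-ins for real data.

```python
# Sketch of Eq. (2): estimating the seed weights a0..aN by LSE and RR.
import numpy as np

rng = np.random.default_rng(0)
K, N = 600, 300                     # training words, seed words
S = rng.uniform(0, 1, (K, N))       # S(w_k, w_i): semantic similarities
v_seed = rng.uniform(-1, 1, N)      # affective ratings of the seeds
v_train = rng.uniform(-1, 1, K)     # target ratings of the K training words

# Design matrix: a bias column plus similarity-weighted seed ratings
X = np.hstack([np.ones((K, 1)), S * v_seed])

# Least Squares Estimation (LSE)
alpha_lse, *_ = np.linalg.lstsq(X, v_train, rcond=None)

# Ridge Regression (RR); lam is the regularisation strength (here also
# applied to the bias for simplicity)
lam = 1.0
A = X.T @ X + lam * np.eye(N + 1)
alpha_rr = np.linalg.solve(A, X.T @ v_train)
```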
  • 26. Introduction Affective models Experiments and Results Q&A Conclusions Compositional Affective Model Compositionality The meaning of the whole is constructed from the meaning of its parts New idea: applied on the affective instead of the semantic space Adopt a modifier-head (m − h) structure for word pairs Assumption: each modifier has a unique behavior that can be learned in a distributional fashion, e.g., green parrot vs. dead parrot Modifiers m modify the affective content of h Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 26/49
  • 27. Introduction Affective models Experiments and Results Q&A Conclusions Compositional Affective Model Compositional model (1/2) The meaning of more complex lexical structures is composed from the meaning of the constituent words Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 27/49
  • 28. Introduction Affective models Experiments and Results Q&A Conclusions Compositional Affective Model Compositional model (2/2) The affective content of the word pair is the modified affective content of the head
$\hat{v}_c(p) = \beta + W\, \hat{v}(h)$
$\beta$, $W$ encode the modifier's behavior
$\hat{v}(h)$ is the affective content of the head
Applied on 1D ($W$, $\beta$ are scalars) and 3D ($W \in \mathbb{R}^{3\times 3}$, $\beta \in \mathbb{R}^{3}$) affective spaces
Compositionality measure: Mean Squared Error over training pairs, measured between the compositional and the bigram SAM
High MSE → low compositional model appropriateness
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 28/49
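A minimal sketch of the 1D case for a single modifier: the scalar behavior (W, β) is fitted from the head valences and the bigram-SAM valences of its training pairs, and the training MSE serves as the compositionality measure. The training values are illustrative assumptions; in the 3D case W would become a 3×3 matrix and β a 3-dimensional vector.

```python
# Sketch of the 1D compositional model for one modifier m.
import numpy as np

# Valence of the heads seen with m, and the bigram-SAM valence of the
# corresponding (m, h) pairs used as the regression target.
v_head = np.array([0.6, -0.7, 0.3, 0.8])
v_pair = np.array([0.2, -0.9, 0.1, 0.5])

# Fit v_pair ~= beta + W * v_head by least squares
X = np.column_stack([np.ones_like(v_head), v_head])
(beta, W), *_ = np.linalg.lstsq(X, v_pair, rcond=None)

def compositional_valence(head_valence):
    return beta + W * head_valence

# Compositionality measure: high MSE -> the compositional model fits poorly
mse = np.mean((compositional_valence(v_head) - v_pair) ** 2)
```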
  • 29. Introduction Affective models Experiments and Results Q&A Conclusions Compositional Affective Model Fusion of compositional and non-compositional models Each word pair has a different compositionality degree
Non-compositional models
1 Unigram SAM (U-SAM): average of the words' affective ratings
2 Bigram SAM (B-SAM): apply SAM directly on the word pair
Fusion schemes
Average (Avg) and Weighted average
MSE-based: estimate $\lambda(p_j) = \frac{0.5}{1 + e^{-MSE(p_j)}}$ for each training pair
Average all $\lambda(p_j)$ to learn the parameter $\lambda(p)$ of the test pair
Weight compositional (C) and non-compositional (nC) models based on $\lambda(p)$, i.e., $v_\phi(p) = \lambda(p)\, nC + (1 - \lambda(p))\, C$
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 29/49
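A minimal sketch of the MSE-based fusion scheme: each training pair's MSE is mapped to λ(p_j), the values are averaged to get λ(p) for the test pair, and the non-compositional and compositional predictions are blended. All numeric values below are illustrative assumptions.

```python
# Sketch of the MSE-based fusion of compositional and non-compositional models.
import numpy as np

train_mse = np.array([0.05, 0.20, 0.12])     # MSE(p_j) of the training pairs
lam_j = 0.5 / (1.0 + np.exp(-train_mse))     # lambda(p_j) per training pair
lam = lam_j.mean()                           # lambda(p) for the test pair

v_noncomp = 0.35   # e.g. U-SAM or B-SAM valence of the test pair
v_comp = 0.10      # compositional valence of the test pair
v_fused = lam * v_noncomp + (1 - lam) * v_comp
```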
  • 30. Introduction Affective models Experiments and Results Q&A Conclusions Sentence level Affective Models Fusion of words’ affective ratings Sentence level affective rating approaches 1 Aggregation of the constituent words’ affective ratings Average Weighted Average Maximum absolute affective rating 2 Classification based on affective features Statistics of words’ affective ratings POS-tag grouping Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 30/49
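A minimal sketch of the three aggregation schemes for sentence-level valence; the per-word ratings and weights are illustrative assumptions (e.g., content words could receive higher weights).

```python
# Sketch of sentence-level aggregation of word affective ratings.
import numpy as np

v_words = np.array([0.8, -0.2, 0.1, 0.6])   # per-word valence of a sentence
weights = np.array([1.0, 0.5, 0.5, 1.0])    # e.g. content words weighted higher

avg = v_words.mean()
weighted_avg = np.sum(weights * v_words) / weights.sum()
max_abs = v_words[np.argmax(np.abs(v_words))]   # rating with maximum absolute value
```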
  • 31. Introduction Affective models Experiments and Results Q&A Conclusions Sentence level Affective Models Tweester: Semantic affective model system Two-step feature selection Naive Bayes tree classifier Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 31/49
  • 32. Introduction Affective models Experiments and Results Q&A Conclusions Outline 1 Introduction Motivation Emotion Contributions 2 Affective models Semantic Affective Model Compositional Affective Model Sentence level Affective Models 3 Experiments and Results Semantic - Affective model Compositional Affective Model Sentence level affective models 4 Q&A 5 Conclusions Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 32/49
  • 33. Introduction Affective models Experiments and Results Q&A Conclusions Semantic - Affective model Experimental Procedure Goal Estimate Valence, Arousal and Dominance scores of words in multiple languages (English, German, Greek, Portuguese, Spanish) Semantic similarity computation Words (W) and character n-grams contextual features Binary (B) and PPMI weighting schemes Fusion: combine different types of contextual feature vectors Evaluation datasets The affective lexica of each language 10-fold cross validation: 90% train and 10% test Evaluation Metrics: Pearson Correlation, Binary classification accuracy (positive vs. negative values) Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 33/49
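A minimal sketch of the two evaluation metrics, assuming illustrative rating vectors: Pearson correlation between predicted and ground-truth ratings, and binary classification accuracy after thresholding both at zero (positive vs. negative).

```python
# Sketch of the word-level evaluation metrics.
import numpy as np

y_true = np.array([0.7, -0.4, 0.2, -0.8, 0.5])   # human ratings
y_pred = np.array([0.6, -0.1, 0.3, -0.6, 0.4])   # model estimates

pearson = np.corrcoef(y_true, y_pred)[0, 1]
binary_acc = np.mean((y_true > 0) == (y_pred > 0))
```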
  • 34. Introduction Affective models Experiments and Results Q&A Conclusions Semantic - Affective model Valence performance as a function of the seeds Valence correlation and classification accuracy: valence evaluation of five languages
[Figure: Pearson correlation (left) and classification accuracy (right) vs. number of seeds (0–600) for English, Greek, German, Portuguese and Spanish]
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 34/49
  • 35. Introduction Affective models Experiments and Results Q&A Conclusions Semantic - Affective model Comparison of affective dimensions Valence (a), Arousal (b), Dominance (c) classification accuracy
[Figure: classification accuracy vs. number of seeds (0–600) for (a) Valence, (b) Arousal and (c) Dominance, for English, Greek, German, Portuguese and Spanish]
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 35/49
  • 36. Introduction Affective models Experiments and Results Q&A Conclusions Semantic - Affective model Comparison of RR and LSE
[Figure: Arousal correlation (left) and classification accuracy (right) vs. number of seeds (10–900) for Greek and Spanish, with RR and LSE]
Using RR with the appropriate λ, performance stays robust for a large number of seeds
RR improves the performance of Greek and Spanish on Arousal
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 36/49
  • 37. Introduction Affective models Experiments and Results Q&A Conclusions Semantic - Affective model Valence classification accuracy for 600 seeds
PPMI works better than binary
Character n-grams work equally well with words
Concatenating different contextual vectors does not improve the performance
Sem. Similarity     English  Greek  Spanish  Portuguese  German
W-B                 86.9     84.3   85.9     89.3        77.1
W-PPMI              90.9     87.6   85.3     90.8        85.2
4gram-PPMI          89.8     87.5   87.7     87.4        82.6
W/4gram-PPMI        90.5     87.2   87.9     89.3        83.0
The weighting scheme is the most important parameter
English achieves the highest performance
German achieves the highest performance increase
Char. 4-gram-PPMI works almost always better than W-B
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 37/49
  • 41. Introduction Affective models Experiments and Results Q&A Conclusions Compositional Affective Model Experimental procedure Goal Estimate Valence scores of word pairs employing compositional phenomena Movie domain word pairs 1009 Adjective Noun (AN) and 357 Noun Noun (NN) Training corpus: 116M web snippets Extra training on fusion schemes for weights estimation Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 38/49
  • 42. Introduction Affective models Experiments and Results Q&A Conclusions Compositional Affective Model Classification Accuracy for AN and NN word pairs
[Figure: classification accuracy (%) of the U-SAM, B-SAM, 1D, 3D, Avg, W.Avg and MSE-based models for NN and AN pairs, with chance levels shown for NN and AN]
Compositional models work better than B-SAMs but worse than U-SAMs
Highest performance achieved for the fusion of compositional and non-compositional models
Small differences between 1D and 3D models
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 39/49
  • 46. Introduction Affective models Experiments and Results Q&A Conclusions Sentence level affective models Evaluation on News Headlines Valence estimation of 1000 news headlines by aggregating affective ratings
Affective Model             Classification Accuracy (%)
                            Content Words    All words
Chance                      52.6
Average                     72.4             70.9
Weighted Average            71.6             73.1
Maximum absolute valence    67               66.4
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 40/49
  • 47. Introduction Affective models Experiments and Results Q&A Conclusions Sentence level affective models Evaluation on Movie Subtitles Valence estimation of movie subtitles from 12 movies Annotate subtitles on Valence through crowdsourcing Leave-one-movie-out scheme Average performance over all movies as a function of the seeds
[Figure: classification accuracy (left) and correlation (right) vs. number of seeds (10–800) on the movie subtitles dataset]
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 41/49
  • 48. Introduction Affective models Experiments and Results Q&A Conclusions Outline 1 Introduction Motivation Emotion Contributions 2 Affective models Semantic Affective Model Compositional Affective Model Sentence level Affective Models 3 Experiments and Results Semantic - Affective model Compositional Affective Model Sentence level affective models 4 Q&A 5 Conclusions Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 42/49
  • 49. Introduction Affective models Experiments and Results Q&A Conclusions How do the sentence level models perform on real data? Twitter (written text)
Polarity detection task (positive vs. negative tweets)
Classifier with affective features trained on tweets
Evaluation metric ρ: average recall of the positive and negative classes
System          ρ
Baseline        0.821
LYS (Spain)     0.791
Amazon          0.784
Spoken Dialogue (transcriptions of speech)
The same utterance can be expressed with different emotion
Affective text models usually don't work for short utterances
Moderate performance is reached for larger utterances of real dialogues
Performance improves when fusing with the speech system
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 43/49
  • 52. Introduction Affective models Experiments and Results Q&A Conclusions Can SAM be applied to a language with no affective lexicon?
1 Create a new affective lexicon
2 Use cross-language modeling: translate the words of an already existing affective lexicon and use the other language's affective ratings
[Figure: classification accuracy vs. number of seeds (0–600) for Portuguese (monolingual) and for cross-language models with source S: Greek, S: English, S: Spanish, and the combined source S: Greek, English, Spanish, all with target T: Portuguese]
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 44/49
  • 53. Introduction Affective models Experiments and Results Q&A Conclusions Outline 1 Introduction Motivation Emotion Contributions 2 Affective models Semantic Affective Model Compositional Affective Model Sentence level Affective Models 3 Experiments and Results Semantic - Affective model Compositional Affective Model Sentence level affective models 4 Q&A 5 Conclusions Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 45/49
  • 54. Introduction Affective models Experiments and Results Q&A Conclusions Conclusions Affective models for emotion detection of lexical units of various granularity
We showed that SAM for words: is language- and affective-dimension-independent; its performance depends on the weights estimation method
We showed that: cross-language SAM performs equally well; compositional models can be applied on the affective space
The nature of the written data determines the performance of the sentence level model
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 46/49
  • 55. Introduction Affective models Experiments and Results Q&A Conclusions Future work Identify parameters that define compositionality Employ compositional semantics on compositional model Ambiguous interaction between the words of the word pair Incorporate morphological information in different languages’ SAMs Compositional models for sentences Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 47/49
  • 56. Introduction Affective models Experiments and Results Q&A Conclusions Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 48/49
  • 57. Introduction Affective models Experiments and Results Q&A Conclusions References
Malandrakis et al. 2013: N. Malandrakis, A. Potamianos, E. Iosif and S. Narayanan, “Distributional Semantic Models for Affective Text Analysis”, IEEE Transactions on Audio, Speech and Language Processing, 2013.
Malandrakis et al. 2014: N. Malandrakis, A. Potamianos, K. J. Hsu, K. N. Babeva, M. C. Feng, G. C. Davison and S. Narayanan, “Affective Language Model Adaptation via Corpus Selection”, in Proceedings of ICASSP, 2014.
Turney and Littman 2002: P. Turney and M. L. Littman, “Unsupervised Learning of Semantic Orientation from a Hundred-Billion-Word Corpus”, Technical Report ERC-1094 (NRC 44929), National Research Council of Canada, 2002.
Mitchell and Lapata 2008: J. Mitchell and M. Lapata, “Vector-based Models of Semantic Composition”, in Proceedings of ACL, pages 236–244, 2008.
Mitchell and Lapata 2010: J. Mitchell and M. Lapata, “Composition in Distributional Models of Semantics”, Cognitive Science, 34(8), 2010.
Baroni and Zamparelli 2010: M. Baroni and R. Zamparelli, “Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space”, in Proceedings of EMNLP, 2010.
Elisavet Palogiannidi TUC Affective Analysis and Modeling of Spoken Dialogue Transcripts 49/49