Sound Events and Emotions: Investigating the
Relation of Rhythmic Characteristics and Arousal
K. Drossos¹, R. Kotsakis², G. Kalliris², A. Floros¹
¹ Digital Audio Processing & Applications Group, Audiovisual Signal Processing Laboratory, Dept. of Audiovisual Arts, Ionian University, Corfu, Greece
² Laboratory of Electronic Media, Dept. of Journalism and Mass Communication, Aristotle University of Thessaloniki, Thessaloniki, Greece
Agenda
1. Introduction
Everyday life
Sound and emotions
2. Objectives
3. Experimental Sequence
Experimental Procedure’s Layout
Sound corpus & emotional model
Processing of Sounds
Machine learning tests
4. Results
Feature evaluation
Classification
5. Discussion & Conclusions
6. Future work
Everyday life
Stimuli
Many stimuli, e.g.
Visual
Aural
Structured form of sound
General sound
Interaction with stimuli
Reactions
Emotions felt
Sound and emotions
Music and emotions
Music
Structured form of sound
Primarily used to mimic and extend voice characteristics
Enhancing the conveyance of emotion(s)
Relation to emotions studied through (but not only) MIR and MER
Music Information Retrieval
Music Emotion Recognition
Sound and emotions
Music and emotions (cont'd)
Research results & Applications
Accuracy results up to 85%
In large music databases
Opposed to typical "Artist/Genre/Year" classification:
Categorisation according to emotion
Retrieval based on emotion
Preliminary applications for synthesis of music based on emotion
Sound and emotions
From music to sound events
Sound events
Non-structured/general form of sound
Additional information regarding:
Source attributes
Environment attributes
Sound-producing mechanism attributes
Present almost all the time, if not always, in everyday life
Sound and emotions
From music to sound events (cont’d)
Sound events
Used in many applications:
Audio interactions/interfaces
Video games
Soundscapes
Artificial acoustic environments
Trigger reactions
Elicit emotions
Objectives of current study
Premises
Aural stimuli can elicit emotions
Music is a structured form of sound
Sound events are not
Music's rhythm is arousing
Research question
Is the rhythm of sound events arousing too?
Experimental Procedure’s Layout
Layout
Emotional model selection
Annotated sound corpus collection
Processing of sounds
Pre-processing
Feature extraction
Machine learning tests
Sound corpus & emotional model
General Characteristics
Sound Corpus
International Affective Digital Sounds (IADS)
167 sounds
Duration: 6 seconds each
Variable sampling frequency
Emotional model
Continuous model
Sounds annotated on the Arousal-Valence-Dominance dimensions (see sketch below)
Self-Assessment Manikin (S.A.M.) used
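To make the continuous Arousal-Valence-Dominance annotation concrete, here is a minimal Python sketch, assuming (hypothetically) that each IADS clip carries mean S.A.M. ratings on 1-9 scales. The rule the study actually used to split the corpus into its two arousal classes is not given on these slides, so the threshold below is purely illustrative.

```python
# Minimal sketch (not the authors' code): one annotated IADS clip with mean
# S.A.M. ratings. The 1-9 scale and the threshold rule are assumptions made
# only for illustration.
from dataclasses import dataclass

@dataclass
class AnnotatedSound:
    sound_id: str
    arousal: float    # mean S.A.M. arousal rating (assumed 1-9 scale)
    valence: float    # mean S.A.M. valence rating (assumed 1-9 scale)
    dominance: float  # mean S.A.M. dominance rating (assumed 1-9 scale)

def arousal_class(sound: AnnotatedSound, threshold: float = 7.0) -> str:
    """Binary arousal label; the study's actual class-forming rule is not stated here."""
    return "A" if sound.arousal >= threshold else "B"
```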
Processing of Sounds
Pre-processing
General procedure
Normalisation
Segmentation / Windowing
Clustering into two classes
Class A: 24 members
Class B: 143 members
Segmentation/Windowing (sketched below)
Different window lengths
Ranging from 0.8 to 2.0 seconds, in 0.2-second increments
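A minimal Python sketch of the normalisation and fixed-length windowing step, assuming mono signals and non-overlapping windows (overlap is not stated on the slides); it illustrates the idea rather than reproducing the authors' implementation.

```python
import numpy as np

def peak_normalise(x):
    """Scale a mono signal to unit peak amplitude (one possible normalisation)."""
    peak = np.max(np.abs(x))
    return x / peak if peak > 0 else x

def segment(x, sr, win_s):
    """Split a signal into consecutive, non-overlapping windows of win_s seconds."""
    hop = int(win_s * sr)
    return [x[i * hop:(i + 1) * hop] for i in range(len(x) // hop)]

# Window lengths tested in the study: 0.8 s to 2.0 s in 0.2 s steps.
WINDOW_LENGTHS = np.arange(0.8, 2.01, 0.2)
```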
Processing of Sounds
Feature Extraction
Extracted features
Only rhythm-related features
For each segment
Statistical measures applied to each feature
A set of 26 features in total (see table and sketch below)
Extracted features       Statistical measures
Beat spectrum            Mean
Onsets                   Standard deviation
Tempo                    Gradient
Fluctuation              Kurtosis
Event density            Skewness
Pulse clarity
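The slides name the rhythm descriptors but not the extraction toolbox. The sketch below uses librosa as a stand-in to show the idea for a subset of them (onset curve, tempo, event density) together with the per-segment statistics; fluctuation, beat spectrum and pulse clarity are omitted because they have no direct one-line librosa equivalent. Everything here is an illustrative assumption, not the authors' pipeline.

```python
import numpy as np
import librosa
from scipy.stats import kurtosis, skew

def rhythm_descriptors(segment, sr):
    """Rhythm-related descriptors for one segment (illustrative subset only)."""
    onset_env = librosa.onset.onset_strength(y=segment, sr=sr)           # onset strength curve
    onsets = librosa.onset.onset_detect(onset_envelope=onset_env, sr=sr)
    tempo, _ = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
    event_density = len(onsets) / (len(segment) / sr)                    # onsets per second
    return onset_env, tempo, event_density

def summarise(curve):
    """Statistical measures applied to a per-segment feature curve."""
    frames = np.arange(len(curve))
    return {
        "mean": float(np.mean(curve)),
        "std": float(np.std(curve)),
        "gradient": float(np.polyfit(frames, curve, 1)[0]),  # slope over time
        "kurtosis": float(kurtosis(curve)),
        "skewness": float(skew(curve)),
    }
```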
Machine learning tests
Feature Evaluation
Objectives
Most valuable features for arousal detection
Dependencies between selected features and different window lengths
Algorithms used (analogous sketch below)
InfoGainAttributeEval
SVMAttributeEval
WEKA software
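WEKA's InfoGainAttributeEval and SVMAttributeEval rank attributes by information gain and by SVM weights, respectively. The sketch below shows the same ranking idea with scikit-learn's mutual information estimator, as an analogue only; it is not the WEKA code used in the study.

```python
# Analogue of information-gain-based feature ranking (illustrative, not WEKA).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features(X, y, feature_names):
    """Rank rhythm features by mutual information with the binary arousal class."""
    scores = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(scores)[::-1]
    return [(feature_names[i], float(scores[i])) for i in order]
```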
Machine learning tests
Classification
Objectives
Arousal classification based on rhythm
Dependencies between different window lengths and classification results
Algorithms used (analogous sketch below)
Artificial neural networks
Logistic regression
K Nearest Neighbors
WEKA software
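The classification runs in the study were done in WEKA; the scikit-learn sketch below only mirrors the setup (logistic regression, k-NN and a small neural network, scored by cross-validated accuracy) as an illustrative analogue. Classifier hyperparameters and the validation scheme are assumptions, since they are not given on the slides.

```python
# Illustrative analogue of the WEKA classification tests (not the authors' setup).
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

CLASSIFIERS = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "k-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Artificial neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}

def evaluate(X, y, folds=10):
    """Mean cross-validated accuracy for each classifier on the rhythm features."""
    return {name: cross_val_score(clf, X, y, cv=folds, scoring="accuracy").mean()
            for name, clf in CLASSIFIERS.items()}
```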
Feature Evaluation
General results
Two groups of features formed, 13 features in each group
An upper and a lower group
Ranking within each group not always the same
Upper and lower groups remained constant across all window lengths
Feature Evaluation
Upper group
Beat spectrum std
Event density std
Onsets gradient
Fluctuation kurtosis
Beat spectrum gradient
Pulse clarity std
Fluctuation mean
Fluctuation std
Fluctuation skewness
Onsets skewness
Pulse clarity kurtosis
Event density kurtosis
Onsets mean
Classification
General results
Features relatively uncorrelated with each other
Results vary with window length
Accuracy results
Highest accuracy score: 88.37%
Window length: 1.0 second
Algorithm utilised: Logistic regression
Lowest accuracy score: 71.26%
Window length: 1.4 seconds
Algorithm utilised: Artificial neural network
Feature evaluation
Features
Most informative features:
Rhythm's periodicity in the auditory channels (fluctuation)
Onsets
Event density
Beat spectrum
Pulse clarity
Independent of semantic content
Classification results
Regarding the algorithms
LR's minimum score was 81.44%
KNN's minimum score was 82.05%
Regarding the results
Sound events' rhythm affects arousal
Thus, sound rhythm in general affects arousal
Future work
Features & Dimensions
Other features related to arousal
Connection of features with valence
Thank you!

Contenu connexe

Tendances

Radio trailer analysis_sheet 1
Radio trailer analysis_sheet 1Radio trailer analysis_sheet 1
Radio trailer analysis_sheet 1
a2media15b
 
The echo nest-music_discovery(1)
The echo nest-music_discovery(1)The echo nest-music_discovery(1)
The echo nest-music_discovery(1)
Sophia Yeiji Shin
 

Tendances (14)

北原研究室の研究事例紹介:ベーシストの旋律分析とイコライザーの印象分析(Music×Analytics Meetup vol.5 ロングトーク)
北原研究室の研究事例紹介:ベーシストの旋律分析とイコライザーの印象分析(Music×Analytics Meetup vol.5 ロングトーク)北原研究室の研究事例紹介:ベーシストの旋律分析とイコライザーの印象分析(Music×Analytics Meetup vol.5 ロングトーク)
北原研究室の研究事例紹介:ベーシストの旋律分析とイコライザーの印象分析(Music×Analytics Meetup vol.5 ロングトーク)
 
machine learning x music
machine learning x musicmachine learning x music
machine learning x music
 
Poster vega north
Poster vega northPoster vega north
Poster vega north
 
Machine learning for creative AI applications in music (2018 nov)
Machine learning for creative AI applications in music (2018 nov)Machine learning for creative AI applications in music (2018 nov)
Machine learning for creative AI applications in music (2018 nov)
 
Automatic Music Composition with Transformers, Jan 2021
Automatic Music Composition with Transformers, Jan 2021Automatic Music Composition with Transformers, Jan 2021
Automatic Music Composition with Transformers, Jan 2021
 
Radio trailer analysis_sheet 1
Radio trailer analysis_sheet 1Radio trailer analysis_sheet 1
Radio trailer analysis_sheet 1
 
Unit 22 radio script
Unit 22 radio scriptUnit 22 radio script
Unit 22 radio script
 
Learning to Generate Jazz & Pop Piano Music from Audio via MIR Techniques
Learning to Generate Jazz & Pop Piano Music from Audio via MIR TechniquesLearning to Generate Jazz & Pop Piano Music from Audio via MIR Techniques
Learning to Generate Jazz & Pop Piano Music from Audio via MIR Techniques
 
Social Tags and Music Information Retrieval (Part I)
Social Tags and Music Information Retrieval (Part I)Social Tags and Music Information Retrieval (Part I)
Social Tags and Music Information Retrieval (Part I)
 
Machine Learning for Creative AI Applications in Music (2018 May)
Machine Learning for Creative AI Applications in Music (2018 May)Machine Learning for Creative AI Applications in Music (2018 May)
Machine Learning for Creative AI Applications in Music (2018 May)
 
The echo nest-music_discovery(1)
The echo nest-music_discovery(1)The echo nest-music_discovery(1)
The echo nest-music_discovery(1)
 
ISMIR 2019 tutorial: Generating music with generative adverairal networks (GANs)
ISMIR 2019 tutorial: Generating music with generative adverairal networks (GANs)ISMIR 2019 tutorial: Generating music with generative adverairal networks (GANs)
ISMIR 2019 tutorial: Generating music with generative adverairal networks (GANs)
 
The Creative Process Behind Dialogismos I: Theoretical and Technical Consider...
The Creative Process Behind Dialogismos I: Theoretical and Technical Consider...The Creative Process Behind Dialogismos I: Theoretical and Technical Consider...
The Creative Process Behind Dialogismos I: Theoretical and Technical Consider...
 
How to create a podcast
How to create a podcastHow to create a podcast
How to create a podcast
 

En vedette (6)

Using Technology to Create Emotions
Using Technology to Create EmotionsUsing Technology to Create Emotions
Using Technology to Create Emotions
 
TV Drama - Editing
TV Drama - EditingTV Drama - Editing
TV Drama - Editing
 
TV Drama - Sound
TV Drama - SoundTV Drama - Sound
TV Drama - Sound
 
Camera shots in tv drama
Camera shots in tv dramaCamera shots in tv drama
Camera shots in tv drama
 
Sound in TV Drama
Sound in TV DramaSound in TV Drama
Sound in TV Drama
 
Module 17: How Actors Create Emotions And What We Can Learn From It
Module 17: How Actors Create Emotions And What We Can Learn From ItModule 17: How Actors Create Emotions And What We Can Learn From It
Module 17: How Actors Create Emotions And What We Can Learn From It
 

Similaire à Sound Events and Emotions: Investigating the Relation of Rhythmic Characteristics and Arousal

Music 100.rtfd1__#[email protected]!#__button.gif__MACOSX.docx
Music 100.rtfd1__#[email protected]!#__button.gif__MACOSX.docxMusic 100.rtfd1__#[email protected]!#__button.gif__MACOSX.docx
Music 100.rtfd1__#[email protected]!#__button.gif__MACOSX.docx
rosemarybdodson23141
 
groman_shin_finalreport
groman_shin_finalreportgroman_shin_finalreport
groman_shin_finalreport
Daniel Shin
 
Beginning band meeting
Beginning band meetingBeginning band meeting
Beginning band meeting
IMS Bands
 
The Importance Of Enjoying Hi-Res Audio
The Importance Of Enjoying Hi-Res AudioThe Importance Of Enjoying Hi-Res Audio
The Importance Of Enjoying Hi-Res Audio
Kendra Cote
 
2 audiological evaluation
2 audiological evaluation2 audiological evaluation
2 audiological evaluation
Dr_Mo3ath
 
TRI Research Day 2015 Poster-REAL (+EV)
TRI Research Day 2015 Poster-REAL (+EV)TRI Research Day 2015 Poster-REAL (+EV)
TRI Research Day 2015 Poster-REAL (+EV)
Domenica Fanelli
 
Research Proposal
Research ProposalResearch Proposal
Research Proposal
Ryan Pelon
 
In this essay, you will write an explication of the poem I Have B.docx
In this essay, you will write an explication of the poem I Have B.docxIn this essay, you will write an explication of the poem I Have B.docx
In this essay, you will write an explication of the poem I Have B.docx
jaggernaoma
 
Musician instructor talk
Musician instructor talkMusician instructor talk
Musician instructor talk
Mark Rauterkus
 
Acoustics and basic audiometry
Acoustics and basic audiometryAcoustics and basic audiometry
Acoustics and basic audiometry
bethfernandezaud
 
Attention working memory
Attention working memoryAttention working memory
Attention working memory
혜원 정
 

Similaire à Sound Events and Emotions: Investigating the Relation of Rhythmic Characteristics and Arousal (20)

Music 100.rtfd1__#[email protected]!#__button.gif__MACOSX.docx
Music 100.rtfd1__#[email protected]!#__button.gif__MACOSX.docxMusic 100.rtfd1__#[email protected]!#__button.gif__MACOSX.docx
Music 100.rtfd1__#[email protected]!#__button.gif__MACOSX.docx
 
groman_shin_finalreport
groman_shin_finalreportgroman_shin_finalreport
groman_shin_finalreport
 
Beginning band meeting
Beginning band meetingBeginning band meeting
Beginning band meeting
 
Introduction to emotion detection
Introduction to emotion detectionIntroduction to emotion detection
Introduction to emotion detection
 
The Importance Of Enjoying Hi-Res Audio
The Importance Of Enjoying Hi-Res AudioThe Importance Of Enjoying Hi-Res Audio
The Importance Of Enjoying Hi-Res Audio
 
Sound physics
Sound physicsSound physics
Sound physics
 
The Effect of Music on Memory
The Effect of Music on MemoryThe Effect of Music on Memory
The Effect of Music on Memory
 
2 audiological evaluation
2 audiological evaluation2 audiological evaluation
2 audiological evaluation
 
TRI Research Day 2015 Poster-REAL (+EV)
TRI Research Day 2015 Poster-REAL (+EV)TRI Research Day 2015 Poster-REAL (+EV)
TRI Research Day 2015 Poster-REAL (+EV)
 
Research Proposal
Research ProposalResearch Proposal
Research Proposal
 
Acoustic analyzer
Acoustic analyzerAcoustic analyzer
Acoustic analyzer
 
AIOU Course 682 Speech And Hearing Semester Spring 2022 Assignment 1.pptx
AIOU Course 682 Speech And Hearing Semester Spring 2022 Assignment 1.pptxAIOU Course 682 Speech And Hearing Semester Spring 2022 Assignment 1.pptx
AIOU Course 682 Speech And Hearing Semester Spring 2022 Assignment 1.pptx
 
In this essay, you will write an explication of the poem I Have B.docx
In this essay, you will write an explication of the poem I Have B.docxIn this essay, you will write an explication of the poem I Have B.docx
In this essay, you will write an explication of the poem I Have B.docx
 
Automatic Speech Recognition
Automatic Speech RecognitionAutomatic Speech Recognition
Automatic Speech Recognition
 
Musician instructor talk
Musician instructor talkMusician instructor talk
Musician instructor talk
 
Acoustics and basic audiometry
Acoustics and basic audiometryAcoustics and basic audiometry
Acoustics and basic audiometry
 
Attention working memory
Attention working memoryAttention working memory
Attention working memory
 
Audio Engineering Program
Audio Engineering ProgramAudio Engineering Program
Audio Engineering Program
 
Attention and the refinement of auditory expectations: Hafter festschrift talk
Attention and the refinement of auditory expectations: Hafter festschrift talkAttention and the refinement of auditory expectations: Hafter festschrift talk
Attention and the refinement of auditory expectations: Hafter festschrift talk
 
Introduction to Language and Linguistics 003: Introduction to Phonology
Introduction to Language and Linguistics 003: Introduction to PhonologyIntroduction to Language and Linguistics 003: Introduction to Phonology
Introduction to Language and Linguistics 003: Introduction to Phonology
 

Dernier

CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
giselly40
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
vu2urc
 

Dernier (20)

08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 

Sound Events and Emotions: Investigating the Relation of Rhythmic Characteristics and Arousal

  • 1. Sound Events and Emotions: Investigating the Relation of Rhythmic Characteristics and Arousal K. Drossos 1 R. Kotsakis 2 G. Kalliris 2 A. Floros 1 1Digital Audio Processing & Applications Group, Audiovisual Signal Processing Laboratory, Dept. of Audiovisual Arts, Ionian University, Corfu, Greece 2Laboratory of Electronic Media, Dept. of Journalism and Mass Communication, Aristotle University of Thessaloniki, Thessaloniki, Greece
  • 2. Agenda 1. Introduction Everyday life Sound and emotions 2. Objectives 3. Experimental Sequence Experimental Procedure’s Layout Sound corpus & emotional model Processing of Sounds Machine learning tests 4. Results Feature evaluation Classification 5. Discussion & Conclusions 6. Feature work
  • 3. 1. Introduction Everyday life Sound and emotions 2. Objectives 3. Experimental Sequence Experimental Procedure’s Layout Sound corpus & emotional model Processing of Sounds Machine learning tests 4. Results Feature evaluation Classification 5. Discussion & Conclusions 6. Feature work
  • 4. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Interaction with stimuli Reactions Emotions felt
  • 5. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Interaction with stimuli Reactions Emotions felt
  • 6. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Interaction with stimuli Reactions Emotions felt
  • 7. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Interaction with stimuli Reactions Emotions felt
  • 8. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Interaction with stimuli Reactions Emotions felt
  • 9. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Structured form of sound General sound Interaction with stimuli Reactions Emotions felt
  • 10. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Structured form of sound General sound Interaction with stimuli Reactions Emotions felt
  • 11. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions Music Structured form of sound Primary used to mimic and extent voice characteristics Enhancing emotion(s) conveyance Relation to emotions through (but not olny) MIR and MER Music Information Retrieval Music Emotion Recognition
  • 12. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions Music Structured form of sound Primary used to mimic and extent voice characteristics Enhancing emotion(s) conveyance Relation to emotions through (but not olny) MIR and MER Music Information Retrieval Music Emotion Recognition
  • 13. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions Music Structured form of sound Primary used to mimic and extent voice characteristics Enhancing emotion(s) conveyance Relation to emotions through (but not olny) MIR and MER Music Information Retrieval Music Emotion Recognition
  • 14. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions Music Structured form of sound Primary used to mimic and extent voice characteristics Enhancing emotion(s) conveyance Relation to emotions through (but not olny) MIR and MER Music Information Retrieval Music Emotion Recognition
  • 15. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions Music Structured form of sound Primary used to mimic and extent voice characteristics Enhancing emotion(s) conveyance Relation to emotions through (but not olny) MIR and MER Music Information Retrieval Music Emotion Recognition
  • 16. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions Music Structured form of sound Primary used to mimic and extent voice characteristics Enhancing emotion(s) conveyance Relation to emotions through (but not olny) MIR and MER Music Information Retrieval Music Emotion Recognition
  • 17. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions (cont’d) Research results & Applications Accuracy results up to 85% In large music data bases Opposed to typical “Artist/Genre/Year” classification Categorisation according to emotion Retrieval based on emotion Preliminary applications for synthesis of music based on emotion
  • 18. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions (cont’d) Research results & Applications Accuracy results up to 85% In large music data bases Opposed to typical “Artist/Genre/Year” classification Categorisation according to emotion Retrieval based on emotion Preliminary applications for synthesis of music based on emotion
  • 19. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions (cont’d) Research results & Applications Accuracy results up to 85% In large music data bases Opposed to typical “Artist/Genre/Year” classification Categorisation according to emotion Retrieval based on emotion Preliminary applications for synthesis of music based on emotion
  • 20. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions (cont’d) Research results & Applications Accuracy results up to 85% In large music data bases Opposed to typical “Artist/Genre/Year” classification Categorisation according to emotion Retrieval based on emotion Preliminary applications for synthesis of music based on emotion
  • 21. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions (cont’d) Research results & Applications Accuracy results up to 85% In large music data bases Opposed to typical “Artist/Genre/Year” classification Categorisation according to emotion Retrieval based on emotion Preliminary applications for synthesis of music based on emotion
  • 22. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions Music and emotions (cont’d) Research results & Applications Accuracy results up to 85% In large music data bases Opposed to typical “Artist/Genre/Year” classification Categorisation according to emotion Retrieval based on emotion Preliminary applications for synthesis of music based on emotion
  • 23. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events Sound events Non structured/general form of sound Additional information regarding: Source attributes Environment attributes Sound producing mechanism attributes Apparent almost any time, if not always, in everyday life
  • 24. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events Sound events Non structured/general form of sound Additional information regarding: Source attributes Environment attributes Sound producing mechanism attributes Apparent almost any time, if not always, in everyday life
  • 25. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events Sound events Non structured/general form of sound Additional information regarding: Source attributes Environment attributes Sound producing mechanism attributes Apparent almost any time, if not always, in everyday life
  • 26. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events Sound events Non structured/general form of sound Additional information regarding: Source attributes Environment attributes Sound producing mechanism attributes Apparent almost any time, if not always, in everyday life
  • 27. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events Sound events Non structured/general form of sound Additional information regarding: Source attributes Environment attributes Sound producing mechanism attributes Apparent almost any time, if not always, in everyday life
  • 28. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events Sound events Non structured/general form of sound Additional information regarding: Source attributes Environment attributes Sound producing mechanism attributes Apparent almost any time, if not always, in everyday life
  • 29. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events (cont’d) Sound events Used in many applications: Audio interactions/interfaces Video games Soundscapes Artificial acoustic environments Trigger reactions Elicit emotions
  • 30. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events (cont’d) Sound events Used in many applications: Audio interactions/interfaces Video games Soundscapes Artificial acoustic environments Trigger reactions Elicit emotions
  • 31. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events (cont’d) Sound events Used in many applications: Audio interactions/interfaces Video games Soundscapes Artificial acoustic environments Trigger reactions Elicit emotions
  • 32. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events (cont’d) Sound events Used in many applications: Audio interactions/interfaces Video games Soundscapes Artificial acoustic environments Trigger reactions Elicit emotions
  • 33. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events (cont’d) Sound events Used in many applications: Audio interactions/interfaces Video games Soundscapes Artificial acoustic environments Trigger reactions Elicit emotions
  • 34. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Sound and emotions From music to sound events (cont’d) Sound events Used in many applications: Audio interactions/interfaces Video games Soundscapes Artificial acoustic environments Trigger reactions Elicit emotions
  • 35. 1. Introduction Everyday life Sound and emotions 2. Objectives 3. Experimental Sequence Experimental Procedure’s Layout Sound corpus & emotional model Processing of Sounds Machine learning tests 4. Results Feature evaluation Classification 5. Discussion & Conclusions 6. Feature work
  • 36. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Objectives of current study Data Aural stimuli can elicit emotions Music is a structured form of sound Sound events are not Music’s rhythm is arousing Research question Is sound events’ rhythm arousing too?
  • 37. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Objectives of current study Data Aural stimuli can elicit emotions Music is a structured form of sound Sound events are not Music’s rhythm is arousing Research question Is sound events’ rhythm arousing too?
  • 38. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Feature work Objectives of current study Data Aural stimuli can elicit emotions Music is a structured form of sound Sound events are not Music’s rhythm is arousing Research question Is sound events’ rhythm arousing too?
  • 39. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Objectives of current study Data Aural stimuli can elicit emotions Music is a structured form of sound; sound events are not Music’s rhythm affects arousal Research question Does the rhythm of sound events also affect arousal?
  • 40. 1. Introduction Everyday life Sound and emotions 2. Objectives 3. Experimental Sequence Experimental Procedure’s Layout Sound corpus & emotional model Processing of Sounds Machine learning tests 4. Results Feature evaluation Classification 5. Discussion & Conclusions 6. Future work
  • 46. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Experimental Procedure’s Layout Layout Emotional model selection Annotated sound corpus collection Processing of sounds (pre-processing, feature extraction) Machine learning tests
  • 52. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound corpus & emotional model General Characteristics Sound Corpus International Affective Digitized Sounds (IADS) 167 sounds Duration: 6 s each Variable sampling frequency Emotional model Continuous model Sounds annotated on Arousal-Valence-Dominance using the Self-Assessment Manikin (SAM)
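A minimal sketch of how the annotated corpus could be represented for the experiments, assuming the IADS ratings have been exported to a CSV file; the file name and column names below are hypothetical and do not describe the actual IADS distribution format.

```python
# Hypothetical representation of the IADS corpus with its continuous
# Arousal-Valence-Dominance (SAM) ratings; file and column names are assumptions.
import pandas as pd

ratings = pd.read_csv("iads_ratings.csv")  # assumed export of the IADS rating table
# Assumed columns: sound_id, arousal_mean, valence_mean, dominance_mean
assert len(ratings) == 167                 # the corpus contains 167 sounds
print(ratings[["arousal_mean", "valence_mean", "dominance_mean"]].describe())
```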
  • 58. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Processing of Sounds Pre-processing General procedure Normalisation Segmentation/windowing Clustering into two classes (Class A: 24 members; Class B: 143 members) Segmentation/windowing Different window lengths, ranging from 0.8 to 2.0 seconds in 0.2-second increments
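A rough sketch of this pre-processing stage, assuming peak normalisation, non-overlapping windows (the slides do not state the overlap), and a binary arousal split whose threshold is hypothetical; only the 0.8–2.0 s window lengths in 0.2 s steps come from the slides.

```python
# Pre-processing sketch: normalisation and segmentation into fixed-length windows.
# Non-overlapping windows and the arousal threshold are assumptions.
import numpy as np
import librosa

WINDOW_LENGTHS = np.arange(0.8, 2.01, 0.2)  # 0.8, 1.0, ..., 2.0 seconds

def preprocess(path, win_sec, target_sr=44100):
    """Load one clip, peak-normalise it, and cut it into non-overlapping windows."""
    y, sr = librosa.load(path, sr=target_sr, mono=True)  # resample: IADS files vary in sampling rate
    y = y / (np.max(np.abs(y)) + 1e-12)                  # peak normalisation
    win = int(round(win_sec * sr))
    return [y[i * win:(i + 1) * win] for i in range(len(y) // win)]

def arousal_class(arousal_rating, threshold=6.0):
    """Hypothetical binary split into the two classes (A: high arousal, B: low arousal)."""
    return "A" if arousal_rating >= threshold else "B"
```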
  • 63. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Processing of Sounds Feature Extraction Only rhythm-related features, extracted for each segment and summarised with statistical measures, giving a set of 26 features in total Extracted features: beat spectrum, onsets, tempo, fluctuation, event density, pulse clarity Statistical measures: mean, standard deviation, gradient, kurtosis, skewness
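The slide lists MIRtoolbox-style rhythm descriptors, each summarised per sound with the statistical measures above. The sketch below computes only an onset/tempo subset with librosa as a stand-in, purely to illustrate the "per-segment descriptor plus statistical summary" pattern; it is not the feature set used in the study.

```python
# Illustrative per-segment rhythm descriptors and their statistical summaries.
# Only a subset (onsets, tempo, event density) is computed here; beat spectrum,
# fluctuation and pulse clarity would require a MIRtoolbox-like implementation.
import numpy as np
import librosa
from scipy.stats import kurtosis, skew

def rhythm_descriptors(segment, sr):
    """Rhythm-related descriptors for one segment (illustrative subset)."""
    onset_env = librosa.onset.onset_strength(y=segment, sr=sr)
    onsets = librosa.onset.onset_detect(onset_envelope=onset_env, sr=sr)
    tempo, _ = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
    return {
        "onset_count": len(onsets),
        "tempo": float(tempo),
        "event_density": len(onsets) / (len(segment) / sr),  # onsets per second
    }

def summarise(values):
    """Statistical measures over all segments of one sound."""
    v = np.asarray(values, dtype=float)
    return {
        "mean": float(v.mean()),
        "std": float(v.std()),
        "gradient": float(np.mean(np.gradient(v))) if v.size > 1 else 0.0,
        "kurtosis": float(kurtosis(v)),
        "skewness": float(skew(v)),
    }
```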
  • 66. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Machine learning tests Feature Evaluation Objectives Identify the most valuable features for arousal detection Examine dependencies between the selected features and the different window lengths Algorithms used InfoGainAttributeEval SVMAttributeEval WEKA software
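The study ranks attributes with WEKA's InfoGainAttributeEval and SVMAttributeEval. As a rough, non-equivalent Python analogue, the sketch below ranks features by estimated mutual information with the arousal class; it only illustrates the idea of attribute ranking, not WEKA's exact criteria.

```python
# Rough analogue of information-gain-style attribute ranking (not WEKA itself).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features(X, y, feature_names):
    """Rank features by estimated mutual information with the class labels."""
    scores = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(scores)[::-1]          # highest score first
    return [(feature_names[i], float(scores[i])) for i in order]
```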
  • 69. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Machine learning tests Classification Objectives Arousal classification based on rhythm Dependencies between the different window lengths and the classification results Algorithms used Artificial neural networks Logistic regression k-Nearest Neighbors WEKA software
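A sketch of this classification stage using scikit-learn stand-ins for the WEKA learners (an MLP for the artificial neural network, logistic regression, and k-nearest neighbours); the number of folds, all hyper-parameters, and the feature standardisation step are assumptions, not details given in the slides.

```python
# Classification sketch with scikit-learn stand-ins for the WEKA algorithms.
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def evaluate_classifiers(X, y, folds=10):
    """Return mean cross-validated accuracy per classifier (hyper-parameters assumed)."""
    models = {
        "ANN": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
        "Logistic regression": LogisticRegression(max_iter=1000),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
    }
    return {
        name: cross_val_score(make_pipeline(StandardScaler(), model), X, y,
                              cv=folds, scoring="accuracy").mean()
        for name, model in models.items()
    }
```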
  • 70. 1. Introduction Everyday life Sound and emotions 2. Objectives 3. Experimental Sequence Experimental Procedure’s Layout Sound corpus & emotional model Processing of Sounds Machine learning tests 4. Results Feature evaluation Classification 5. Discussion & Conclusions 6. Future work
  • 74. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Feature evaluation Feature Evaluation General results Two groups of 13 features each were formed: an upper and a lower group The ranking within each group was not always the same The upper and lower groups remained constant across all window lengths
  • 75. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Feature evaluation Feature Evaluation Upper group: beat spectrum std, event density std, onsets gradient, fluctuation kurtosis, beat spectrum gradient, pulse clarity std, fluctuation mean, fluctuation std, fluctuation skewness, onsets skewness, pulse clarity kurtosis, event density kurtosis, onsets mean
  • 81. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Classification Classification General results Features were relatively uncorrelated Variations observed across the different window lengths Accuracy results Highest accuracy score: 88.37% (window length: 1.0 second; algorithm: logistic regression) Lowest accuracy score: 71.26% (window length: 1.4 seconds; algorithm: artificial neural network)
  • 82. 1. Introduction Everyday life Sound and emotions 2. Objectives 3. Experimental Sequence Experimental Procedure’s Layout Sound corpus & emotional model Processing of Sounds Machine learning tests 4. Results Feature evaluation Classification 5. Discussion & Conclusions 6. Future work
  • 88. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Feature evaluation Features Most informative features: rhythm’s periodicity in the auditory channel (fluctuation), onsets, event density, beat spectrum, pulse clarity These features are independent of semantic content
  • 92. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Classification results Algorithm-related LR’s minimum score was 81.44% KNN’s minimum score was 82.05% Result-related Sound events’ rhythm affects arousal; thus, sound’s rhythm in general affects arousal
  • 93. 1. Introduction Everyday life Sound and emotions 2. Objectives 3. Experimental Sequence Experimental Procedure’s Layout Sound corpus & emotional model Processing of Sounds Machine learning tests 4. Results Feature evaluation Classification 5. Discussion & Conclusions 6. Future work
  • 95. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Future work Features & Dimensions Other features related to arousal Connection of the features with valence