Identifying Emotions
 Expressed by Mobile Users
through 2D and 3D Gestures
     Céline Coutrix, Nadine Mandran
CNRS & Laboratoire d’Informatique de Grenoble




                      1
Outline

• Context, Problem & Goal
• Field Study
• Results
• Conclusion

                   2
Context
• Users may wish to express emotions
  anytime, anywhere, e.g.:
 • To communicate (to a social network, to
    very close friends, to a doctor, etc.)
 • To adapt interaction to increase
    performance or satisfaction
 • etc.
                      3
Context
• Current techniques for identifying
  emotions
 • facial expressions,
 • thermal imaging of faces,
 • vocal intonations,
 • language,
 • galvanic skin response,
 • electromyography of the face,
 • heart rate,
 • etc.              4
Context

• Current techniques for identifying
  emotions
 ➡ Intrusive
 ➡ Expensive
 ➡ Not suitable while mobile
                     5
Context

• Techniques for identifying emotions
 ➡ Discreet
 ➡ Cheap
 ➡ Mobile
                  ?
                      6
Context

• Techniques for identifying emotions
 ➡ Discreet
 ➡ Cheap
 ➡ Mobile

Smartphones?
                   7
Context

• Techniques for identifying emotions
 ➡ Discreet
 ➡ Cheap
 ➡ Mobile

Gesturing with Smartphones?

                   8
Problem
• Is it possible to identify emotions explicitly
  expressed by mobile users through
  gestures?



                       ?

                       9
Problem
subjects   gestures   sensing        computation
                                                   ?   emotions




                                10
Problem
subjects   gestures   sensing        computation
                                                   ?   emotions




  • Are all subjects similar or are subjects specific?
  • Which gestures to study?
  • What dimension(s) need to be captured?
  • What is the minimum relevant information to
     compute?
                                11
Goal
• Uncover directions for more focused studies
  • Which subjects, gestures, sensing and
     computation will we need to study?

                    ✘ ✔     ✘
                                ✔
                                 ✔



                       15
Outline

• Context, Problem & Goal
• Field Study
• Results
• Conclusion

                   16
Field Study: Goal
•   Are all subjects similar or are subjects specific?
    ➡ Explore several subjects (⊃ specific subjects)
•   Which gestures to study?
    ➡ Explore free gesturing (⊃ specific types of gestures)
•   What dimension(s) need to be captured?
    ➡ Explore accelerometers and touch events
•   What is the minimum relevant information to compute?
    ➡ Explore a wide range of descriptors of gestures
                               17
Field Study: Goal


• Collect gestures and emotions
• Realistic setting


                     18
Field Study: Data Collection
 • 12 adult subjects
 • Expert everyday users of mobile tactile devices
 • Diverse profiles:
  • 6 males and 6 females,
  • Aged 25-47 (mean = 33, s.d. = 7),
  • Areas of work: software, biology, didactics,
     social work, executive assistance, ski instruction
     and mountain guiding
                          19
Field Study: Data Collection
 • Experience Sampling Method
  • Application installed on their personal, usual
     mobile device,
   • Gestures reported by users
    • touchscreen events and accelerometers
   • Emotions reported by users (PAD model)
   • During 15 days (at least)
                        20
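To make the data collection concrete, here is a minimal sketch of how one ESM sample could be structured; the field names, sampling rates and value ranges are hypothetical, not the actual logging format of the study.

    # Sketch of one ESM sample (hypothetical field names), assuming
    # touchscreen events and accelerometer readings are logged for the
    # gesture and the user then self-reports on the PAD scales.
    sample = {
        "subject_id": 7,
        "trigger": "event",             # "event" (self-initiated) or "signal" (daily prompt)
        "touch_events": [               # (t, x, y) touchscreen samples
            (0.000, 120.0, 430.0),
            (0.016, 122.5, 428.0),
        ],
        "acceleration": [               # (t, ax, ay, az) accelerometer samples
            (0.000, 0.02, -0.10, 9.75),
            (0.010, 0.05, -0.08, 9.81),
        ],
        "pad_report": {"pleasure": 0.4, "arousal": -0.2, "dominance": 0.1},
    }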
Field Study: Data Collection
• Both event-contingent sampling and signal-contingent
  sampling:
  • Report when experiencing a particular affective state
  ➡ not to miss intense, seldom-occurring affective
    states
  • Report when receiving a text message (1 per day, at a
    random time)
  ➡ to capture sufficient data per subject
  ➡ to capture more neutral affective states
                            21
Field Study: Data Collection
    Performing a gesture




             22
Field Study: Data Collection
   Reporting an emotion




             23
Field Study: Data Collection
     Reporting context




             24
Field Study: Data Collection
      End of reporting




             25
Field Study: Data Collection

  • 3 interviews in the study:
   • beginning
   • middle
   • end of the two weeks

                        26
Field Study: Dataset
• 188 valid samples (excluding unfinished samples
  and those performed during interviews)
• Subjects reported:
 • between 6 and 27 samples
 • average=15.7 samples
 • median=15.5
 • s.d.=6.84
                    27
Computing Descriptors

• From raw logs

[Figures: raw logs of Sample 156 and Sample 34]
                  28
Computing Descriptors

 • 3D Motion descriptors
  • Acceleration
  • Jerk (derivative)
  • Spectrum

[Figure: Sample 34]

                  29
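As an illustration, a minimal sketch of how jerk and a magnitude spectrum could be derived from one logged acceleration axis; the function name and the use of NumPy are assumptions, not the study's implementation.

    import numpy as np

    def motion_descriptor_inputs(t, acc):
        """Sketch: jerk (numerical derivative of acceleration) and the
        magnitude spectrum of one acceleration axis sampled at times t."""
        t = np.asarray(t, dtype=float)
        acc = np.asarray(acc, dtype=float)
        jerk = np.gradient(acc, t)                      # derivative of acceleration
        dt = float(np.mean(np.diff(t)))                 # assume near-uniform sampling
        spectrum = np.abs(np.fft.rfft(acc - acc.mean()))
        freqs = np.fft.rfftfreq(acc.size, d=dt)
        return jerk, freqs, spectrum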
Computing
    Descriptors

• 3D motion descriptors
 • Raw acceleration
 • High-pass filtered acceleration
 • Low-pass filtered acceleration (spectrum)

                       30
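One possible way to obtain the high-pass and low-pass components of each acceleration axis is sketched below; the Butterworth filter, its order and the cutoff frequency are assumptions, since the slides do not specify the filters used.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def split_acceleration(acc, fs, cutoff_hz=0.5):
        """Sketch: split one acceleration axis (sampled at fs Hz) into a
        low-pass component (slow motion) and a complementary high-pass
        component (fast motion)."""
        acc = np.asarray(acc, dtype=float)
        b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
        low = filtfilt(b, a, acc)
        high = acc - low                                # high-pass residual
        return low, high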
Computing Descriptors


 • 2D Surface descriptors
  • Touches’ level
  • Strokes’ level

[Figure: Sample 34]

                   31
Computing Descriptors

• XY(Z) Projections
• (3D vector length)

[Figure: x and y axes on the device]


                       32
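For reference, the 3D vector length mentioned here is the per-sample Euclidean norm of the three projections, as in this small sketch (NumPy assumed):

    import numpy as np

    def vector_length(ax, ay, az):
        """Sketch: orientation-independent 3D vector length of acceleration,
        computed per sample from its x, y and z projections."""
        return np.sqrt(np.asarray(ax) ** 2 + np.asarray(ay) ** 2 + np.asarray(az) ** 2)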
Computing Descriptors
•   2D surface descriptors

    •   Location
    •   Slope
    •   Speed
    •   Length (strokes)


    •   Number and total length (strokes)
                             33
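A minimal sketch of how the per-stroke and per-gesture 2D surface descriptors could be computed from touch events; a stroke is taken here as the list of (t, x, y) samples between finger-down and finger-up, which is an assumption about the log format.

    import numpy as np

    def stroke_descriptors(stroke):
        """Sketch: location, slope, speed and length of one stroke."""
        t, x, y = (np.asarray(v, dtype=float) for v in zip(*stroke))
        seg = np.hypot(np.diff(x), np.diff(y))          # segment lengths
        return {
            "location_x": x.mean(),
            "location_y": y.mean(),
            "length": seg.sum(),
            "speed": seg.sum() / max(t[-1] - t[0], 1e-6),
            "slope": np.degrees(np.arctan2(y[-1] - y[0], x[-1] - x[0])),
        }

    def gesture_stroke_descriptors(strokes):
        """Sketch: gesture-level descriptors over all strokes."""
        lengths = [stroke_descriptors(s)["length"] for s in strokes]
        return {"n_strokes": len(strokes), "total_length": float(sum(lengths))}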
Computing Descriptors


• Signed values
• Absolute values


                    34
Computing Descriptors

• Minimum
• Maximum
• Amplitude
• Median
• Mean
• Variance
              35
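These six statistics can be applied uniformly to every descriptor time series (and, per the previous slide, to its absolute values as well); a small sketch, with NumPy assumed:

    import numpy as np

    def summary_stats(series, prefix):
        """Sketch: the six summary statistics over one descriptor series."""
        s = np.asarray(series, dtype=float)
        return {
            prefix + "Min": s.min(),
            prefix + "Max": s.max(),
            prefix + "Amplitude": s.max() - s.min(),
            prefix + "Median": float(np.median(s)),
            prefix + "Mean": s.mean(),
            prefix + "Variance": s.var(),
        }

For instance, calling summary_stats on a filtered-acceleration axis with a prefix such as "FAcc" would produce keys reminiscent of the FAccAmplitude... descriptors that appear later in the talk (with the axis suffix handled separately).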
Computing Descriptors

• Spectrum
 • Gap that maximizes the difference between
    most important and less important
    frequencies
 • Number of important frequencies
 • Most important frequency
                    36
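One possible reading of these spectral descriptors is sketched below: sort the spectral magnitudes, treat the largest drop between consecutive sorted magnitudes as the gap separating important from less important frequencies, and count what lies above it. This is an interpretation for illustration, not the authors' exact definition.

    import numpy as np

    def spectrum_descriptors(freqs, magnitudes):
        """Sketch: gap, number of important frequencies, most important frequency."""
        mags = np.asarray(magnitudes, dtype=float)
        freqs = np.asarray(freqs, dtype=float)
        order = np.argsort(mags)[::-1]                  # magnitudes in descending order
        sorted_mags = mags[order]
        drops = sorted_mags[:-1] - sorted_mags[1:]      # drop between consecutive magnitudes
        cut = int(np.argmax(drops))                     # index of the largest drop
        return {
            "gap_between_high_low_spectrum": float(drops[cut]),
            "n_important_frequencies": cut + 1,
            "most_important_frequency": float(freqs[order[0]]),
        }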
From descriptors

• 249 descriptors:
 • Statistically analyzed to find
   • the relevant/irrelevant information
   • the redundant/non-redundant information

                       37
Outline

• Context, Problem & Goal
• Field Study
• Results
• Conclusion

                   38
Results: Goal
•   Are all subjects similar or are subjects specific?
    ➡ Aggregated subjects vs. Specific subjects
•   Which gestures to study?
    ➡ Free gesturing vs. Specific types of gestures
•   What dimension(s) need to be captured?
    ➡ Do we need more sensors?
•   What is the minimum relevant information to compute?
    ➡ Relevant/irrelevant and redundant/non-redundant
      descriptors              39
Results: Aggregated or
   subject-specific?
• For aggregated subjects
   • Correlations low (< |0.40|)
     ➡ little relationship between single
      descriptors and PAD



                       40
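A minimal sketch of the kind of correlation analysis implied here, assuming Pearson's r over all samples with subjects aggregated (the slides do not name the correlation coefficient used):

    import numpy as np
    from scipy.stats import pearsonr

    def descriptor_pad_correlations(descriptor_values, pad_reports):
        """Sketch: correlation of one descriptor with each PAD dimension."""
        d = np.asarray(descriptor_values, dtype=float)
        results = {}
        for dim in ("pleasure", "arousal", "dominance"):
            ratings = np.asarray([r[dim] for r in pad_reports], dtype=float)
            r, p = pearsonr(d, ratings)
            results[dim] = (r, p)                       # coefficient and p-value
        return results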
Results: Aggregated or
   subject-specific?
• For specific subjects




➡ Further work needs to investigate
  correlations for single subjects
                        41
Results: Goal
•   Are all subjects similar or are subjects specific?
    ➡ Aggregated subjects vs. Specific subjects
•   Which gestures to study?
    ➡ Free gesturing vs. Specific types of gestures
•   What dimension(s) need to be captured?
    ➡ Do we need more sensors?
•   What is the minimum relevant information to compute?
    ➡ Relevant/irrelevant and redundant/non-redundant
      descriptors              42
Free vs. Specific Gestures

 • Which gestures to study?
  ➡ Manual exploration of the dataset



                    43
Free vs. Specific Gestures
 • Moving in all directions after holding still




   • expresses high arousal
                        44
Free vs. Specific Gestures
 • Tapping




             45
Free vs. Specific Gestures
 • Tapping: dual model?
  • Number of strokes could follow
     Pleasure and/or Arousal
  • Would make sense
  • Needs confirmation

                     46
Free vs. Specific Gestures
 • Tapping: number of strokes ➡ Pleasure?

[Plot: Pleasure (ZP) vs. number of strokes]
                      47
Free vs. Specific Gestures
 • Tapping: number of strokes ➡ Pleasure or Arousal?




                        50
Results: Goal
•   Are all subjects similar or are subjects specific?
    ➡ Aggregated subjects vs. Specific subjects
•   Which gestures to study?
    ➡ Free gesturing vs. Specific types of gestures
•   What dimension(s) need to be captured?
    ➡ Do we need more sensors?
•   What is the minimum relevant information to compute?
    ➡ Relevant/irrelevant and redundant/non-redundant
      descriptors              51
Results: Need for More Information

• Arousal has the most correlations with descriptors
• Need more information for Pleasure and Dominance
 • Descriptors, sensors, gestures, subjects?

[Bar chart: number of descriptors correlated with Pleasure, Arousal and
 Dominance]
                           52
Results: Goal
•   Are all subjects similar or are subjects specific?
    ➡ Aggregated subjects vs. Specific subjects
•   Which gestures to study?
    ➡ Free gesturing vs. Specific types of gestures
•   What dimension(s) need to be captured?
    ➡ Do we need more sensors?
•   What is the minimum relevant information to compute?
    ➡ Relevant/irrelevant and redundant/non-redundant
      descriptors              53
Results: Relevant and
 Irrelevant Descriptors

• 3D motion descriptors have the most correlations
  to affective dimensions

[Bar chart: number of descriptors correlated to at least one affective
 dimension, 3D motion descriptors vs. 2D surface descriptors]
                               54
Results: Example 3D Motion
         Descriptor
• Amplitude of the length of the derivative of high-pass
  filtered acceleration and Arousal

[Plots: a calm sample vs. an excited sample]

• r = −0.31
                            55
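For illustration, a sketch of how this example descriptor could be assembled from the pieces shown earlier (high-pass filtering, derivative, 3D vector length, amplitude); the implementation details are assumptions.

    import numpy as np

    def amplitude_of_jerk_length(t, high_pass_acc_xyz):
        """Sketch: amplitude of the 3D vector length of the derivative (jerk)
        of high-pass filtered acceleration, over one gesture."""
        t = np.asarray(t, dtype=float)
        a = np.asarray(high_pass_acc_xyz, dtype=float)  # shape (n_samples, 3)
        jerk = np.gradient(a, t, axis=0)                # per-axis derivative
        length = np.linalg.norm(jerk, axis=1)           # 3D vector length per sample
        return float(length.max() - length.min())       # amplitude over the gesture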
Results: Redundant and
Non-redundant Descriptors

• Can we group redundant descriptors to
  increase the relationship between non-redundant
  groups of descriptors and P, A, D?
   ➡ Principal component analysis


                     56
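A minimal sketch of such a principal component analysis over the descriptor matrix (one row per gesture sample, one column per descriptor), using scikit-learn; standardizing the descriptors and keeping 3 components are assumptions for illustration.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def reduce_descriptors(X, n_components=3):
        """Sketch: PCA to expose redundant descriptors (those loading on the
        same component) and to position gesture samples in a reduced space."""
        Xs = StandardScaler().fit_transform(np.asarray(X, dtype=float))
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(Xs)                  # samples in the reduced space
        loadings = pca.components_.T                    # descriptor loadings per component
        return scores, loadings, pca.explained_variance_ratio_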
Results: Redundant and
 Non-redundant Descriptors
• 92 significantly correlated descriptors
• Reduce to the descriptors most correlated to PAD and
  least correlated with each other
 • if we keep 5 descriptors, then we keep
  • 85% of the correlation with descriptors for
      Arousal
   • 104% of the correlation with descriptors for
      Pleasure and Dominance
                         57
Results: Redundant and
     Non-redundant Descriptors

•   5 descriptors

    ‣ FAccAmplitudeY and FAccMinX
    ‣ FAccAmplitudeZ and FAccMinZ
    ‣ GapBetweenHighLowSpectrumY

                          58
Results: Relevant and
non-redundant descriptors
• In particular, some non-redundant groups of
  similar descriptors tend to evolve like Arousal
 • Minima of z projections of acceleration and
    jerk
 • Duration and spectral descriptors on x and
    y projections of raw or low-pass filtered
    acceleration
• Can reduce down to 3 descriptors for Arousal
                       59
Outline

• Context, Problem & Goal
• Field Study
• Results
• Conclusion

                   60
Conclusion
subjects     gestures   sensing        computation
                                                     ?   emotions



     • Are all subjects similar or are subjects
            specific?
           ➡ Specific subjects need to be explored
             for free gesturing



                                  61
Conclusion
subjects   gestures   sensing        computation
                                                   ?   emotions




      • Which gestures to study?
      ➡ Free gesturing, but specific types of
           gestures are promising, e.g. tapping




                                62
Conclusion
subjects   gestures   sensing        computation
                                                   ?   emotions




     • What dimension(s) need to be captured?
      ➡ Touches for particular gestures
      ➡ 3D motion through accelerometers
      ➡ Need for more
                                63
Conclusion
subjects     gestures    sensing        computation
                                                      ?   emotions



     • What is the minimum relevant information
           to compute?
           ➡ 5 descriptors can be enough
           ➡ 3 descriptors can be enough for Arousal

                                   64
Thank you!
Questions?


    65
Identifying Emotions
 Expressed by Mobile Users
through 2D and 3D Gestures
     Céline Coutrix, Nadine Mandran
CNRS & Laboratoire d’Informatique de Grenoble




                     66


Editor's Notes

  3. We started from the idea that users may wish to express emotions or affective states, anytime and anywhere: for example, to communicate emotions to distant others, a social network or very close friends, or doctors; or for the interaction to adapt and increase performance or satisfaction, among other purposes. Just like “static” affective interaction, users may benefit from mobile affective interaction.
  4. So far, researchers have been working on modalities in order to identify affective expressions, for instance facial expressions, thermal imaging, vocal intonations, language, galvanic skin response, electromyography of the face, heart rate, and so on.
  5. All these modalities are great, but in a mobile situation they are either intrusive, like electrodes on the face; expensive, like a thermal camera; or not suitable for mobile use, e.g. too heavy, or not working as well as in static situations, for instance due to difficult lighting conditions or background noise.
  6. So... can we find a technique for identifying emotions that would be discreet, cheap and mobile, i.e. working in any condition?
  7. We took the approach of using smartphones for this. They are not literally cheap, but they are in the sense that we already have them for other purposes and carry them every day and everywhere. And of course, they are made to be mobile.
  8. Then, what modality could be explored on today’s mobile phones that can be discreet? The camera is difficult, since it wouldn’t be usable in a pocket, for example, when you don’t want people to see you using it. So we explored gesturing with the device. Indeed, there is very common hardware on smartphones that can be used for this, for instance accelerometers and touchscreens.
  9. So we investigated the following question: is it possible to identify emotions explicitly expressed by mobile users through gestures? I would like to make clear that we address the problem of identifying to which extent such a link exists between intentional expressive gestures and the emotions users explicitly wish to express. We address neither the identification of intimate emotion nor the implicit gestural activity of a user throughout the day.
  10. The problem is very large and we need to rephrase it.
  15. Researchers have contributed to this problem by: 1) first, exploring surface and motion gestures spontaneously performed by users, but only mapped to commands, not emotions; 2) second, as we said before, exploring modalities to identify affective expressions, but not for a cheap, unintrusive and mobile purpose. So we wish to take a first step towards mobile gestures for affective computing.
  16. Our goal was to uncover directions for more focused studies, to see where there could be a link between gestures and emotions.
  18. In particular, we will try to answer these questions.
  19. In order to test this idea, we designed a field study to collect gestures and emotions and investigate their relationship. We tried to put as few constraints on users as possible, in order to have a realistic dataset.
  21. As gyroscopes were not present in all devices, we chose to leave them out for this study. We could have done a longer study, but as this was the first, early one, we wanted to test the idea before doing a long, time-consuming study.
  24. TODO: justify why not another model of emotions.
  28. Subjects’ variance in reporting, but this is the chance we take to gather realistic data.
  29. Among the 188 gestures reported, 36 are 3D only and 152 are combined 2D/3D gestures. Here are two examples of the kind of data we collected.
  31. With gravity filtered out: x (left-right of the device), y (bottom-top of the device) and z (back-front of the device).
  33. With gravity filtered out: x (left-right of the device), y (bottom-top of the device) and z (back-front of the device).
  35. With gravity filtered out: x (left-right of the device), y (bottom-top of the device) and z (back-front of the device).
  36. With gravity filtered out: x (left-right of the device), y (bottom-top of the device) and z (back-front of the device).
  37. 1) In order to reveal the periodicity of a gesture.
  42. Here the coefficient is -0.31, one of the best coefficients we found, which is rather poor. So we tried to find directions to explore in order to explain these low results and a way to improve them, and we found interesting differences between subjects.
  44. We manually classified the samples.
  45. A first interesting class was the one where the device was moving in all directions after being held still for a moment. We classified these gestures as different from all others because of these two phases. On the contrary, for instance, others have movement in all directions from the beginning to the end of the gesture. With 95% confidence, but with a small number of samples -> needs to be confirmed with a larger number of samples.
  48. If we plot Pleasure against the number of strokes, at first sight it seems noisy.
  49. Looking closer, there is an interesting pattern. TODO: check whether the flat parts could be outliers.
  50. This reminded us of an increase up to a ceiling, but with too many outliers to be promising.
  51. So we tried to explain the outliers. When plotting the number of strokes of tapped gestures against Arousal, we realized they followed another pattern. So it would be interesting to investigate whether tapped gestures indeed follow this mixed model or not, with more than 18 samples. TODO: stress that this is something to test with more samples.
  57. Here is an example of a 3D motion descriptor that is correlated to Arousal: the ... This means that when the change in acceleration increases, arousal increases too. But the coefficient is -0.3, which is low.
  58. So we thought that we should try to group redundant descriptors together, so that we improve the results by considering non-redundant descriptors.
  60. The resulting space has 3 dimensions. These two graphs show two projections of the 3D space: on the right, onto the plane defined by the new axes 1 and 2; on the left, onto the plane defined by the new axes 2 and 3.
  61. Blue vectors show each descriptor’s vector in the space. As we will see, what is interesting here is the length of the vectors and the angle with the axes; this is why Principal Component Analysis represents descriptors as vectors.
  62. In this space, similarly evolving descriptors have angles close to each other, and descriptors evolving differently are perpendicular. So there is no need to consider all of them for a later study, but only one for each angle here; otherwise it will be redundant.
  63. Too much information gets noisy.
  64. Too much information gets noisy.
  65. In addition, the closer to the unit circle a descriptor is, the more it contributed to the linear combination used to build the axes. That means the more it helps explain the differences between gestures. As the projections on axis 1 are the longest, it is where we can locate the gestures the most easily.
  66. For example, the two calm and excited samples we saw before are very far apart on axis one.
  67. In conclusion, in order to characterize a gesture with a minimum number of descriptors in a future study, we should consider the descriptors closest to the unit circle, because these are the ones explaining most of the variation between gestures, AND the ones that have the furthest angles from each other, so that they explain different aspects of the variation and are not redundant.
  69. In this space, the PAD variables can be located along the axes thanks to their known correlations with the descriptors. As we said before, the correlations were low, so the PAD vectors are far from the unit circle.
  70. But there are descriptors that are in the same direction as the affective dimensions. This means that they tend to evolve in a similar way to the affective dimensions. There are 7 groups, which we represented here as colored vectors, whereas descriptors are points. Among these,
  71. when the green ones increase, ZA tends to increase; when the pink ones increase, ZA tends to decrease.