International Journal of Electronic Engineering Research
ISSN 0975 - 6450 Volume 2 Number 3 (2010) pp. 377–381
© Research India Publications
http://www.ripublication.com/ijeer.htm



              Independent Speaker Recognition for
                     Native English Vowels

       G.N. Kodandaramaiah (1), M.N. Giriprasad (2) and M. Mukunda Rao (3)

   (1) HOD, Department of Electronics and Communications Engineering,
       Madanapalli Institute of Technology, Madanapalli, India
   (2) Principal, Jawaharlal Nehru Technological University, Pulivendula, India
   (3) Honorary Research Professor, Biomedical Sciences,
       Sri Ramachandra Medical College & Research Institute, Chennai, India
       E-mail: kodandramaiah@yahoo.com


                                       Abstract

   Vocal tract shape estimation has been the basis for many successful
   automatic speech recognition (ASR) systems. The analytic results presented
   here demonstrate that estimation of vocal tract shape, based on reflection
   coefficients obtained from LPC analysis of speech, is satisfactory and is
   related to the place of articulation of the vowels. We describe a "standard"
   approach to classifying vowels based on formants, the meaningfully
   distinguishable frequency components of human speech. Formant frequencies
   depend on the shape and dimensions of the vocal tract: the vocal tract shape
   is characterized by a set of formant frequencies, and different sounds are
   produced by varying that shape. This property underlies many speech-related
   applications such as speech and speaker recognition. In this work a
   Euclidean distance measure is applied to quantify the similarity or
   dissimilarity between two spoken words after each spoken word has been
   quantized into its codebook.

   Keywords: Speech, Vocal tract, Formants, Euclidean distance.


Introduction
Fig. 1.1 shows the block diagram of independent speaker recognition for vowels.
Let S(n) be the test sample of a vowel; its parameters, the formants F1 and F2,
are then extracted.
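The LPC-based formant extraction step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it computes LPC coefficients with the autocorrelation (Levinson-Durbin) method and estimates F1 and F2 from the angles of the prediction polynomial's complex poles. The LPC order, Hamming window, and 90 Hz low-frequency cutoff are assumed values.

```python
import numpy as np

def lpc_coeffs(x, order):
    """LPC via the autocorrelation method (Levinson-Durbin recursion).
    Returns the prediction polynomial A(z) = [1, a1, ..., a_order]."""
    n = len(x)
    r = [float(np.dot(x[:n - i], x[i:])) for i in range(order + 1)]
    a = [1.0]
    e = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e                        # reflection coefficient
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        e *= (1.0 - k * k)                  # updated prediction error
    return np.array(a)

def estimate_formants(frame, fs, order=10, n_formants=2):
    """Estimate formants (Hz) from the upper-half-plane roots of A(z)."""
    a = lpc_coeffs(frame * np.hamming(len(frame)), order)
    roots = [rt for rt in np.roots(a) if rt.imag > 0]
    freqs = sorted(np.angle(rt) * fs / (2.0 * np.pi) for rt in roots)
    return [f for f in freqs if f > 90.0][:n_formants]  # drop near-DC poles
```

For a voiced frame sampled at, say, 8 kHz, `estimate_formants(frame, 8000)` returns approximations of F1 and F2.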

    The extracted formants are compared with the thresholds of the reference
formants. A Euclidean distance measure is applied to quantify the similarity or
dissimilarity between two spoken words after each spoken word has been quantized
into its codebook. Matching of an unknown vowel is performed by measuring the
Euclidean distance between the feature vector (formants) of the unknown vowel
and the reference model (codebook) of known vowel formants F1 and F2 in the
database. The goal is to find the codebook entry with the minimum distance,
which identifies the unknown vowel (Franti et al., 1997). For example, in the
testing (identification) session, the Euclidean distance between the feature
vector (formants F1, F2) and the codebook entry for each spoken vowel is
calculated, and the vowel with the smallest average minimum distance is picked,
as shown in Eq. (1.1):

               d(x, y) = √[ Σ_{i=1}^{D} w_i (x_i − y_i)² ]                   1.1

where D = 2 is the number of features, x_i is the i-th input feature (formants
F1, F2), y_i is the i-th feature in the codebook (reference model), w_i is the
weight associated with the i-th feature, and d(x, y) is the distance, i.e. the
recognition score, between x and y.
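A concrete sketch of Eq. (1.1) and the codebook search is given below. The (F1, F2) codebook entries are illustrative average values only (loosely in the range reported by Peterson and Barney [5]), not the paper's trained codebook; the weights reflect the 2:1 F1/F2 weighting described later, normalized to sum to 1.

```python
import math

# Hypothetical (F1, F2) codebook in Hz -- illustrative values only,
# not the trained codebook used in the paper.
CODEBOOK = {
    '/a/': (730.0, 1090.0),
    '/e/': (530.0, 1840.0),
    '/i/': (270.0, 2290.0),
    '/o/': (570.0, 840.0),
    '/u/': (300.0, 870.0),
}
WEIGHTS = (2.0 / 3.0, 1.0 / 3.0)  # F1 weighted twice F2, summing to 1.0

def weighted_distance(x, y, w=WEIGHTS):
    """Eq. (1.1): d(x, y) = sqrt(sum_i w_i * (x_i - y_i)^2)."""
    return math.sqrt(sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y)))

def classify_vowel(features, codebook=CODEBOOK):
    """Return the codebook vowel at minimum weighted distance from features."""
    return min(codebook, key=lambda v: weighted_distance(features, codebook[v]))
```

For instance, a test vector of (700 Hz, 1100 Hz) lies closest to the /a/ entry and is classified accordingly.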


Decision Rule
The weights 'w' are important when the information contained in the underlying
features is not proportional to the feature variances. In this case of vowel
recognition based on formants F1 and F2, the two formants do not contribute
uniformly to recognition. Based on this study, relative weights of F1 = 2 and
F2 = 1 are assigned, normalized so that the weights sum to 1.0.
     We refer to classification based on this distance as Maximum Likelihood
Regression (MLR), since it rests on the Gaussian assumptions used to obtain the
classifier parameters. To verify that the displayed vowels produce accurate
results, the MLR computes the distance of the average features for the given
vowels. If the feature distance is within the threshold criterion Di(F1, F2),
then Eq. (1.1) becomes
                Di(f) < α√m                                                  1.2
where m is the number of features (here F1 and F2, so m = 2) and α is an
arbitrary scale factor used for performance tuning. The vector x_i is then
identified as the vector y_i; otherwise it is rejected. If the threshold is too
small, the MLR rejects many correct vowel samples; if it is too large,
out-of-category vowels will not be rejected. In our work the threshold α=x has
given optimum results.
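The weight normalization and the decision rule of Eq. (1.2) can be summarized in a few lines. The α value in the usage note is an arbitrary placeholder, since the tuned threshold is not specified above.

```python
import math

def normalized_weights(raw):
    """Normalize relative weights (e.g. F1 = 2, F2 = 1) so they sum to 1.0."""
    total = float(sum(raw.values()))
    return {name: w / total for name, w in raw.items()}

def accept_match(distance, m, alpha):
    """Eq. (1.2): accept the candidate vowel only if d < alpha * sqrt(m)."""
    return distance < alpha * math.sqrt(m)
```

With m = 2 features, `accept_match(d, 2, alpha)` accepts whenever d < alpha·√2: a smaller alpha rejects more correct samples, a larger one lets out-of-category vowels through.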




                      Figure 1.1: Block diagram of vowel recognition.


Results of Vowel Recognition for Male and Female Speakers
Male Speakers
Table 3.1 gives the results of male vowel recognition based on the MLR method.
Vowel /a/ achieved the best classification of all the vowels. The detection rate
for vowels /u/ and /e/ is better than for /o/ and /i/ across all tested samples.
Vowels /e/ and /i/ tend to be misclassified as each other because of
inter-speaker variations in utterance. Fig. 3.1 shows vowel 'X' versus % vowel
recognition for 50 male samples, where 'X' is the actual vowel.
    For vowel /a/: /a/ recognized as /a/ is 46; as /e/ is 0; as /i/ is 4; as /o/
is 0; as /u/ is 0. Hence the percentage of correct recognition of vowel /a/ is
(/a/ as /a/) × 100 ÷ (/a/ as any vowel) = 46 × 100 ÷ (46+0+4+0+0)
= 4600/50 = 92%.
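This per-vowel calculation extends to the whole confusion matrix: divide each diagonal count by its row total. The snippet below applies it to the male-speaker counts of Table 3.1 (note that the /e/ row, 44 correct of 50, works out to 88%).

```python
import numpy as np

VOWELS = ['/a/', '/e/', '/i/', '/o/', '/u/']
# Rows: actual vowel; columns: predicted vowel (male speakers, Table 3.1).
CONFUSION = np.array([
    [46,  0,  4,  0,  0],   # /a/
    [ 2, 44,  0,  4,  0],   # /e/
    [ 6,  0, 40,  0,  4],   # /i/
    [ 0,  3,  0, 44,  3],   # /o/
    [ 3,  1,  1,  0, 45],   # /u/
])

def percent_correct(confusion):
    """Diagonal count x 100 / row total, per actual vowel."""
    return 100.0 * confusion.diagonal() / confusion.sum(axis=1)

# percent_correct(CONFUSION) -> [92., 88., 80., 88., 90.]
```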




             Figure 3.1: Vowel vs. % vowel recognition for male speakers.

   Table 3.1: Percentage recognition of vowels for male speakers.


              Actual \ Predicted   /a/   /e/   /i/   /o/   /u/   % correct
              /a/                   46     0     4     0     0      92%
              /e/                    2    44     0     4     0      88%
              /i/                    6     0    40     0     4      80%
              /o/                    0     3     0    44     3      88%
              /u/                    3     1     1     0    45      90%


Female Speakers
Table 3.2 gives the results of female vowel recognition based on the MLR method.
Vowel /o/ achieved the best classification of all the vowels. The detection rate
for vowels /u/ and /e/ is better than for /a/ and /i/ across all tested samples.
Vowels /a/ and /i/ tend to be misclassified as each other because of
inter-speaker variations in utterance. Fig. 3.2 shows the percentage recognition
of each vowel for 40 female samples. For vowel /o/: /o/ recognized as /a/ is 0;
as /e/ is 0; as /i/ is 0; as /o/ is 39; as /u/ is 1. Hence the percentage of
correct recognition of vowel /o/ is (/o/ as /o/) × 100 ÷ (/o/ as any vowel)
= 39 × 100 ÷ (0+0+0+39+1) = 3900/40 = 97.5% ≈ 98%.




           Figure 3.2: Vowel vs. % vowel recognition for female speakers.

   Table 3.2: Percentage recognition of vowels for female speakers.

              Actual \ Predicted   /a/   /e/   /i/   /o/   /u/   % correct
              /a/                   34     4     0     0     2      85%
              /e/                    0    37     0     3     0      92.5%
              /i/                    0     4    34     0     2      85%
              /o/                    0     0     0    39     1      97.5%
              /u/                    3     0     0     0    37      92.5%


Conclusion
This work builds on the standard method of vocal tract shape estimation that has
been the basis for many successful automatic speech recognition (ASR) systems.
We described a "standard" approach to classifying vowels based on formants, and
achieved 80 to 95 percent speaker recognition using a Euclidean distance
measure.


Acknowledgements
We would like to thank the Management and the Principal of Madanapalli Institute
of Technology and Science, Madanapalli, A.P., for their cooperation and
encouragement.


References
[1]    L.R. Rabiner and R.W. Schafer, Digital Processing of Speech Signals,
       Dorling Kindersley (India) Pvt. Ltd., licensees of Pearson Education in
       South Asia, 1978, pp. 54-101, 412-460.
[2]    Thomas F. Quatieri, Discrete-Time Speech Signal Processing: Principles
       and Practice, 2002, pp. 56-59.
[3]    P. Ladefoged, R. Harshman, L. Goldstein, and L. Rice, "Generating vocal
       tract shapes from formant frequencies," J. Acoust. Soc. Am., vol. 64,
       no. 4, 1978, pp. 1027-1035.
[4]    Mayukh Bhaowal and Kunal Chawla, "Isolated Word Recognition for English
       Language using LPC, VQ and HMM," pp. 2-4.
[5]    G.E. Peterson and H.L. Barney, "Control methods used in a study of the
       vowels," J. Acoust. Soc. Am., vol. 24, pp. 175-184.
[6]    P. Rose, "Long- and short-term within-speaker differences in the
       formants of Australian hello," J. Int. Phonetic Assoc., 29(1), 1999,
       pp. 1-31.
[7]    Ahmed Ali Safiullah Bhatti and Dr. Muhammad Saleem Mian, "Formants-based
       analysis for speech recognition," IEEE, 2006.

 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 

Ijeer journal

after quantizing a spoken word into its code book.

Keywords: Speech, Vocal tract, Formants, Euclidean distance.

Introduction
Fig. 1.1 shows the block diagram of independent speaker recognition for vowels. Let S(n) be the test sample of a vowel; its parameters, the formants F1 and F2, are then extracted.
The extracted formants are compared against the thresholds of the reference formants. A Euclidean distance measure is applied to quantify the similarity or dissimilarity between two spoken words after each word has been quantized into its code book. An unknown vowel is matched by measuring the Euclidean distance between its feature vector (formants F1, F2) and the reference model (codebook) of known vowel formants in the database. The goal is to find the codebook entry with the minimum distance, which identifies the unknown vowel (Franti et al., 1997). In the testing or identification session, the Euclidean distance between the feature vector (F1, F2) and the codebook entry for each spoken vowel is calculated, and the vowel with the smallest average minimum distance is picked, as shown in Eq. (1.1):

d(x, y) = √( Σ_{i=1..D} w_i (x_i − y_i)² )    (1.1)

where D = 2, x_i is the i-th input feature (formants F1, F2), y_i is the i-th feature vector in the code book (reference model), w_i is the weight associated with the i-th feature, and d is the distance between x and y.

Decision Rule
The weights w are important when the information contained in the underlying features is not proportional to the feature variances. In this case of vowel recognition based on formants F1 and F2, the two formants do not contribute uniformly to recognition. Based on our study, the relative weights F1 = 2 and F2 = 1 are used, normalized so that the weights sum to 1.0. We refer to classification based on this distance as Maximum Likelihood Regression (MLR), since it rests on the Gaussian assumptions used to obtain the classifier parameters.
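A minimal sketch of this weighted-distance classification (Eq. 1.1): the codebook values below are illustrative (F1, F2) means in Hz, not the paper's measured data, and the weights follow the F1 = 2, F2 = 1 scheme normalized to sum 1.0.

```python
import math

# Hypothetical reference codebook: mean (F1, F2) in Hz per vowel
# (illustrative values only, not the paper's data).
CODEBOOK = {
    "/a/": (730.0, 1090.0),
    "/e/": (530.0, 1840.0),
    "/i/": (270.0, 2290.0),
    "/o/": (570.0, 840.0),
    "/u/": (300.0, 870.0),
}

# Relative weights F1 = 2, F2 = 1, normalized so they sum to 1.0.
WEIGHTS = (2.0 / 3.0, 1.0 / 3.0)

def weighted_distance(x, y, w=WEIGHTS):
    """Weighted Euclidean distance (Eq. 1.1) between two (F1, F2) vectors."""
    return math.sqrt(sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y)))

def classify(features):
    """Return the codebook vowel with the minimum weighted distance."""
    return min(CODEBOOK, key=lambda v: weighted_distance(features, CODEBOOK[v]))

print(classify((710.0, 1100.0)))  # prints /a/: nearest codebook entry
```

Weighting F1 twice as heavily reflects the claim above that the two formants do not contribute uniformly to vowel discrimination.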
To verify that the displayed vowels produce accurate results, the MLR computes the distance between the average features for the given vowels. If the feature distance D_i(F1, F2) satisfies the threshold criterion, Eq. (1.1) becomes

D_i(f) < α√m    (1.2)

where m is the number of features (here m = 2, for F1 and F2) and α is an arbitrary scale factor used for performance tuning. When the condition holds, the vector x_i is identified as the vector y_i; otherwise it is rejected. If α is too small, the MLR rejects many correct vowel samples; if it is too large, out-of-category vowels will not be rejected. In our work the threshold α = x gave optimum results.
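The rejection rule of Eq. (1.2) reduces to a one-line check; the distances and α value below are arbitrary examples chosen only to show both outcomes:

```python
import math

def accept(distance, alpha, m=2):
    """Decision rule (Eq. 1.2): accept a match only if d < alpha * sqrt(m)."""
    return distance < alpha * math.sqrt(m)

# With m = 2 features the effective threshold is alpha * sqrt(2).
print(accept(20.0, alpha=50.0))   # True: distance within threshold
print(accept(120.0, alpha=50.0))  # False: rejected as out of category
```

Sweeping alpha over held-out samples is the natural way to locate the trade-off described above between rejecting correct vowels and accepting out-of-category ones.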
Figure 1.1: Block diagram of vowel recognition.

Results of Vowel Recognition for Male and Female Speakers

Male Speakers
Table 3.1 gives the results of male vowel recognition based on the MLR method. Vowel /a/ achieved the best classification among the vowels. The detection rates for vowels /u/ and /e/ are better than those for /o/ and /i/ across all tested samples. Vowels /e/ and /i/ tend to be misclassified as each other because of the variation in utterances across speakers. Fig. 3.1 plots each vowel 'X' against its recognition percentage for 50 male samples, where 'X' is the actual vowel. For vowel /a/: /a/ recognized as /a/ is 46; as /e/ is 0; as /i/ is 4; as /o/ is 0; as /u/ is 0. Hence the percentage of correct recognition of vowel /a/ is

(/a/ recognized as /a/) × 100 ÷ (/a/ across all predictions) = 46 × 100 ÷ (46 + 0 + 4 + 0 + 0) = 46 × 100 / 50 = 92%.

Figure 3.1: Vowel vs. % vowel recognition for male speakers.
Table 3.1: Percentage recognition of vowels for male speakers.

            Predicted
Actual    /a/   /e/   /i/   /o/   /u/   % correct
/a/        46     0     4     0     0     92%
/e/         2    44     0     4     0     89%
/i/         6     0    40     0     4     80%
/o/         0     3     0    44     3     88%
/u/         3     1     1     0    45     90%

Female Speakers
Table 3.2 gives the results of female vowel recognition based on the MLR method. Vowel /o/ achieved the best classification among the vowels. The detection rates for vowels /u/ and /e/ are better than those for /a/ and /i/ across all tested samples. Vowels /a/ and /i/ tend to be misclassified as each other because of the variation in utterances across speakers. Fig. 3.2 shows the percentage of vowel recognition for 40 female samples. For vowel /o/: /o/ recognized as /a/ is 0; as /e/ is 0; as /i/ is 0; as /o/ is 39; as /u/ is 1. Hence the percentage of correct recognition of vowel /o/ is

(/o/ recognized as /o/) × 100 ÷ (/o/ across all predictions) = 39 × 100 ÷ (0 + 0 + 0 + 39 + 1) = 39 × 100 / 40 ≈ 98%.

Figure 3.2: Vowel vs. % vowel recognition for female speakers.
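The per-vowel percentages follow directly from the confusion counts; a small sketch using the Table 3.1 male-speaker counts (rows are the actual vowel, columns the predicted vowel):

```python
# Per-vowel recognition percentage from a confusion matrix.
VOWELS = ["/a/", "/e/", "/i/", "/o/", "/u/"]

# Male-speaker confusion counts from Table 3.1 (50 samples per vowel).
CONFUSION = {
    "/a/": [46, 0, 4, 0, 0],
    "/e/": [2, 44, 0, 4, 0],
    "/i/": [6, 0, 40, 0, 4],
    "/o/": [0, 3, 0, 44, 3],
    "/u/": [3, 1, 1, 0, 45],
}

def percent_correct(row, vowel):
    """Diagonal count x 100 / row total for one actual-vowel row."""
    correct = row[VOWELS.index(vowel)]
    return 100.0 * correct / sum(row)

for v in VOWELS:
    print(v, round(percent_correct(CONFUSION[v], v)))
```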
Table 3.2: Percentage recognition of vowels for female speakers.

            Predicted
Actual    /a/   /e/   /i/   /o/   /u/   % correct
/a/        34     4     0     0     2     85%
/e/         0    37     0     3     0     92%
/i/         0     4    34     0     2     86%
/o/         0     0     0    39     1     98%
/u/         3     0     0     0    37     94%

Conclusion
This work applied the standard method of vocal tract shape estimation, which has been the basis for many successful automatic speech recognition (ASR) systems, and described a "standard" approach to vowel classification based on formants. We achieved 80 to 95 percent speaker recognition using the Euclidean distance measure.

Acknowledgements
We would like to thank the Management and the Principal of Madanapalli Institute of Technology and Science, Madanapalli, A.P., for their cooperation and encouragement.

References
[1] L.R. Rabiner and R.W. Schafer, Digital Processing of Speech Signals, Dorling Kindersley (India) Pvt. Ltd., licensees of Pearson Education in South Asia, 1978, pp. 54–101, 412–460.
[2] T.F. Quatieri, Discrete-Time Speech Signal Processing: Principles and Practice, 2002, pp. 56–59.
[3] P. Ladefoged, R. Harshman, L. Goldstein, and L. Rice, "Generating vocal tract shapes from formant frequencies," J. Acoust. Soc. Am., vol. 64, no. 4, 1978, pp. 1027–1035.
[4] M. Bhaowal and K. Chawla, "Isolated Word Recognition for English Language using LPC, VQ and HMM," pp. 2–4.
[5] G.E. Peterson and H.L. Barney, "Control methods used in a study of the vowels," J. Acoust. Soc. Am., vol. 24, 1952, pp. 175–184.
[6] P. Rose, "Long- and short-term within-speaker differences in the formants of Australian hello," J. Int. Phonetic Assoc., vol. 29, no. 1, 1999, pp. 1–31.
[7] Ahmed Ali, Safiullah Bhatti, and M. Saleem Mian, "Formants based analysis for speech recognition," IEEE, 2006.