A CROSS-LINGUAL ANNOTATION PROJECTION
APPROACH FOR RELATION DETECTION

   The 23rd International Conference on Computational Linguistics (COLING 2010)
                              August 24th, 2010, Beijing

                       Seokhwan Kim (POSTECH)
                     Minwoo Jeong (Saarland University)
                         Jonghoon Lee (POSTECH)
                       Gary Geunbae Lee (POSTECH)
Contents
• Introduction
• Methods
    Cross-lingual Annotation Projection for Relation Detection
    Noise Reduction Strategies
• Evaluation
• Conclusion




                                                                  2
What’s Relation Detection?
• Relation Extraction
    To identify semantic relations between a pair of entities
    ACE RDC
       • Relation Detection (RD)
       • Relation Categorization (RC)



                   Owner-Of

  Jan Mullins, owner of Computer Recycler Incorporated said that …




                                                                     4
What’s the Problem?
• Many supervised machine learning approaches have been
  successfully applied to the RDC task
    (Kambhatla, 2004; Zhou et al., 2005; Zelenko et al., 2003; Culotta
     and Sorensen, 2004; Bunescu and Mooney, 2005; Zhang et al.,
     2006)
• Datasets for relation detection
    Labeled corpora for supervised learning
    Available for only a few languages
       • English, Chinese, Arabic
    No resources for other languages
       • Korean


                                                                          5
Cross-lingual Annotation Projection
• Goal
   To learn the relation detector without significant annotation efforts
• Method
    To leverage parallel corpora to project relation annotations from the
     source language LS to the target language LT




                                                                            7
Cross-lingual Annotation Projection
• Previous Work
    Part-of-speech tagging (Yarowsky and Ngai, 2001)
    Named-entity tagging (Yarowsky et al., 2001)
    Verb classification (Merlo et al., 2002)
    Dependency parsing (Hwa et al., 2005)
    Mention detection (Zitouni and Florian, 2008)
    Semantic role labeling (Pado and Lapata, 2009)
• To the best of our knowledge, no previous work has applied annotation
  projection to the RDC task



                                                          8
Overall Architecture
• Annotation projection over a parallel corpus runs two pipelines side by side:
    Source side (LS): Sentences in LS → Preprocessing (POS Tagging, Parsing)
     → NER → Relation Detection → Annotated Sentences in LS
    Target side (LT): Sentences in LT → Preprocessing (POS Tagging, Parsing)
     → Word Alignment → Projection → Annotated Sentences in LT
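The Projection step at the end of the target-side pipeline can be sketched as follows. This is an illustrative reconstruction, not the authors' code: given a word alignment (source/target token index pairs, e.g. from GIZA++ output), each source-side entity span is mapped to its aligned target tokens. The function name and interface are assumptions.

```python
# Illustrative sketch of annotation projection via word alignment
# (hypothetical helper, not the authors' implementation).

def project_span(source_span, alignment):
    """Map a source-side token span to the aligned target-side tokens.

    source_span: (start, end) token indices in the L_S sentence, end exclusive.
    alignment:   iterable of (source_index, target_index) pairs.
    Returns the sorted list of aligned target token indices.
    """
    targets = {t for s, t in alignment if source_span[0] <= s < source_span[1]}
    return sorted(targets)

# Example: a two-token source mention (tokens 0-1) aligned to target tokens 3 and 4.
alignment = [(0, 3), (1, 4), (2, 0), (3, 1)]
print(project_span((0, 2), alignment))  # [3, 4]
```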
How to Reduce Noise?
• Error Accumulation
    Numerous errors can be generated and accumulated throughout the
     annotation projection procedure
      • Preprocessing for LS and LT
      • NER for LS
      • Relation Detection for LS
      • Word Alignment between LS and LT

• Noise Reduction
    A key factor to improve the performance of annotation projection




                                                                        10
How to Reduce Noise?
• Noise Reduction Strategies (1)
    Alignment Filtering
        • Based on heuristics
                 A projection for an entity mention should be based on alignments between
                  contiguous word sequences
                 Both an entity mention in LS and its projection in LT should include at
                  least one base noun phrase
                 The projected instance in LT should satisfy clausal agreement with the
                  original instance in LS

   [Figure: accepted and rejected alignment examples for each heuristic]
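The first heuristic (contiguity) amounts to a simple check on the projected token indices: the aligned target tokens must form an unbroken word sequence. A minimal sketch; the function name and interface are assumptions for illustration:

```python
# Sketch of the contiguity heuristic for alignment filtering
# (assumed interface; not the original implementation).

def is_contiguous(target_indices):
    """Accept a projected entity mention only if its aligned target
    tokens form a contiguous word sequence."""
    idx = sorted(target_indices)
    return len(idx) > 0 and idx[-1] - idx[0] + 1 == len(idx)

print(is_contiguous([3, 4, 5]))  # True: contiguous span, accepted
print(is_contiguous([3, 5]))     # False: gap at index 4, rejected
```

The base-noun-phrase and clausal-agreement heuristics would additionally require a chunker and parse trees on both sides, so they are omitted here.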
How to Reduce Noise?
• Noise Reduction Strategies (2)
    Alignment Correction
       • Based on a bilingual dictionary of entity mentions
            Each entry of the dictionary pairs an entity mention in LS with its
             translation or transliteration in LT

   FOR each entity ES in LS
      RETRIEVE counterpart ET from DICT(E-T)
      SEEK ET in the sentence ST in LT
      IF matched THEN
          MAKE new alignment ES-ET
      ENDIF
   ENDFOR

   [Figure: the alignment for the phrase "BCD - βγ" is corrected via a dictionary lookup]
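The pseudocode above translates directly into Python. The data representations here are simplifying assumptions: a plain dict stands in for DICT(E-T), and SEEK is a naive substring search over the target sentence.

```python
# Python rendering of the alignment-correction pseudocode on the slide.
# DICT(E-T) is modeled as a plain dict; substring matching is a
# simplification (a real system would match token spans).

def correct_alignments(entities_s, sentence_t, ent_dict, alignments):
    """For each source entity, look up its translation/transliteration
    and, if it occurs in the target sentence, add that alignment."""
    corrected = list(alignments)
    for e_s in entities_s:
        e_t = ent_dict.get(e_s)                    # RETRIEVE counterpart from DICT(E-T)
        if e_t is not None and e_t in sentence_t:  # SEEK E_T in S_T
            corrected.append((e_s, e_t))           # MAKE new alignment E_S - E_T
    return corrected

ent_dict = {"BCD": "βγ"}
print(correct_alignments(["BCD"], "α βγ δ", ent_dict, []))  # [('BCD', 'βγ')]
```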
How to Reduce Noise?
• Noise Reduction Strategies (3)
    Assessment-based Instance Selection
      • Based on the reliability of a projected instance in LT
           Evaluated by the confidence score of monolingual relation detection for
            the original counterpart instance in LS
           Only instances with scores above a threshold value θ are accepted

   [Figure: with θ = 0.7, an instance with conf = 0.9 is accepted and one
    with conf = 0.6 is rejected]
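This selection rule is a plain threshold filter. A minimal sketch, assuming projected instances arrive as (instance, confidence) pairs; names are illustrative:

```python
# Sketch of assessment-based instance selection: keep a projected
# instance only if the source-side detector's confidence exceeds θ.

def select_instances(instances, theta=0.7):
    """instances: list of (projected_instance, confidence) pairs."""
    return [inst for inst, conf in instances if conf > theta]

candidates = [("instance_a", 0.9), ("instance_b", 0.6)]
print(select_instances(candidates))  # ['instance_a']
```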
Experimental Setup
• Dataset
    English-Korean parallel corpus
       • 454,315 bi-sentence pairs in English and Korean
       • Aligned by GIZA++
    Korean RDC corpus
       • Annotated following LDC guideline for ACE RDC corpus
       • 100 news documents in Korean
             835 sentences
             3,331 entity mentions
             8,354 relation instances




                                                                17
Experimental Setup
• Preprocessors
    English
      • Stanford Parser (Klein and Manning, 2003)
      • Stanford Named Entity Recognizer (Finkel et al., 2005)
    Korean
      • Korean POS Tagger (Lee et al., 2002)
       • MST Parser (McDonald et al., 2006)




                                                                 18
Experimental Setup
• Relation Detection for English Sentences
    Tree kernel-based SVM classifier
       • Training Dataset
            ACE 2003 corpus
                 • 674 documents
                 • 9,683 relation instances
       • Model
            Shortest path enclosed subtrees kernel (Zhang et al., 2006)
       • Implementation
            SVM-Light (Joachims, 1998)
            Tree Kernel Tools (Moschitti, 2006)




                                                                           19
Experimental Setup
• Relation Detection for Korean Sentences
    Tree kernel-based SVM classifier
       • Training Dataset
            Half of the Korean RDC corpus (baseline)
            Projected instances
       • Model
            Shortest path dependency kernel (Bunescu and Mooney, 2005)
       • Implementation
            SVM-Light (Joachims, 1998)
            Tree Kernel Tools (Moschitti, 2006)




                                                                          20
Experimental Setup
• Experimental Sets
    Combinations of noise reduction strategies
      • (S1: Heuristic, S2: Dictionary, S3: Assessment)
      1. Baseline
             Trained with only half of the Korean RDC corpus
      2. Baseline + Projections (no noise reduction)
      3. Baseline + Projections (S1)
      4. Baseline + Projections (S1 + S2)
      5. Baseline + Projections (S3)
      6. Baseline + Projections (S1 + S3)
      7. Baseline + Projections (S1 + S2 + S3)



                                                                21
Experimental Setup
• Evaluation
    On the second half of the Korean RDC corpus
       • The first half is for the baseline
    On true entity mentions with true chaining of coreference
    Evaluated by Precision/Recall/F-measure




                                                                 22
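The F-measure used here is the standard harmonic mean of precision and recall. A quick sketch; note that the published table values are rounded, so recomputing F from rounded P and R can differ in the last digit for some rows:

```python
# F-measure (F1): harmonic mean of precision P and recall R.

def f_measure(p, r):
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# Sanity check against the baseline row of the results table:
print(round(f_measure(60.5, 20.4), 1))  # 30.5
```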
Experimental Results

                               no assessment       with assessment
           Model
                               P      R      F      P      R      F

          baseline            60.5   20.4   30.5    -      -      -

   baseline + projection      22.5    6.5   10.0   29.1   13.2   18.2

   baseline + projection
                              51.4   15.5   23.8   56.1   22.9   32.5
        (heuristics)
   baseline + projection
                              55.3   19.4   28.7   59.8   26.7   36.9
  (heuristics + dictionary)
Observations
• Non-filtered projections were poor: without any noise reduction, adding
  projected instances dropped F from the baseline's 30.5 to 10.0
• The filtering heuristics were helpful (F 23.8), but the result remained
  much worse than the baseline
• The dictionary-based correction was also helpful (F 28.7), yet still
  worse than the baseline
• Assessment-based instance selection boosted performance in every setting
• The combined strategies (heuristics + dictionary + assessment) achieved
  better performance than the baseline (F 36.9 vs. 30.5)
Conclusion
• Summary
    A cross-lingual annotation projection for relation detection
    Three strategies for noise reduction
    Projected instances from an English-Korean parallel corpus helped to
     improve task performance when combined with the noise reduction strategies

• Future work
    A cross-lingual annotation projection for relation categorization
    More elaborate strategies for noise reduction to improve the
     projection performance for relation extraction



                                                                         32
Q&A

 
A Graph-based Cross-lingual Projection Approach for Spoken Language Understan...
A Graph-based Cross-lingual Projection Approach for Spoken Language Understan...A Graph-based Cross-lingual Projection Approach for Spoken Language Understan...
A Graph-based Cross-lingual Projection Approach for Spoken Language Understan...
 
MMR-based active machine learning for Bio named entity recognition
MMR-based active machine learning for Bio named entity recognitionMMR-based active machine learning for Bio named entity recognition
MMR-based active machine learning for Bio named entity recognition
 
A semi-supervised method for efficient construction of statistical spoken lan...
A semi-supervised method for efficient construction of statistical spoken lan...A semi-supervised method for efficient construction of statistical spoken lan...
A semi-supervised method for efficient construction of statistical spoken lan...
 
A spoken dialog system for electronic program guide information access
A spoken dialog system for electronic program guide information accessA spoken dialog system for electronic program guide information access
A spoken dialog system for electronic program guide information access
 
An alignment-based approach to semi-supervised relation extraction including ...
An alignment-based approach to semi-supervised relation extraction including ...An alignment-based approach to semi-supervised relation extraction including ...
An alignment-based approach to semi-supervised relation extraction including ...
 
An Alignment-based Pattern Representation Model for Information Extraction
An Alignment-based Pattern Representation Model for Information ExtractionAn Alignment-based Pattern Representation Model for Information Extraction
An Alignment-based Pattern Representation Model for Information Extraction
 

Dernier

Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
WSO2
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 

Dernier (20)

TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUKSpring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Exploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusExploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with Milvus
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 

A Cross-Lingual Annotation Projection Approach for Relation Detection

  • 1. A CROSS-LINGUAL ANNOTATION PROJECTION APPROACH FOR RELATION DETECTION The 23rd International Conference on Computational Linguistics (COLING 2010) August 24th, 2010, Beijing Seokhwan Kim (POSTECH) Minwoo Jeong (Saarland University) Jonghoon Lee (POSTECH) Gary Geunbae Lee (POSTECH)
  • 2. Contents: Introduction; Methods (Cross-lingual Annotation Projection for Relation Detection, Noise Reduction Strategies); Evaluation; Conclusion
  • 3. Contents: Introduction; Methods (Cross-lingual Annotation Projection for Relation Detection, Noise Reduction Strategies); Evaluation; Conclusion
  • 4. What’s Relation Detection?
    • Relation Extraction
      - To identify semantic relations between a pair of entities
      - ACE RDC: Relation Detection (RD) and Relation Categorization (RC)
    • Example (Owner-Of): “Jan Mullins, owner of Computer Recycler Incorporated said that …”
  • 5. What’s the Problem?
    • Many supervised machine learning approaches have been successfully applied to the RDC task
      - (Kambhatla, 2004; Zhou et al., 2005; Zelenko et al., 2003; Culotta and Sorensen, 2004; Bunescu and Mooney, 2005; Zhang et al., 2006)
    • Datasets for relation detection
      - Labeled corpora for supervised learning
      - Available for only a few languages: English, Chinese, Arabic
      - No resources for other languages, e.g. Korean
  • 6. Contents: Introduction; Methods (Cross-lingual Annotation Projection for Relation Detection, Noise Reduction Strategies); Evaluation; Conclusion
  • 7. Cross-lingual Annotation Projection
    • Goal: to learn the relation detector without significant annotation efforts
    • Method: to leverage parallel corpora to project the relation annotations on the source language LS onto the target language LT
  • 8. Cross-lingual Annotation Projection
    • Previous work
      - Part-of-speech tagging (Yarowsky and Ngai, 2001)
      - Named-entity tagging (Yarowsky et al., 2001)
      - Verb classification (Merlo et al., 2002)
      - Dependency parsing (Hwa et al., 2005)
      - Mention detection (Zitouni and Florian, 2008)
      - Semantic role labeling (Pado and Lapata, 2009)
    • To the best of our knowledge, no work has reported on the RDC task
  • 9. Overall Architecture
    • Parallel corpus → sentences in LS and LT
    • Preprocessing (POS tagging, parsing) on both sides
    • NER and relation detection on the LS side
    • Word alignment between the sentence pair
    • Projection → annotated sentences in LT
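The architecture on slide 9 can be sketched end-to-end; a minimal sketch in Python, assuming toy stand-in callables `detect_relations` and `align` for the real source-side relation detector and word aligner (the actual components and data formats in the paper differ):

```python
def project_annotations(src_sent, tgt_sent, detect_relations, align):
    """Project relation annotations from a source sentence (LS)
    onto its parallel target sentence (LT)."""
    # 1. Detect relations on the source side: [((mention1, mention2), label), ...]
    #    where each mention is a tuple of source token indices.
    src_relations = detect_relations(src_sent)
    # 2. Word-align the sentence pair: {src_index: [tgt_indices]}
    alignment = align(src_sent, tgt_sent)
    # 3. Map every source mention to its aligned target tokens.
    projected = []
    for (m1, m2), label in src_relations:
        t1 = [t for i in m1 for t in alignment.get(i, [])]
        t2 = [t for i in m2 for t in alignment.get(i, [])]
        if t1 and t2:  # drop instances whose mentions are unalignable
            projected.append(((sorted(t1), sorted(t2)), label))
    return projected
```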
  • 10. How to Reduce Noise?
    • Error accumulation: numerous errors can be generated and accumulated through the annotation projection procedure
      - Preprocessing for LS and LT
      - NER for LS
      - Relation detection for LS
      - Word alignment between LS and LT
    • Noise reduction is a key factor in improving the performance of annotation projection
  • 11. How to Reduce Noise?
    • Noise Reduction Strategies (1): Alignment Filtering, based on heuristics
      - A projection for an entity mention should be based on alignments between contiguous word sequences
    (figure: accepted vs. rejected alignment examples)
  • 12. How to Reduce Noise?
    • Noise Reduction Strategies (1): Alignment Filtering, based on heuristics
      - A projection for an entity mention should be based on alignments between contiguous word sequences
      - Both an entity mention in LS and its projection in LT should include at least one base noun phrase
    (figure: accepted vs. rejected alignment examples)
  • 13. How to Reduce Noise?
    • Noise Reduction Strategies (1): Alignment Filtering, based on heuristics
      - A projection for an entity mention should be based on alignments between contiguous word sequences
      - Both an entity mention in LS and its projection in LT should include at least one base noun phrase
      - The projected instance in LT should satisfy the clausal agreement with the original instance in LS
    (figure: accepted vs. rejected alignment examples)
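The first two filtering heuristics above amount to simple predicates over a projected mention; a minimal sketch, where the token-index lists and the (start, end) base-NP spans are hypothetical input formats (the paper does not specify its internal representation):

```python
def is_contiguous(indices):
    """Heuristic 1: a projected mention must cover a contiguous
    target-side word sequence (no gaps in its aligned indices)."""
    idx = sorted(indices)
    return bool(idx) and idx == list(range(idx[0], idx[-1] + 1))

def contains_base_np(span, base_np_spans):
    """Heuristic 2: the mention span (start, end) must fully include
    at least one base noun phrase produced by a chunker."""
    lo, hi = span
    return any(lo <= s and e <= hi for s, e in base_np_spans)
```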
  • 14. How to Reduce Noise?
    • Noise Reduction Strategies (2): Alignment Correction
      - Based on a bilingual dictionary for entity mentions
      - Each entry of the dictionary is a pair of an entity mention in LS and its translation or transliteration in LT
      FOR each entity ES in LS
        RETRIEVE counterpart ET from DICT(E-T)
        SEEK ET from the sentence ST in LT
        IF matched THEN
          MAKE new alignment ES-ET
        ENDIF
      ENDFOR
    (figure: a corrected alignment example, e.g. BCD - βγ)
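The pseudocode on this slide can be written out as a runnable sketch; `bilingual_dict` and the single-token exact match are simplifying assumptions (real dictionary entries may be multi-word translations or transliterations):

```python
def correct_alignments(src_entities, tgt_tokens, bilingual_dict):
    """Dictionary-based alignment correction: for each source entity,
    retrieve its target-side counterpart from the dictionary, seek it
    in the target sentence, and if matched, make a new alignment."""
    new_alignments = {}
    for entity in src_entities:
        target_form = bilingual_dict.get(entity)
        if target_form is None:
            continue  # entity not covered by the dictionary
        for pos, token in enumerate(tgt_tokens):
            if token == target_form:
                new_alignments[entity] = pos  # new alignment ES-ET
                break
    return new_alignments
```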
  • 15. How to Reduce Noise?
    • Noise Reduction Strategies (3): Assessment-based Instance Selection
      - Based on the reliability of a projected instance in LT
      - Evaluated by the confidence score of monolingual relation detection for the original counterpart instance in LS
      - Only instances with scores larger than a threshold value θ are accepted
    (figure: with θ = 0.7, an instance with conf = 0.9 is accepted, one with conf = 0.6 is rejected)
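The threshold test above reduces to a one-line filter; a minimal sketch, where `scored_instances` pairs each projected instance with the source-side detector's confidence for its original counterpart (a hypothetical representation):

```python
def select_reliable(scored_instances, theta=0.7):
    """Assessment-based instance selection: keep a projected instance
    only if the confidence score of monolingual relation detection for
    its source-side counterpart reaches the threshold theta."""
    return [inst for inst, conf in scored_instances if conf >= theta]
```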
  • 16. Contents: Introduction; Methods (Cross-lingual Annotation Projection for Relation Detection, Noise Reduction Strategies); Evaluation; Conclusion
  • 17. Experimental Setup
    • Dataset
      - English-Korean parallel corpus: 454,315 bi-sentence pairs in English and Korean, aligned by GIZA++
      - Korean RDC corpus: annotated following the LDC guideline for the ACE RDC corpus; 100 news documents in Korean (835 sentences, 3,331 entity mentions, 8,354 relation instances)
  • 18. Experimental Setup
    • Preprocessors
      - English: Stanford Parser (Klein and Manning, 2003), Stanford Named Entity Recognizer (Finkel et al., 2005)
      - Korean: Korean POS Tagger (Lee et al., 2002), MST Parser (McDonald et al., 2006)
  • 19. Experimental Setup
    • Relation detection for English sentences: tree kernel-based SVM classifier
      - Training dataset: ACE 2003 corpus (674 documents, 9,683 relation instances)
      - Model: shortest path enclosed subtrees kernel (Zhang et al., 2006)
      - Implementation: SVM-Light (Joachims, 1998), Tree Kernel Tools (Moschitti, 2006)
  • 20. Experimental Setup
    • Relation detection for Korean sentences: tree kernel-based SVM classifier
      - Training dataset: half of the Korean RDC corpus (baseline) plus the projected instances
      - Model: shortest path dependency kernel (Bunescu and Mooney, 2005)
      - Implementation: SVM-Light (Joachims, 1998), Tree Kernel Tools (Moschitti, 2006)
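The shortest path dependency kernel of Bunescu and Mooney (2005) operates on the shortest path between the two entity mentions in the dependency graph; a minimal BFS sketch of that path extraction, assuming the parse is given as (head, dependent) token-index pairs (the kernel computation itself is not shown):

```python
from collections import deque

def shortest_dep_path(edges, start, goal):
    """Return the token-index path from `start` to `goal` through the
    dependency tree, treating its edges as undirected."""
    graph = {}
    for h, d in edges:
        graph.setdefault(h, []).append(d)
        graph.setdefault(d, []).append(h)
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []  # reconstruct the path by walking back to start
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # entities not connected (malformed parse)
```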
  • 21. Experimental Setup
    • Experimental sets: combinations of the noise reduction strategies (S1: heuristics, S2: dictionary, S3: assessment)
      1. Baseline: trained with only half of the Korean RDC corpus
      2. Baseline + projections (no noise reduction)
      3. Baseline + projections (S1)
      4. Baseline + projections (S1 + S2)
      5. Baseline + projections (S3)
      6. Baseline + projections (S1 + S3)
      7. Baseline + projections (S1 + S2 + S3)
  • 22. Experimental Setup
    • Evaluation
      - On the second half of the Korean RDC corpus (the first half is for the baseline)
      - On true entity mentions with true chaining of coreference
      - Evaluated by Precision/Recall/F-measure
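The Precision/Recall/F-measure scores reported on the following slides can be computed from instance counts in the standard way; a minimal sketch:

```python
def prf(true_positives, n_predicted, n_gold):
    """Precision, recall, and (balanced) F-measure over relation
    instances: P = TP / predicted, R = TP / gold, F = 2PR / (P + R)."""
    p = true_positives / n_predicted if n_predicted else 0.0
    r = true_positives / n_gold if n_gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```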
  • 23. Experimental Results

                                                      no assessment       with assessment
    Model                                             P     R     F       P     R     F
    baseline                                          60.5  20.4  30.5    -     -     -
    baseline + projection                             22.5   6.5  10.0    29.1  13.2  18.2
    baseline + projection (heuristics)                51.4  15.5  23.8    56.1  22.9  32.5
    baseline + projection (heuristics + dictionary)   55.3  19.4  28.7    59.8  26.7  36.9
  • 24. Non-filtered Projections Were Poor (same results table as slide 23)
  • 25. Heuristics Were Helpful (same results table as slide 23)
  • 26. Much Worse Than Baseline (same results table as slide 23)
  • 27. Dictionary Was Also Helpful (same results table as slide 23)
  • 28. Still Worse Than Baseline (same results table as slide 23)
  • 29. Assessment Boosted Performance (same results table as slide 23)
  • 30. Combined Strategies Achieved Better Performance Than Baseline (same results table as slide 23)
  • 31. Contents: Introduction; Methods (Cross-lingual Annotation Projection for Relation Detection, Noise Reduction Strategies); Evaluation; Conclusion
  • 32. Conclusion
    • Summary
      - A cross-lingual annotation projection approach for relation detection
      - Three strategies for noise reduction
      - Projected instances from an English-Korean parallel corpus helped to improve task performance when combined with the noise reduction strategies
    • Future work
      - Cross-lingual annotation projection for relation categorization
      - More elaborate noise reduction strategies to improve projection performance for relation extraction
  • 33. Q&A