A Cross-Lingual Annotation Projection Approach for Relation Detection
1. A Cross-Lingual Annotation Projection Approach for Relation Detection
The 23rd International Conference on Computational Linguistics (COLING 2010)
August 24th, 2010, Beijing
Seokhwan Kim (POSTECH)
Minwoo Jeong (Saarland University)
Jonghoon Lee (POSTECH)
Gary Geunbae Lee (POSTECH)
4. What’s Relation Detection?
• Relation Extraction
To identify semantic relations between a pair of entities
• The ACE RDC task comprises two subtasks:
• Relation Detection (RD): deciding whether a relation holds between the pair
• Relation Categorization (RC): deciding which relation type holds
• Example: the Owner-Of relation in
"Jan Mullins, owner of Computer Recycler Incorporated, said that …"
holds between "Jan Mullins" and "Computer Recycler Incorporated"
5. What’s the Problem?
• Many supervised machine learning approaches have been successfully applied to the RDC task (Kambhatla, 2004; Zhou et al., 2005; Zelenko et al., 2003; Culotta and Sorensen, 2004; Bunescu and Mooney, 2005; Zhang et al., 2006)
• Datasets for relation detection
Labeled corpora for supervised learning are available for only a few languages
• English, Chinese, Arabic
No such resources exist for other languages
• Korean
7. Cross-lingual Annotation Projection
• Goal
To learn a relation detector without significant annotation effort
• Method
To leverage parallel corpora, projecting relation annotations from the source language LS onto the target language LT
8. Cross-lingual Annotation Projection
• Previous Work
Part-of-speech tagging (Yarowsky and Ngai, 2001)
Named-entity tagging (Yarowsky et al., 2001)
Verb classification (Merlo et al., 2002)
Dependency parsing (Hwa et al., 2005)
Mention detection (Zitouni and Florian, 2008)
Semantic role labeling (Pado and Lapata, 2009)
• To the best of our knowledge, no work has been reported on the RDC task
9. Overall Architecture
[Pipeline figure] A parallel corpus supplies sentence pairs in LS and LT. Both sides are preprocessed (POS tagging, parsing). The LS side then goes through NER and relation detection, while word alignment links the two sides. The projection step transfers the LS annotations across the alignments, yielding annotated sentences in both LS and LT. A sketch of the projection step follows.
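A minimal sketch of the core projection step, assuming word alignments come as (source index, target index) pairs, e.g. from GIZA++; all names here are illustrative, not from the paper:

```python
# Project a source-language entity span onto the target sentence
# through word alignments.

def project_span(src_span, alignments):
    """Project a source token span (start, end, inclusive) onto the target
    sentence; returns the sorted target token indices it aligns to."""
    start, end = src_span
    return sorted({t for (s, t) in alignments if start <= s <= end})

# Source mention covers tokens 0-1; they align to target tokens 2 and 3.
print(project_span((0, 1), {(0, 2), (1, 3), (4, 5)}))  # [2, 3]
```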
10. How to Reduce Noise?
• Error Accumulation
Numerous errors can be generated and accumulated across the stages of the annotation projection procedure:
• Preprocessing for LS and LT
• NER for LS
• Relation Detection for LS
• Word Alignment between LS and LT
• Noise Reduction
A key factor in improving the performance of annotation projection
11. How to Reduce Noise?
• Noise Reduction Strategies (1): Alignment Filtering
• Based on heuristics; a code sketch follows each heuristic
A projection for an entity mention should be based on alignments between contiguous word sequences
[Figure: a mention aligned to a contiguous target span is accepted; one aligned to a gapped span is rejected]
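A minimal sketch of the contiguity heuristic, assuming a projected mention is represented by its aligned target token indices:

```python
# Accept a projected mention only if its aligned target indices
# form one contiguous run (no gaps).

def is_contiguous(tgt_indices):
    """True if the target indices have no gaps."""
    if not tgt_indices:
        return False
    idx = sorted(tgt_indices)
    return idx[-1] - idx[0] + 1 == len(idx)

print(is_contiguous([2, 3, 4]))  # True  -> accepted
print(is_contiguous([2, 4]))     # False -> rejected
```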
12. How to Reduce Noise?
• Noise Reduction Strategies (1): Alignment Filtering (cont.)
Both an entity mention in LS and its projection in LT should include at least one base noun phrase
[Figure: projections containing a base noun phrase (N) are accepted; those without one are rejected]
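A minimal sketch of the base-NP heuristic, assuming NP chunk spans are available from a chunker or parser; the exact overlap test is an assumption:

```python
# Both the source mention and its projection must overlap at least
# one base noun phrase; spans are (start, end) inclusive token indices.

def contains_base_np(span, np_chunks):
    """True if the span overlaps any base NP chunk."""
    start, end = span
    return any(not (ce < start or cs > end) for (cs, ce) in np_chunks)

# Mention covers tokens 2-4; one NP chunk spans tokens 3-4 -> accepted.
print(contains_base_np((2, 4), [(0, 1), (3, 4)]))  # True
```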
13. How to Reduce Noise?
• Noise Reduction Strategies (1): Alignment Filtering (cont.)
The projected instance in LT should satisfy clausal agreement with the original instance in LS
[Figure: projected instances whose mentions keep the same same-clause relationship as in LS are accepted; others are rejected]
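One possible reading of the clausal-agreement heuristic, sketched under the assumption that each token carries a clause id from the parser; the slide does not spell out the exact test:

```python
# Check that the two mentions' same-clause status matches on both sides:
# if they share a clause in LS, their projections should share one in LT.

def clause_agreement(src_m1, src_m2, tgt_m1, tgt_m2, src_clause, tgt_clause):
    """Compare same-clause status of the mention pair on both sides."""
    same_src = src_clause[src_m1] == src_clause[src_m2]
    same_tgt = tgt_clause[tgt_m1] == tgt_clause[tgt_m2]
    return same_src == same_tgt

# Mentions share clause 0 in LS but fall in clauses 0 and 1 in LT -> rejected.
print(clause_agreement(0, 3, 1, 6, {0: 0, 3: 0}, {1: 0, 6: 1}))  # False
```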
14. How to Reduce Noise?
• Noise Reduction Strategies (2): Alignment Correction
• Based on a bilingual dictionary of entity mentions
Each dictionary entry pairs an entity mention in LS with its translation or transliteration in LT; a code sketch follows

FOR each entity mention ES in LS
    RETRIEVE its counterpart ET from DICT(LS-LT)
    SEEK ET in the target sentence ST
    IF matched THEN
        MAKE a new alignment ES-ET
    ENDIF
ENDFOR

[Figure: with the dictionary entry "BCD - βγ", the source mention "B C D" is re-aligned to "β γ" in the corrected target sentence]
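A minimal Python sketch of the pseudocode above; tokenization and the dictionary interface are assumptions:

```python
# Add ES-ET alignments for mentions whose dictionary counterpart is
# found verbatim in the target sentence ST.

def correct_alignments(mention_spans, src_tokens, tgt_tokens, ent_dict,
                       alignments):
    for start, end in mention_spans:
        src_text = " ".join(src_tokens[start:end + 1])
        tgt_text = ent_dict.get(src_text)
        if tgt_text is None:
            continue
        tgt_toks = tgt_text.split()
        n = len(tgt_toks)
        for i in range(len(tgt_tokens) - n + 1):
            if tgt_tokens[i:i + n] == tgt_toks:
                # Align every mention token to every matched target token.
                alignments |= {(s, t) for s in range(start, end + 1)
                               for t in range(i, i + n)}
                break
    return alignments

# The slide's example: source mention "B C D" maps to "β γ" in the dictionary.
aligned = correct_alignments([(1, 3)], list("ABCDEFG"), list("αβγδε"),
                             {"B C D": "β γ"}, set())
print(sorted(aligned))
```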
15. How to Reduce Noise?
• Noise Reduction Strategies (3): Assessment-based Instance Selection
• Based on the reliability of a projected instance in LT
Evaluated by the confidence score of monolingual relation detection on the original counterpart instance in LS
Only instances whose scores exceed a threshold value θ are accepted (see the sketch below)
[Figure: with θ = 0.7, an instance with conf = 0.9 is accepted and one with conf = 0.6 is rejected]
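A minimal sketch of assessment-based selection; how the confidence score is obtained (e.g., from the source-side classifier) is left abstract here:

```python
# Keep a projected instance only if the source-side relation detector
# is confident enough about its original counterpart.

def select_instances(instances, confidence, theta=0.7):
    """Keep instances whose source-side confidence reaches theta."""
    return [inst for inst in instances if confidence(inst) >= theta]

scores = {"i1": 0.9, "i2": 0.6}
print(select_instances(["i1", "i2"], scores.get, theta=0.7))  # ['i1']
```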
17. Experimental Setup
• Dataset
English-Korean parallel corpus
• 454,315 English-Korean sentence pairs
• Word-aligned with GIZA++
Korean RDC corpus
• Annotated following the LDC guidelines for the ACE RDC corpus
• 100 Korean news documents
835 sentences
3,331 entity mentions
8,354 relation instances
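The slides do not show the alignment file format; assuming the common Moses-style "i-j" representation often derived from GIZA++ output, reading one sentence pair's alignments might look like:

```python
# Parse one alignment line ("0-0 1-2 2-1 ...") into (src, tgt) index pairs.

def read_alignment_line(line):
    pairs = set()
    for link in line.split():
        s, t = link.split("-")
        pairs.add((int(s), int(t)))
    return pairs

print(read_alignment_line("0-0 1-2 2-1"))  # {(0, 0), (1, 2), (2, 1)}
```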
18. Experimental Setup
• Preprocessors
English
• Stanford Parser (Klein and Manning, 2003)
• Stanford Named Entity Recognizer (Finkel et al., 2005)
Korean
• Korean POS Tagger (Lee et al., 2002)
• MST Parser (McDonald et al., 2006)
19. Experimental Setup
• Relation Detection for English Sentences
Tree kernel-based SVM classifier
• Training Dataset
ACE 2003 corpus
• 674 documents
• 9,683 relation instances
• Model
Shortest path enclosed subtrees kernel (Zhang et al., 2006)
• Implementation
SVM-Light (Joachims, 1998)
Tree Kernel Tools (Moschitti, 2006)
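A small sketch of locating the shortest-path-enclosed subtree, i.e., the lowest constituent covering both entity mentions; nltk is used here only for the tree structure, and the full kernel of Zhang et al. (2006) additionally prunes and compares such subtrees:

```python
import nltk

def enclosing_subtree(tree, e1_leaf, e2_leaf):
    """Lowest constituent spanning leaf indices e1_leaf..e2_leaf."""
    lo, hi = sorted((e1_leaf, e2_leaf))
    pos = tree.treeposition_spanning_leaves(lo, hi + 1)  # end is exclusive
    return tree[pos]

t = nltk.Tree.fromstring(
    "(S (NP (NNP Jan) (NNP Mullins)) (VP (VBD said) (NP (DT the) (NN news))))")
print(enclosing_subtree(t, 1, 3))  # smallest constituent covering both leaves
```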
20. Experimental Setup
• Relation Detection for Korean Sentences
Tree kernel-based SVM classifier
• Training Dataset
Half of the Korean RDC corpus (baseline)
Projected instances
• Model
Shortest path dependency kernel (Bunescu and Mooney, 2005)
• Implementation
SVM-Light (Joachims, 1998)
Tree Kernel Tools (Moschitti, 2006)
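A toy sketch of the shortest-path dependency kernel of Bunescu and Mooney (2005): two paths score zero if their lengths differ, otherwise the product over path positions of the number of shared features; the per-position feature sets here are illustrative:

```python
# Each path is a list of feature sets (e.g. {word, POS}), one per position.

def sp_kernel(path_x, path_y):
    if len(path_x) != len(path_y):
        return 0
    k = 1
    for fx, fy in zip(path_x, path_y):
        k *= len(fx & fy)  # count of features common to both positions
    return k

x = [{"his", "PRP"}, {"<-"}, {"actions", "NNS"}, {"<-"}, {"in", "IN"}]
y = [{"her", "PRP"}, {"<-"}, {"goals", "NNS"}, {"<-"}, {"in", "IN"}]
print(sp_kernel(x, y))  # 1*1*1*1*2 = 2
```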
21. Experimental Setup
• Experimental Sets
Combinations of noise reduction strategies
• (S1: Heuristic, S2: Dictionary, S3: Assessment)
1. Baseline
Trained with only half of the Korean RDC corpus
2. Baseline + Projections (no noise reduction)
3. Baseline + Projections (S1)
4. Baseline + Projections (S1 + S2)
5. Baseline + Projections (S3)
6. Baseline + Projections (S1 + S3)
7. Baseline + Projections (S1 + S2 + S3)
22. Experimental Setup
• Evaluation
On the second half of the Korean RDC corpus
• The first half is used to train the baseline
Using gold-standard entity mentions with gold coreference chains
Evaluated by precision/recall/F-measure (sketched below)
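A minimal sketch of the evaluation metric, treating each relation instance as a (mention pair, relation) tuple; the exact matching criteria are assumptions:

```python
# Precision/recall/F1 over sets of gold and predicted relation instances.

def prf(gold, pred):
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {("Jan Mullins", "Computer Recycler Incorporated", "related")}
pred = {("Jan Mullins", "Computer Recycler Incorporated", "related")}
print(prf(gold, pred))  # (1.0, 1.0, 1.0)
```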
23. Experimental Results

Model                                            no assessment       with assessment
                                                 P     R     F       P     R     F
baseline                                         60.5  20.4  30.5    -     -     -
baseline + projection                            22.5   6.5  10.0    29.1  13.2  18.2
baseline + projection (heuristics)               51.4  15.5  23.8    56.1  22.9  32.5
baseline + projection (heuristics + dictionary)  55.3  19.4  28.7    59.8  26.7  36.9

24. Non-filtered Projections Were Poor
Without any noise reduction, adding projected instances hurt badly: 10.0 F versus the 30.5 F baseline.

25. Heuristics Were Helpful
Alignment filtering recovered much of the loss (23.8 F, with precision back up to 51.4).

26. Much Worse Than Baseline
Heuristics alone still fell well short of the baseline (23.8 F vs. 30.5 F).

27. Dictionary Was Also Helpful
Dictionary-based alignment correction improved results further (28.7 F).

28. Still Worse Than Baseline
Even heuristics + dictionary remained below the baseline without assessment (28.7 F vs. 30.5 F).

29. Assessment Boosted Performance
Assessment-based instance selection improved every projection setting (e.g., 23.8 F → 32.5 F with heuristics).

30. Combined Strategies Achieved Better Performance Than Baseline
With all three strategies combined (heuristics + dictionary + assessment), the projected instances finally beat the baseline: 36.9 F vs. 30.5 F.
32. Conclusion
• Summary
A cross-lingual annotation projection approach for relation detection
Three strategies for noise reduction
Projected instances from an English-Korean parallel corpus helped to improve task performance
• provided the noise reduction strategies were applied
• Future Work
Cross-lingual annotation projection for relation categorization
More elaborate noise reduction strategies to further improve projection performance for relation extraction