DISCOURSE SEMANTICS OF S-MODIFYING ADVERBIALS

Katherine M. Forbes

A DISSERTATION
in
Linguistics

Presented to the Faculties of the University of Pennsylvania in Partial
Fulfillment of the Requirements for the Degree of Doctor of Philosophy

2003

Bonnie Webber, Supervisor of Dissertation

Ellen Prince, Supervisor of Dissertation

Donald A. Ringe, Graduate Group Chair

Aravind Joshi, Committee Member

Robin Clark, Committee Member
Acknowledgements
I wish to thank Bonnie Webber. Without her patience and her seemingly endless depths of insight,
I might never have completed this thesis. I am enormously grateful for her guidance.
I also owe many thanks to Ellen Prince. She is an intellectual leader at Penn who has helped
many, including me, find a way through the jungle of discourse analysis.
I am indebted to every professor who has taught me. Special thanks to Robin Clark for being a
member of my dissertation committee.
I am very lucky to have worked with Aravind Joshi. He is a continual source of knowledge in the
DLTAG meetings. The field of computational linguistics has already benefited from his sentence-level work; I fully expect he and Bonnie will produce similarly useful results with DLTAG.
Also in DLTAG, Eleni Miltsakaki and Rashmi Prasad, and later Cassandre Creswell and Jason
Teeple all provided stimulation and solace. Their great company and great effort on DLTAG projects
taught me to appreciate how much can be done when minds work together. I look forward to the
chance to work with them in the future.
I am also thankful to Martha Palmer, Paul Kingsbury, and Scott Cotton for allowing me to work
with them on the Propbank project and supplement both my income and my work in discourse.
On a personal note, the Forbes, Finley, and Riley families deserve thanks for giving me love and
diversion and balance and talking me through my education. Most of all, thanks to Enrico Riley, for
being everything to me.

ABSTRACT
DISCOURSE SEMANTICS OF S-MODIFYING ADVERBIALS
Katherine M. Forbes
Supervisors: Bonnie Webber and Ellen Prince

In this thesis, we address the question of why certain S-modifying adverbials are only interpretable
with respect to the discourse or spatio-temporal context, and not just their own matrix clause. It is
not possible to list these adverbials because the set of adverbials is compositional and therefore infinite. Instead, we investigate the mechanisms underlying their interpretation. We present a corpus-based analysis of the predicate argument structure and interpretation of over 13,000 S-modifying
adverbials. We use prior research on discourse deixis and clause-level predicates to study the semantics of the arguments of S-modifying adverbials and the syntactic constituents from which they
can be derived. We show that many S-modifying adverbials contain semantic arguments that may
not be syntactically overt, but whose interpretation nevertheless requires an abstract object from the
discourse or spatio-temporal context. Prior work has investigated only a small subset of these discourse connectives; at the clause-level their semantics has been largely ignored and at the discourse
level they are usually treated as “signals” of predefined lists of abstract discourse relations. Our investigation sheds light on the space of relations imparted by a much wider variety of adverbials. We
further show how their predicate argument structure and interpretation can be formalized and incorporated into a rich intermediate model of discourse that, alone among existing models, views discourse
connectives as predicates whose syntax and semantics must be specified and recoverable to interpret
discourse. It is not only due to their argument structure and interpretation that adverbials have been
treated as discourse connectives, however. Our corpus contains adverbials whose semantics alone
does not cause them to be interpreted with respect to abstract object interpretations in the discourse
or spatio-temporal context. We explore other explanations for why these adverbials evoke discourse
context for their interpretation; in particular, we show how the interaction of prosody with the interpretation of S-modifying adverbials can contribute to discourse coherence, and we also show how
S-modifying adverbials can be used to convey implicatures.
Contents

Acknowledgements
Abstract
Contents
List of Tables
List of Figures

1 Introduction
   1.1 The Problem
   1.2 Contributions of the Thesis
   1.3 Thesis Outline

2 Anaphora and Discourse Models
   2.1 Introduction
   2.2 Descriptive Theories of Discourse Coherence
      2.2.1 An Early Encompassing Description
      2.2.2 Alternative Descriptions of Propositional Relations
      2.2.3 Discourse Relations as Constraints
      2.2.4 Abducing Discourse Relations by Applying the Constraints
      2.2.5 Interaction of Discourse Inference and VP Ellipsis
      2.2.6 Summary
   2.3 A Three-Tiered Model of Discourse
      2.3.1 The Three Tiers
      2.3.2 Coherence within Discourse Segments
      2.3.3 Modeling Linguistic Structure and Attentional State as a Tree
      2.3.4 Introduction to Discourse Deictic Reference
      2.3.5 Retrieving Antecedents of Discourse Deixis from the Tree
      2.3.6 Summary
   2.4 A Tree Structure with a Syntax-Semantic Interface
      2.4.1 Constituents and Tree Construction
      2.4.2 The Syntax-Semantic Interface
      2.4.3 Retrieving Antecedents of Anaphora from the Tree
      2.4.4 The Need For Upward Percolation
      2.4.5 Summary
   2.5 A Descriptive Theory of Discourse Structure
      2.5.1 Analyzing Text Structure
      2.5.2 The Need for Multiple Levels of Discourse Structure
      2.5.3 “Elaboration” as Reference
      2.5.4 Summary
   2.6 A Semantic Theory of Discourse Coherence
      2.6.1 Abstract Objects
      2.6.2 A Formal Language for Discourse
      2.6.3 Retrieving Antecedents of Anaphora from the Discourse Structure
      2.6.4 A System for Inferring Discourse Relations
      2.6.5 Extending the Theory to Cognitive States
      2.6.6 Summary
   2.7 Discussion
      2.7.1 Proliferation of Discourse Relations
      2.7.2 Use of Linguistic Cues as Signals
      2.7.3 Structural and Anaphoric Cue Phrases
      2.7.4 Comparison of DLTAG and Other Models
      2.7.5 Remaining Questions
   2.8 Conclusion

3 Semantic Mechanisms in Adverbials
   3.1 Introduction
   3.2 Linguistic Background and Data Collection
      3.2.1 Function of Adverbials
      3.2.2 Structure of PP and ADVP
      3.2.3 Data Collection
      3.2.4 Summary
   3.3 Adverbial Modification Types
      3.3.1 Clause-Level Analyses of Modification Type
      3.3.2 Problems with Categorical Approaches
      3.3.3 Modification Types as Semantic Features
      3.3.4 Summary
   3.4 Adverbial Semantic Arguments
      3.4.1 (Optional) Arguments or Adjuncts?
      3.4.2 External Argument Attachment Ambiguity
      3.4.3 Semantic Representation of External Argument
      3.4.4 Semantic Arguments as Abstract Objects
      3.4.5 Number of Abstract Objects
      3.4.6 Summary
   3.5 S-Modifying PP Adverbials
      3.5.1 Proper Nouns, Possessives, and Pronouns
      3.5.2 Demonstrative and Definite Determiners
      3.5.3 Indefinite Articles, Generic and Plural Nouns, and Optional Arguments
      3.5.4 PP and ADJP Modifiers
      3.5.5 Other Arguments
      3.5.6 Summary
   3.6 S-Modifying ADVP Adverbials
      3.6.1 Syntactically Optional Arguments
      3.6.2 Context-Dependent ADVP Adverbials
      3.6.3 Comparative ADVP
      3.6.4 Sets and Worlds
      3.6.5 Summary
   3.7 Conclusion

4 Incorporating Adverbial Semantics into DLTAG
   4.1 Introduction
   4.2 Syntax-Semantic Interfaces at the Sentence Level
      4.2.1 The Role of the Syntax-Semantic Interface
      4.2.2 LTAG: Lexicalized Tree Adjoining Grammar
      4.2.3 A Syntax-Semantic Interface for LTAG Derivation Trees
      4.2.4 A Syntax-Semantic Interface for LTAG Elementary Trees
      4.2.5 Comparison of Approaches
      4.2.6 Summary
   4.3 Syntax-Semantic Interfaces at the Discourse Level
      4.3.1 DLTAG: Lexicalized Tree Adjoining Grammar for Discourse
      4.3.2 Syntax-Semantic Interfaces for Derived Trees
      4.3.3 A Syntax-Semantic Interface for DLTAG Derivation Trees
      4.3.4 Comparison of Approaches
      4.3.5 Summary
   4.4 DLTAG Annotation Project
      4.4.1 Overview of Project
      4.4.2 Preliminary Study 1
      4.4.3 Preliminary Study 2
      4.4.4 Future Work
   4.5 Conclusion

5 Other Ways Adverbials Contribute to Discourse Coherence
   5.1 Introduction
   5.2 Focus
      5.2.1 The Phenomena
      5.2.2 Information-Structure and Theories of Structured Meanings
      5.2.3 Alternative Semantics
      5.2.4 Backgrounds or Alternatives?
      5.2.5 Contrastive Themes
      5.2.6 Summary
   5.3 Focus Sensitivity of Modifiers
      5.3.1 Focus Particles
      5.3.2 Other Focus Sensitive Sub-Clausal Modifiers
      5.3.3 S-Modifying “Focus Particles”
      5.3.4 Focus Sensitivity of S-Modifying Adverbials
      5.3.5 Focusing S-Modifying Adverbials to Evoke Context
      5.3.6 Summary
   5.4 Implicatures
      5.4.1 Gricean Implicature
      5.4.2 Pragmatic and Semantic Presupposition
      5.4.3 Summary
   5.5 Using S-Modifying Adverbials to Convey Implicatures
      5.5.1 Presupposition
      5.5.2 Conversational Implicatures
      5.5.3 Interaction of Focus and Implicature
      5.5.4 Summary
   5.6 Other Contributions
      5.6.1 Discourse Structure
      5.6.2 Performatives
   5.7 Conclusion

6 Conclusion
   6.1 Summary
   6.2 Future Directions

Bibliography
List of Tables

2.1 Main Categories of [HH76]’s Relations between Propositions
2.2 Main Categories of [Lon83]’s Relations between Propositions
2.3 Main Categories of [Mar92]’s Relations between Propositions
2.4 Main Categories of [Hob90]’s Relations between Propositions
2.5 [Keh95]’s Cause-Effect Relations
2.6 [Keh95]’s Resemblance Relations
2.7 [GS86] Changes in Discourse Structure Indicated by Linguistic Expressions
2.8 Centering Theory Transitions
2.9 [Web91]’s Classification of Discourse Deictic Reference
2.10 Organizations of RST Relation Definitions
2.11 Evidence: RST Relation Definition
2.12 Volitional-Cause: RST Relation Definition
2.13 Elaboration: RST Relation Definition
2.14 [Ven67]’s Imperfect and Perfect Nominalizations
2.15 [Ven67]’s Loose and Narrow Containers
2.16 DICE: discourse relation definitions
2.17 DICE: Indefeasible axioms
2.18 DICE: Defeasible laws on world knowledge
2.19 DICE: Defeasible laws on discourse processes
2.20 DICE: Deduction rules
2.21 [Kno96]’s Features of Discourse Connectives
3.1 Non-Derived and Derived Adverbs
3.2 tgrep Results for S-Adjoined ADVP and PP in WSJ and Brown Corpora
3.3 Total S-Adjoined Adverbials in WSJ and Brown Corpora
3.4 [Ale97]’s Modification Types
3.5 [Ern84]’s Modification Types
3.6 [KP02]’s Modification Types
3.7 [Gre69]’s Syntactic Tests for Distinguishing VP and S Modification
3.8 Semantic Interpretations of [Ern84]’s Modification Types
3.9 Abstract Object Interpretations
3.10 Approximate Counts of Tokens and Types of some Internal PP arguments
3.11 PP Adverbials with Proper Noun or Year Internal Argument
3.12 PP Adverbial with Possessive Proper Noun Internal Argument
3.13 PP Adverbials with Pronominal Internal Argument
3.14 PP Adverbial with Possessive Pronoun
3.15 Approximate Counts of Tokens and Types of some Internal PP arguments
3.16 PP Adverbials with Definite Concrete Object Internal Argument
3.17 PP Adverbials with Definite AO Internal Argument
3.18 PP Adverbials with Demonstrative Concrete Object Internal Argument
3.19 PP Adverbials with Demonstrative AO Internal Arguments
3.20 Approximate Counts of Tokens and Types of some Internal PP arguments
3.21 PP Adverbial with Indefinite Concrete Object Internal Argument
3.22 PP Adverbial with Indefinite AO Internal Argument
3.23 PP Adverbial with Relational Indefinite AO Internal Argument
3.24 PP Adverbials with Generic or Plural Concrete Object Internal Arguments
3.25 PP Adverbials with Generic or Plural AO Internal Arguments
3.26 PP Adverbials with Relational Generic AO Internal Arguments
3.27 Approximate Counts of Tokens and Types of some Internal Argument Modifiers
3.28 Binary Definite Internal Argument with Overt Argument
3.29 Binary Indefinite Internal Argument with Overt Argument
3.30 Binary Generic or Plural Internal Argument with Overt Argument
3.31 Internal Argument with a Spatio-Temporal ADJ
3.32 Internal Argument with Referential Adjective
3.33 Internal Argument with Non-Referential Adjective
3.34 Internal Argument with Determiner and Non-Referential Adjective
3.35 Internal Argument with Ordinal Adjective
3.36 Internal Argument with Alternative Phrase
3.37 Internal Argument with Determiner and Alternative Phrase
3.38 Internal Argument with Comparative/Superlative Adjective
3.39 Internal Argument with Other Set-Evoking Adjectives
3.40 Approximate Counts of Tokens and Types of some Internal PP arguments
3.41 PP Adverbial with Reduced Clause Internal Argument
3.42 PP Adverbial Summary
3.43 Approximate Counts of Tokens and Types of some ADVP Adverbials
3.44 Mis-Tagged PP Adverbials
3.45 PP-like ADVP Adverbials with Overt Arguments
3.46 PP-like ADVP Adverbials with Hidden Argument
3.47 Relational ADJP with Overt Argument
3.48 Relational ADVP Adverbials with Hidden Argument
3.49 Approximate Counts of Tokens and Types of some ADVP Adverbials
3.50 ADVP Adverbial Conjunctions
3.51 Mis-Tagged PP Adverbial Constructions
3.52 Spatio-Temporal ADVP Adverbials
3.53 Another Spatio-Temporal ADVP Adverbial
3.54 Other Spatio-Temporal ADVP Adverbials
3.55 Spatio-Temporal Manner ADVP Adverbials
3.56 Deictic ADVP Adverbials
3.57 Deictic-Derived ADVP Adverbials
3.58 Approximate Counts of Tokens and Types of some ADVP Adverbials
3.59 Comparative Adverb Modifiers
3.60 Comparative ADVP Adverbials
3.61 Specified Comparative ADVP Adverbials
3.62 Comparative-Derived ADVP Adverbials
3.63 Comparative Constructions
3.64 Approximate Counts of Tokens and Types of some ADVP Adverbials
3.65 Ordinal ADVP Adverbials
3.66 Ordinal -ly ADVP Adverbials
3.67 Frequency ADVP Adverbials
3.68 Epistemic ADVP Adverbials
3.69 Domain ADVP Adverbials
3.70 Non-Specific Set-Evoking ADVP Adverbials
3.71 Multiply-Featured ADVP Adverbials
3.72 More Multiply-Featured ADVP Adverbials
3.73 Evaluative or Agent-Oriented ADVP Adverbials
3.74 ADVP Adverbial Summary
4.1 Nine Connectives Studied in [CFM+02]
4.2 Annotation Tags for the Nine Connectives Studied in [CFM+02]
4.3 LOC Tag Values
4.4 Inter-Annotator Agreement
5.1 ADVP/PP Adverbials with Focus Particle Modifier
5.2 Higher-Ordered Epistemic Adverbials Yielding Implicatures
5.3 Lower-Ordered Epistemic Adverbials Yielding Implicatures
5.4 Lower-Ordered Quantificational Adverbials Yielding Implicatures
5.5 Higher-Ordered Quantificational Adverbials Yielding Implicatures
List of Figures

2.1 [HH76]’s Types of Cohesion
2.2 Illustration of [GS86]’s Discourse Model
2.3 Illustration of [Web91]’s Attachment Operation
2.4 Illustration of [Web91]’s Adjunction Operation
2.5 LDM Right-Attachment Operation
2.6 LDM Tree Structure for Example (2.58)
2.7 LDM Tree Structure for Example (2.59)
2.8 RST Schemas
2.9 Evidence Relation
2.10 RST Condition and Motivation Relations
2.11 [KOOM01]’s Discourse Model
2.12 [Ash93]’s Classification of Abstract Objects
2.13 Sample DRSs
2.14 Sample SDRS
2.15 Elementary DLTAG Trees
3.1 S-Adjoining PP and ADVP
3.2 S-Adjoined Discourse and Clausal Adverbials
3.3 S-Adjoined ADVP and PP Adverbials in Penn Treebank I
3.4 [Ash93]’s Classification of Abstract Objects
3.5 Syntactic Structure of S-Modifying PP Adverbials
3.6 Syntactic Structure of S-Modifying ADVP Adverbials
4.1 Elementary LTAG Trees
4.2 LTAG Derived Tree after Substitutions
4.3 LTAG Derived Tree After Adjunction
4.4 LTAG Derivation Tree
4.5 Semantic Representations of …
4.6 Semantic Representations of John walks
4.7 Semantic Representations of …
4.8 Semantic Representations of John often walks Fido
4.9 Simplified Semantic Representation of …
4.10 The Elementary Tree for slide
4.11 The Syntax-Semantic Interface for …
4.12 DLTAG Initial Trees for Subordinating Conjunctions
4.13 DLTAG Auxiliary Tree for and and …
4.14 DLTAG Auxiliary Trees for Discourse Adverbials
4.15 DLTAG Initial Tree for Adverbial Constructions
4.16 DLTAG Derived Tree for Example (4.18)
4.17 DLTAG Derivation Tree for Example (4.18)
4.18 Illustration of [Web91]’s Attachment and Adjunction Operations
4.19 Webber’s Adjunction at a Leaf
4.20 Derived Tree for Example (2.41)
4.21 Substitution in FTAG
4.22 Adjunction in FTAG
4.23 LDM Elementary DCU
4.24 DTAG Elementary DCU
4.25 LDM List Rule
4.26 DTAG R Tree
4.27 [Gar97b]’s …-Substitution
4.28 [Gar97b]’s …-Adjunction
4.29 First DTAG Derivation of Example (4.21)
4.30 Second DTAG Derivation of Example (4.21)
4.31 Step One in the Second DTAG Derivation of Example (4.21)
4.32 Step Two in the Second DTAG Derivation of Example (4.21)
4.33 Step Three in the Second DTAG Derivation of Example (4.21)
4.34 Step Four in the Second DTAG Derivation of Example (4.21)
4.35 Step Five in the Second DTAG Derivation of Example (4.21)
4.36 Step Six in the Second DTAG Derivation of Example (4.21)
4.37 Step Seven in the Second DTAG Derivation of Example (4.21)
4.38 DLTAG Elementary Trees for Example (4.22)
4.39 Semantic Representation of …
4.40 DLTAG Derived and Derivation Trees and Semantic Representation for (4.22)
4.41 DLTAG Elementary Trees for Example (4.24)
4.42 Semantic Representation of …
4.43 DLTAG Derived and Derivation Trees and Semantic Representation for (4.24)
4.44 DLTAG Derived and Derivation Trees and Semantic Representation for (4.26)
4.45 DLTAG Derived and Derivation Trees and Semantic Representation for (4.28)
4.46 LTAG Derived and Derivation Trees for Example (4.30)
4.47 Quantifiers in French
4.48 [Kal02]’s …-Edges for Quantifiers in French
4.49 [Kal02]’s …-Derivation Graph
4.50 DLTAG Derived Tree and …-Derivation Graph for Example (4.28)
4.51 Additional Semantic Representation for (4.28) due to …-Derivation Graph
4.52 DLTAG Derived Tree and …-Derivation Graph for Example (4.32)
4.53 Flexible Composition in LTAG
4.54 DLTAG Derived and Derivation Trees and Semantic Representation for (4.28)
4.55 DLTAG Elementary Trees for Example (4.33)
4.56 Semantic Representation of …
4.57 DLTAG Derived and Derivation Trees, …-Derivation Graph and Semantics for (4.33)
4.58 DLTAG Elementary Trees for Example (4.35)
4.59 Semantic Representation of …
4.60 DLTAG Derived and Derivation Trees, …-Derivation Graph, and Semantics for (4.35)
4.61 DLTAG Elementary Trees for Example (4.37)
4.62 Semantic Representation of …
4.63 DLTAG Derived and Derivation Trees, …-Derivation Graph and Semantics for (4.37)
4.64 DLTAG Elementary Tree and Semantic Representation for … in (4.39)
4.65 DLTAG Derived and Derivation Trees, …-Derivation Graph and Semantics for (4.39)
4.66 DLTAG Derivation Tree and …-Derivation Graph for Example (4.41)
4.67 Elementary LTAG Trees and Semantic Representations of …
4.68 Elementary DLTAG Trees for Example for example
4.69 Derivation Trees for PP Discourse Adverbials with Quantified Internal Arguments
4.70 DLTAG Derived and Derivation Trees for (4.32)
4.71 Another Representation of the R Tree in Figure 4.26
5.1 Gricean Framework
Chapter 1

Introduction

1.1 The Problem

Traditionally in linguistic theory, syntax and semantics provide mechanisms to build the interpretation of a sentence from its parts; although it is non-controversial that a sequence of sentences such as those found in (1.1) - (1.3) also has an interpretation, the mechanisms which produce it are not defined.
(1.1) There is a high degree of stress level from the need to compete and succeed in this ‘me
generation’. As a result, people have become more self-centered over time.
(1.2) John has finally been rewarded for his great talent. Specifically, he just won a gold medal
for mogul-skiing in the Olympics.
(1.3) The company interviewed everyone who applied for the position. In this way, they considered all their options.
Most discourse theories go beyond sentence-level linguistic theory to explain how such sequences are put together to create a discourse interpretation. These theories evoke the notion of abstract discourse relations between discourse units, provide lists of these relations of varying length and organization, and propose discourse models constructed from these relations and units. Some of these models produce compositional accounts of discourse structure and/or interpretation ([Pol96, Ash93, MT88, GS86]); others produce accounts of how relations between units are inferred ([Keh95, HSAM93, LA93]). The majority make use of the presence of cue phrases, or discourse connectives, treating them as “signals” of the presence of particular discourse relations.
In (1.1), for example, the relevant cue phrase is the adverbial as a result, and the discourse relation it
signals is frequently classified as a result relation. Along with certain adverbials, the subordinating
and coordinating conjunctions are also classified as discourse connectives in these theories.

 

 

DLTAG [FMP+01, CFM+02, WJSK03, WKJ99, WJSK99, WJ98] is a theory that bridges the
gap between clause-level and discourse-level theories, providing a model of a rich intermediate level
between clause structure and high-level discourse structure, namely, the syntax and semantics associated with discourse connectives. In DLTAG, discourse connectives are predicates, akin to verbs
at the clause level, except that they take discourse units as arguments. DLTAG proposes to build the
interpretation of these predicates directly on top of the clause, using the same syntactic and semantic mechanisms that are already used to build the clause. Based on considerations of computational
economy and behavioral evidence, DLTAG argues that both arguments of subordinating and coordinating conjunctions can be represented structurally, but only one argument of adverbial discourse
connectives comes structurally; the other argument must be resolved anaphorically. However, while
DLTAG has shown that certain adverbials function as discourse connectives, it has not isolated the
subset of adverbials which function as discourse connectives from the set of all adverbials.
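To make the predicate view concrete, here is a minimal sketch in Python of the asymmetry just described; the class names, fields, and the naive "most recent unit" strategy are illustrative assumptions for this sketch, not part of DLTAG itself. A conjunction receives both arguments structurally, while an adverbial connective such as as a result in (1.1) receives one argument structurally and must resolve the other anaphorically against the prior discourse.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class DiscourseUnit:
        """An abstract-object interpretation of a clause or larger discourse segment."""
        text: str

    @dataclass
    class Connective:
        """A discourse connective viewed as a predicate over discourse units."""
        lexeme: str                         # e.g. "because", "and", "as a result"
        structural: bool                    # True if both arguments are supplied by the structure
        arg1: Optional[DiscourseUnit] = None
        arg2: Optional[DiscourseUnit] = None

        def resolve_anaphoric_arg(self, context: List[DiscourseUnit]) -> None:
            # Adverbial connectives get only one argument structurally; the other must be
            # found in the prior discourse. Taking the most recent unit is a placeholder
            # strategy for illustration, not DLTAG's actual resolution procedure.
            if not self.structural and self.arg1 is None and context:
                self.arg1 = context[-1]

    # Example (1.1): "as a result" combines structurally with its host clause (arg2)
    # and resolves its other argument (arg1) against the preceding discourse.
    context = [DiscourseUnit("There is a high degree of stress ... in this 'me generation'.")]
    as_a_result = Connective("as a result", structural=False,
                             arg2=DiscourseUnit("people have become more self-centered over time"))
    as_a_result.resolve_anaphoric_arg(context)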
The set of all adverbials is a large set; in fact, it is compositional and therefore infinite [Kno96].
Because it is thus not possible to list all of the adverbials that function as discourse connectives,
in this thesis we investigate how semantics and pragmatics cause an adverbial to function as a
discourse connective.

1.2 Contributions of the Thesis

This thesis extends the DLTAG model, investigating the semantics and pragmatics underlying the
behavioral anaphoricity of adverbial discourse connectives. We present a corpus-based analysis of
over 13,000 S-modifying adverb (ADVP) and preposition (PP) adverbials in the Penn Treebank Corpus [PT]. We show that certain adverbials, which we call discourse adverbials, can be distinguished
semantically from other adverbials, which we call clausal adverbials. Some clausal adverbials from
our corpus are shown in (1.4), and some discourse adverbials from our corpus are shown in (1.5).
(1.4) Probably/In my city/In truth, women take care of the household finances.
(1.5) As a result/Specifically/In this way, women take care of the household finances.
The most frequently occurring clausal and discourse adverbials have both been classified in the
literature as discourse connectives, due to the fact that they seem to be interpretable only with respect to context. In this thesis we will show that while syntax cannot distinguish these two types
of adverbials, their predicate argument structure and interpretation shows that only discourse adverbials function semantically as discourse connectives.
The syntax and semantics of most discourse adverbials has not been well studied. Generally,
only a small subset (those that occur frequently) have been addressed at all. At the clause level these
are usually designated as the domain of discourse level research, and at the discourse level the focus
is frequently on the discourse relation they “signal”. Our investigation sheds light on the space of
relations imparted by a much wider variety of adverbials.
In our analysis we draw on clause-level research into the semantics of adverbials and other
sub-clausal constituents. We use prior research on discourse deixis to study both the semantic
nature of the arguments of adverbials and the syntactic constituents from which they can be derived.
We present a wide variety of discourse and clausal adverbials. We show that discourse adverbials
function semantically as discourse connectives because they contain semantic arguments that may or
may not be syntactically overt, but whose interpretation requires an abstract object interpretation of
a contextual constituent. We show that clausal adverbials do not function semantically as discourse
connectives because the interpretations of their semantic arguments do not require the abstract object
interpretation of a contextual constituent, although they may make anaphoric reference to other
contextual interpretations. We further show how the predicate argument structure and interpretation
of discourse adverbials can be formalized and incorporated into the syntax of the DLTAG model.
It is not only due to their predicate argument structure and interpretation that adverbials have
been classified as discourse connectives, however. We encounter in our corpus a number of adverbials that have been treated as discourse connectives despite the fact that their semantics does not
require abstract object interpretations in the discourse or spatio-temporal context. We explore other
explanations for how these adverbials evoke discourse context during their interpretation; in particular, we investigate the interaction of their semantics with other semantic and pragmatic devices.
We show how focus effects in S-modifying adverbials contribute to discourse coherence, and we
also show how S-modifying adverbials can be used to convey Gricean implicatures.
While the semantics and pragmatics discussed here will not provide a complete account of the
discourse functions of all adverbials, it will show that the analysis of adverbials can be viewed
modularly: certain functions can be attributed to the semantic domain, others to the pragmatic
domain, and still others to larger issues of discourse structure.
There are numerous benefits of this analysis. First, it is economical, making use of pre-existing
clause-level mechanisms to build adverbial semantics at the discourse level, thereby reducing the
load on inference to account for discourse interpretation (cf. [Keh95]). Secondly, it provides a theoretical grounding for [Kno96]’s empirical approach to studying the lexical semantics of discourse
connectives, in the process showing that additional adverbials should be included in the class he
isolates based on intuition alone, and that some of those included there don’t really belong. Thirdly,
it expands an existing model of discourse which argues that discourse structure can be built directly
on top of clause structure and thereby bridges the gap between high-level discourse theory and
clause-level theory.

1.3 Thesis Outline

In this chapter, we have given a brief overview of the analyses that we present in the remainder of
this thesis. The rest of this thesis is organized as follows:
In Chapter 2 we survey a variety of existing discourse theories and examine the similarities
and differences between each theory. We discuss how, taken together, each theory serves to distinguish different modules required to build a complete interpretation of discourse. We then present an overview of DLTAG as another important module, one that alone among these theories is capable of bridging the gap between discourse-level theories and clause-level theories by treating discourse connectives as predicates and using the same syntax and semantics that build the clause to build an intermediate level
of discourse.
In Chapter 3 we investigate the semantic mechanisms that cause some adverbials to function
as discourse connectives. We discuss prior research into the semantics of adverbials and present
an analysis of the S-modifying adverbials in the Penn Treebank corpus that distinguishes those
adverbials that function as discourse connectives according to their predicate-argument structure
and interpretation.
In Chapter 4 we show how the semantics of adverbials discussed in Chapter 3 can be incorporated into a syntax-semantic interface for DLTAG. We discuss syntax-semantic interfaces that have
been proposed for clause-level grammars and related discourse grammars and show how these interfaces can be extended to DLTAG. We further discuss the DLTAG annotation project whose goal
is to annotate the arguments of all discourse connectives, both structural and anaphoric.
In Chapter 5 we continue our analysis of how adverbials function as discourse connectives, investigating other ways, apart from their predicate argument structure and argument resolution, in
which an adverbial can be used to contribute to discourse coherence.
We conclude in Chapter 6 and discuss directions for future work.

Chapter 2

Anaphora and Discourse Models

2.1 Introduction

Discourse models explain how sequences of utterances are put together to create a text. Building
a coherent discourse involves more than just concatenating random utterances; in addition, the
contributions of each utterance to the surrounding context must be established. Two major areas
of investigation have been distinguished. The first concerns how sub-clausal constituents obtain
their meaning through relationships to entities previously evoked in a discourse. Such constituents
include NPs, such as in (2.1) where the personal pronoun he refers to an entity mentioned in the
prior sentence, (2.2) where the beer refers to one of the elements of the picnic in the prior sentence,
and (2.3) where the demonstrative pronoun that refers to the interpretation of the prior sentence.
(2.1) Bill talked to Phillip. He got really upset.
(2.2) Bill and Mary took a picnic to the park. The beer was warm.
(2.3) Bill talked to Phillip. That made me mad.
Other examples include VPs, such as in (2.4) where the elided VP (∅) must be determined from the meaning of the prior sentence, and in (2.5) where the use of simple past tense in both sentences creates an impression of forward progression in time.

(2.4) Bill talked to Phillip. I did ∅ too.

(2.5) Bill entered the room. He began to talk.
The second major area of investigation concerns how clausal (and super-clausal) constituents
obtain their meaning through relationships to clausal constituents in the surrounding context. To
illustrate the nature of these investigations, consider the discourse in (2.6).
(2.6 a) Last summer, the Keatings traveled in Zimbabwe.
(2.6 b) Pat studied flora in the Chimanimani mountains.
In the absence of any additional context, one reader might interpret (2.6), and/or the writer’s
intention in producing (2.6), as a description of what the Keatings on the one hand, and Pat on
the other, did the prior summer. Another reader might interpret it as contrasting what the two
participants did the prior summer, e.g. the Keatings (just) traveled, whereas Pat studied.
Interactions between these two areas of investigation have also been studied. For example,
suppose (2.6) is preceded and followed by other sentences, as in (2.7).
(2.7 a) Pat Keating married Maria Lopez last spring.
(2.7 b) Last summer, the Keatings traveled in Zimbabwe.
(2.7 c) Pat studied flora in the Chimanimani mountains.
(2.7 d) That was a spectacular celebration.
Due to the addition of (a), the reader will likely determine that Pat is a member of the Keating
family. S/he might thus interpret Pat’s studying as an elaboration of, or even as a cause of, the
Keatings’ traveling, or s/he might simply interpret Pat’s studying as occurring after the Keatings’
traveling. World knowledge or inference may yield the belief that the Chimanimani mountains are
located in Zimbabwe. Note that the demonstrative reference in (d) is hard to resolve to the marriage
described in (a) unless we move it to a position immediately following (a) in the discourse.
A complete model of discourse must account for all of these relationships, and their interactions.
In particular, a discourse model must characterize:

• the properties of the constituents that are being related
• the type of relationships that can exist between these constituents
• the mechanisms underlying these relationships
• the constraints on the application of these mechanisms
In the following sections we will survey a variety of existing discourse models in terms of

their coverage of the above characterizations. By taking a roughly chronological approach, and
examining the benefits and limitations of each subsequent model in terms of how it incorporates
those prior to it, these characterizations will be fleshed out, and it will be shown that, taken together,
each theory serves to distinguish different modules required to build a complete interpretation of
discourse. We then introduce DLTAG as an important module capable of bridging the gap between
discourse level theories and clause level theories.

2.2 Descriptive Theories of Discourse Coherence

2.2.1 An Early Encompassing Description

[HH76] proposed early on that a single underlying factor, which they call cohesion, unifies sequences of sentences to create a discourse. Cohesion is defined as the “semantic relations between successive linguistic devices in a text, whereby the interpretation of one presupposes the interpretation of the other in the sense that it cannot be effectively decoded except by recourse to it” ([HH76], p. 4).¹ They distinguish five classes of cohesion, shown in Figure 2.1.

¹This use of the term “presupposition” is not equivalent to semantic presupposition; the latter depends on truth valuation and the former does not. Both [HH76] and [Sil76] define “discourse”, or “pragmatic”, presupposition as the relationship of a linguistic form to its prior context; Silverstein adds that a pragmatic presupposition is what a language user must know about the context of use of a linguistic signal in order to interpret it [Sil76, 1]. See Chapter 5 for further discussion of presupposition.

Figure 2.1: [HH76]’s Types of Cohesion

Reference is a semantic relation achieved by the use of a cataphoric or anaphoric reference item to signal that the appropriate instantial meaning be supplied. Personal reference (signaled by personal pronouns and determiners, e.g. I, my), demonstrative reference (signaled by demonstratives, e.g. this, that), and comparative reference (signaled by certain nominal modifiers, e.g. same, and verbal adjuncts, e.g. identically) are distinguished, and exemplified in italics in (2.8).
(2.8) John saw a black cat, but that doesn’t mean it was the same black cat he saw before.
Lexical cohesion is a semantic relation achieved by the successive use of vocabulary items
referring to the same entity or event, including definite descriptions, repetitions, synonyms, superordinates, general nouns, and collocation. Every lexical item can be lexically cohesive; this function
is established by reference to the text. In [HH76]’s example, shown in (2.9), there are definite
descriptions: a pie...the pie, repetitions: pie...pie, general nouns and synonyms: a pie...a dainty dish
and super-ordinates: blackbirds...birds.
(2.9) Sing a song of sixpence, a pocket full of rye,
      Four-and-twenty blackbirds baked in a pie,
      When the pie was opened, the birds began to sing,
      Wasn’t that a dainty dish to set before a king?

Substitution and Ellipsis are grammatical relations, which can be nominal, verbal, or clausal.
The substitute must be of the same grammatical class as the item for which it substitutes, and ellipsis
is substitution by zero ([HH76, 89]). In (2.10), nominal one is a substitute, and there is ellipsis of
the embedded predicate in the final clause.
(2.10) Mary covets two things. Her money will be the first one to leave her. Her husband will
be the next 0.
Conjunction is a semantic relation usually achieved by the use of conjunctive elements, whose
meaning presupposes the presence of other propositions in the discourse and specifies the way they
connect to the proposition that follows. Italicized examples are shown in (2.11). [HH76] distinguish four main types of relations between propositions, shown in Table 2.1. These relations are
further subdivided, and an orthogonal distinction is made between external and internal relations;
the former hold between elements in the world (referred to in the text), and the latter between text
elements themselves, such as speech acts.
(2.11) Because it snowed heavily, the battle was not fought, so the soldiers went home.
Table 2.1: Main Categories of [HH76]’s Relations between Propositions

ADDITIVE: complex, apposition, comparison
ADVERSATIVE: contrastive, correction, dismissal
CAUSAL: specific, conditional, respective
TEMPORAL: sequential, simultaneous, conclusive, correlative

2.2.2 Alternative Descriptions of Propositional Relations

In comparison to [HH76], [Lon83]’s study of discourse coherence distinguishes between predications expressed by clauses, which he models with predicate calculus, and relations on the predications expressed by clauses, which he characterizes into two main types, shown in Table 2.2:
the “basic” operations of propositional calculus, supplemented by temporal relations, and a set of
elaborative relations. These relations are further subdivided, and an orthogonal distinction is made
between non-frustrated and frustrated relations, the latter being the case when an expected relation
is not satisfied by the assertions in the text. Unlike [HH76], [Lon83] does not emphasize a correlation between these relations and surface signals in the text; rather, they are meant to categorize the
“deep” relations underlying the surface structure of discourse.
Table 2.2: Main Categories of [Lon83]’s Relations between Propositions

BASIC: conjoining (∧), alternation (∨), implication (→), temporal
ELABORATIVE: paraphrase, illustration, deixis, attribution

More recently, [Mar92] has proposed an alternative set of relations between propositions, shown
in Table 2.3, in which four main types are distinguished. These relations are further subdivided, and
orthogonal distinctions are made between internal and external relations, and paratactic, hypotactic,
and neutral relations. The first dimension is taken from [HH76], and the latter dimension roughly
corresponds to coordinating, subordinating, and variably coordinating and subordinating relations,
respectively. Like [HH76], Martin uses explicit signals to derive his set of discourse relations,
but like [Lon83], he defends the claim that they represent “deep” relations underlying the surface
structure. He combines the two approaches by using an insertion test: a “deep” relation exists at
a place in the text if an explicit signal can be inserted there. Nevertheless, his set is different from
both [HH76] and [Lon83].
Table 2.3: Main Categories of [Mar92]’s Relations between Propositions

ADDITIVE: addition, alternation
COMPARATIVE: similarity, contrast
TEMPORAL: simultaneous, successive
CONSEQUENTIAL: purpose, concession, condition, manner, consequence

[SSN93] take a psychological approach, identifying the basic cognitive resources underlying
the production of discourse relations. Four cognitive primitives are identified, according to which
discourse relations can be classified, which they exemplify using explicit cue phrases. [SSN93] cite
a number of psychological experiments to support these features.

• basic operation: Each relation creates either an additive (and) or a causal (because) connection between the related constituents.

• source of coherence: Each relation creates either semantic or pragmatic coherence; in the first case the propositional content of the constituents is related, in the second case the illocutionary force of the constituents is related.

• order of segments: Causal relations may have the causing segment to the left or the right of the result.

• polarity: A relation is negative if it links the content of one segment to the negation of the content of the other segment (although), and positive otherwise.
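Read as a feature system, the four primitives amount to a small feature vector per relation. The sketch below is only an illustration of that reading; the field names and the sample values are ours, not [SSN93]’s.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RelationFeatures:
        """[SSN93]'s four cognitive primitives, encoded as features of a relation."""
        basic_operation: str       # "additive" (cf. and) or "causal" (cf. because)
        source_of_coherence: str   # "semantic" (propositional content) or "pragmatic" (illocutionary force)
        order: str                 # for causal relations: "cause-first" or "cause-second"
        polarity: str              # "positive" or "negative" (cf. although)

    # A purely illustrative entry: a semantic, positive, cause-first causal relation.
    example_relation = RelationFeatures(
        basic_operation="causal",
        source_of_coherence="semantic",
        order="cause-first",
        polarity="positive",
    )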
[Hob90] takes a computational approach, identifying relations between propositions according
to the kind of inference that is required to identify them. The main categories are shown in Table 2.4.
Respectively, these categories distinguish inference about causality between events in the world,
inference about the speaker’s goals, inference about what the hearer already knows, and inference
that a hearer is expected to be able to make about relationships between objects and predicates in
the world. [Hob90] suggests that inference should be viewed as a recursive mechanism; when two
propositions are linked by a relation, they form a unit which itself can be related to other units,
thereby building an interpretation of the discourse as a whole.
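A small sketch of this recursive view (illustrative Python, not [Hob90]’s own formalism): once a relation links two propositions, the linked pair is itself a unit that can be related again, so an interpretation of the whole discourse is built bottom-up.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Proposition:
        """The interpretation of a single clause."""
        content: str

    @dataclass
    class RelatedUnit:
        """Two units linked by a relation; the result can itself be related further."""
        relation: str                              # e.g. "explanation", "parallel"
        left: Union[Proposition, "RelatedUnit"]
        right: Union[Proposition, "RelatedUnit"]

    # Bottom-up construction over three clauses: relate the first two,
    # then relate the resulting unit to the third.
    inner = RelatedUnit("explanation", Proposition("clause one"), Proposition("clause two"))
    whole = RelatedUnit("parallel", inner, Proposition("clause three"))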
Table 2.4: Main Categories of [Hob90]’s Relations between Propositions

Occasion: cause, enablement
Evaluation
Ground-Figure: background, explanation
Explanation: parallel, generalization, exemplification, contrast

2.2.3 Discourse Relations as Constraints

[Keh95] reformulates [Hob90]’s relations between propositions into three main types of more general “discourse relations”: Contiguity, Cause-Effect, and Resemblance, which he defines in terms of constraints on both clausal and sub-clausal properties of discourse units S1 and S2. He then specifies how an inference mechanism can be used to derive Cause-Effect and Resemblance relations, and shows how they interact with sub-clausal coherence. Like [HH76] and [Mar92], he correlates these relations with the presence of cue phrases, suggesting that they could be treated as bearing semantic features that interact with the discourse inference process.

Narration is the only Contiguity relation Kehler defines. Exemplified in (2.12), the constraint on its derivation is that a change of state for a system of entities be inferred from S2, where the initial state for this system is inferred from S1.

(2.12) Bill picked up the speech. He began to read.

Kehler notes that the full set of constraints governing the recognition of a Narration relation is not well understood, but he refutes [HH76, Lon83]’s treatments, which equate it with temporal progression, citing [Hob90]’s example (2.13), whose interpretation requires the additional inference that Bush is on the train, or that the train arrival is somehow relevant to him.

(2.13) At 5:00 a train arrived in Chicago. At 6:00 George Bush held a press conference.
Kehler distinguishes four types of Cause-Effect relations, all of which must satisfy the constraint that a presupposed path of implication be inferred between a proposition P from S1 and a proposition Q from S2. Each type and the implication it requires is shown in Table 2.5, along with correlated cue phrases.
Table 2.5: [Keh95]’s Cause-Effect Relations

Relation               Presupposition   Conjunctions
Result                 P → Q            as a result, therefore, and
Explanation            Q → P            because
Violated Expectation   P → ¬Q           but
Denial of Preventer    Q → ¬P           despite, even though
To take two examples, a Result relation is inferred when Q is recognized as normally following from P. In (2.14), being a politician normally implies being dishonest.

(2.14) Bill is a politician, and therefore he’s dishonest.

Denial of Preventer relations are inferred when ¬P is recognized as normally following from Q (example (2.15)).

(2.15) Bill is honest even though he’s a politician.
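One way to operationalize Table 2.5 is as a simple lookup from cue phrases to the implication each relation presupposes. The sketch below is our own illustration, not Kehler’s implementation; the string notation for the presupposed implications is an assumption made for readability.

```python
# Minimal sketch: Kehler's Cause-Effect relations as data (cf. Table 2.5).
# The representation and the lookup helper are our own illustration.

CAUSE_EFFECT = {
    "Result":               {"presupposes": "P -> Q",     "cues": ["as a result", "therefore", "and"]},
    "Explanation":          {"presupposes": "Q -> P",     "cues": ["because"]},
    "Violated Expectation": {"presupposes": "P -> not Q", "cues": ["but"]},
    "Denial of Preventer":  {"presupposes": "Q -> not P", "cues": ["despite", "even though"]},
}

def candidate_relations(cue: str):
    """Return the Cause-Effect relations whose correlated cues include `cue`."""
    return [name for name, info in CAUSE_EFFECT.items() if cue in info["cues"]]

if __name__ == "__main__":
    print(candidate_relations("therefore"))   # ['Result']
    print(candidate_relations("but"))         # ['Violated Expectation']
```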
Kehler distinguishes six types of Resemblance relations, all having the constraint that a common or contrasting relation p be inferred between S1 and S2, such that p subsumes p1 and p2, where p1 applies over a set of entities a1,...,an from S1, and p2 applies over a set of entities b1,...,bn from S2. Certain Resemblance relations also have the constraint that a property vector q be inferred, such that q consists of common or contrasting properties qi, which hold for ai and bi, for all i. Table 2.6 provides the constraints for each Resemblance relation and its correlated cue phrase.

For example, Exemplification holds between a general statement followed by an example of the generalization. In (2.16), a1 and b1 correspond to the meanings of young aspiring politicians and John2; while p1 and p2 correspond to the meanings of support and campaign for respectively3. Generalization is identical to Exemplification, except that the order of the clauses is reversed.

(2.16) Young aspiring politicians often support their party’s presidential candidate. For instance, John campaigned hard for Clinton in 1992.

2
Kehler notes that Elaboration relations are a limiting case of Parallel relations, where the similar entities ai and bi are identical.
3
Although not discussed by Kehler, the subsuming property p is equatable with p1, and p2 can be recognized as a member of p1.
Table 2.6: [Keh95]’s Resemblance Relations

Relation          Constraints                       Conjunctions
Elaboration       p1 = p2, ai = bi                  in other words
Exemplification   p1 = p2, bi ⊂ ai or bi ∈ ai       for example
Generalization    p1 = p2, ai ⊂ bi or ai ∈ bi       in general
Parallel          p1 = p2, qi(ai) and qi(bi)        and
Contrast (i)      p1 = ¬p2, qi(ai) and qi(bi)       but
Contrast (ii)     p1 = p2, qi(ai) and ¬qi(bi)       but
Parallel relations require the relations expressed by the sentences and the corresponding entities to be recognized as sharing a common property. In (2.17), p1 and p2 correspond to the meanings of organized rallies for and distributed pamphlets for respectively; p corresponds to the meaning of do something to support. a1 and b1 correspond to the meanings of John and Fred, which share the common property q1 that they are people relevant to the conversation. Contrast relations require either the relations expressed by the sentences (example (2.18)) or the corresponding entities (example (2.19)) to be recognized as contrasting.

(2.17) John organized rallies for Clinton, and Fred distributed pamphlets for him.

(2.18) John supported Clinton, but Mary opposed him.

(2.19) John supported Clinton, but Mary supported Bush.
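The Parallel and Contrast (ii) checks from Table 2.6 can be sketched schematically as follows. This is our own simplification, not Kehler’s system: clause meanings are hand-built relation/argument tuples, the subsuming relation p is collapsed into a shared relation name, and shared_property is a toy stand-in for inferring a common property qi.

```python
# Toy sketch of Kehler's Parallel and Contrast checks (cf. Table 2.6).
# Clause meanings are hand-built tuples; shared_property is purely illustrative.

def shared_property(a: str, b: str) -> bool:
    """Stand-in for inferring a common property q_i holding of a_i and b_i."""
    people = {"John", "Fred", "Mary", "Bill"}
    return a in people and b in people

def parallel(p1, args1, p2, args2) -> bool:
    """p1 = p2 (here: a shared relation name) and q_i(a_i) and q_i(b_i) for all i."""
    return p1 == p2 and all(shared_property(a, b) for a, b in zip(args1, args2))

def contrast_on_entities(p1, args1, p2, args2) -> bool:
    """Contrast (ii): p1 = p2 but some aligned entities contrast (here: differ)."""
    return p1 == p2 and any(a != b for a, b in zip(args1, args2))

if __name__ == "__main__":
    # (2.17): both clauses are taken to instantiate a common relation "support".
    print(parallel("support", ["John", "Clinton"], "support", ["Fred", "Clinton"]))   # True
    # (2.19) John supported Clinton, but Mary supported Bush.
    print(contrast_on_entities("support", ["John", "Clinton"],
                               "support", ["Mary", "Bush"]))                          # True
```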

2.2.4 Abducing Discourse Relations by Applying the Constraints

Kehler’s constraints are formulated in terms of two operations from artificial intelligence: (1) identifying common ancestors of sets of objects with respect to a semantic hierarchy (Resemblance relations), and (2) computing implication relationships with respect to a knowledge base (Cause-Effect relations). Kehler distinguishes two steps in the discourse inference process:

(a) Identify and retrieve the arguments to the discourse relation

This step is achieved via the sentence interpretation; Kehler uses a formalism related to the version of Categorial Semantics described in [Per90], in which sentence interpretation results in a syntactic structure annotated with the semantic representation of each constituent. These semantic representations are arguments to the discourse relation, and are identified and retrieved via their corresponding syntactic nodes. Cause-Effect relations require only the identification of the sentential-level semantics for the clauses as a whole (i.e. P and Q). Resemblance relations require that the semantics of sub-clausal constituents be accessed, in order to identify p1 and p2, and the ai and bi.

(b) Apply the constraints of the relation to those arguments

The second step, Kehler suggests, could be achieved for Resemblance relations using comparison and generalization operations such as those proposed in [Hob90] and elsewhere, while [HSAM93]’s logical abduction interpretation method could be used to abduce the presupposition for the Cause-Effect relations. [HSAM93]’s method could further determine with what degree of plausibility the constraints are satisfied such that a particular relation holds.
In [HSAM93]’s framework, discourse relations between discourse units are proved (abduced) using world and domain knowledge, via a procedure of axiom application. Each discourse unit is a segment, as defined by axiom (2.20), where if w is a sentence containing a string of words and e is its assertion or topic, then it is a discourse segment.

(2.20) (∀ w, e) s(w, e) ⊃ Segment(w, e)

When a discourse relation holds between two segments, the resulting structure is also a segment, yielding a hierarchical discourse structure, as captured by axiom (2.21), where if w1 and w2 are segments whose assertions or topics are respectively e1 and e2, and a discourse (coherence) relation holds between the content of w1 and w2, then the string w1w2 is also a segment. The third argument of CoherenceRel is the assertion or topic of the composed segment, as determined by the definition of the discourse relation.

(2.21) (∀ w1, w2, e1, e2, e) Segment(w1, e1) ∧ Segment(w2, e2) ∧ CoherenceRel(e1, e2, e) ⊃ Segment(w1w2, e)

To interpret a discourse W, therefore, one must prove the expression:

(2.22) (∃ e) Segment(W, e)
We use as an example a variant of that found in [Keh95]:

(2.23) John is dishonest. He’s a politician.

To interpret this discourse, it must be proven to be a segment, by establishing the three premises in axiom (2.21). The first two premises are established by (2.20); it therefore remains to establish a discourse relation. Because Explanation is a defined discourse relation, we have the following axiom:

(2.24) (∀ e1, e2) Explanation(e1, e2) ⊃ CoherenceRel(e1, e2, e1)

In explanations, Hobbs notes, it is the first segment that is explained; therefore it is the dominant segment and its assertion, e1, will be the assertion of the composed segment, i.e. the third argument of CoherenceRel in (2.24). Recall that the constraints defined by Kehler on Explanation relations were that Q ⊃ P be presupposed; in Hobbs’ terms, the presupposition cause(e2, e1) must be abduced, as expressed by the following axiom:

(∀ e1, e2) Explanation(e1, e2) ⊃ cause(e2, e1)

In other words, to abduce an Explanation relation, what is asserted by e2 must be proven to be the cause of e1. In [HSAM93], utterances, like discourse relations, are interpreted by abducing their logical form, using axioms that are already in the knowledge base, are derivable from axioms in the knowledge base, or can be assumed at a cost corresponding to some measure of plausibility. Assume we have abduced the following axiom:

(2.25) (∀ x, e2) Politician(e2, x) ⊃ (∃ e1) Dishonest(e1, x) ∧ cause(e2, e1)

That is, if e2 is a state of x being a politician, then that will cause the state e1 of x being dishonest. The plausibility measure that is assigned to this formula will be inversely proportional to the cost assigned to an Explanation relation. Assuming (2.25) has a high plausibility in our knowledge base, then in the logical forms of the two sentences in (2.23), John (and he) can be identified with x, cause(e2, e1) proven thereby, and Explanation will be viewed as the likely relation between the two sentences4.

4
This explanation is from [Keh95, 18] and [Lag98]. See [HSAM93] for further details.
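A toy rendering of this abductive step is sketched below; it is our own illustration, loosely following the exposition above rather than [HSAM93]’s actual system. The eventuality labels e1 and e2, the hand-coded logical forms, and the single weighted causal axiom are all assumptions made for the example.

```python
# Toy weighted-abduction sketch for example (2.23): "John is dishonest. He's a politician."
# Eventuality labels and the single causal axiom are hand-coded assumptions.

# Logical forms of the two segments: (assertion label, predicate, argument).
SEGMENTS = [("e1", "Dishonest", "John"), ("e2", "Politician", "John")]

# Knowledge base: Politician(e2, x) plausibly causes Dishonest(e1, x).
# The cost plays the role of an assumability cost in weighted abduction.
CAUSAL_AXIOMS = [{"cause_pred": "Politician", "effect_pred": "Dishonest", "cost": 0.2}]

def abduce_explanation(segments, axioms):
    """Return (cost, proof) if cause(e2, e1) can be abduced, i.e. Explanation(e1, e2)."""
    (e1, pred1, x1), (e2, pred2, x2) = segments
    for ax in axioms:
        if ax["cause_pred"] == pred2 and ax["effect_pred"] == pred1 and x1 == x2:
            return ax["cost"], f"cause({e2}, {e1}) => Explanation({e1}, {e2})"
    return None

if __name__ == "__main__":
    print(abduce_explanation(SEGMENTS, CAUSAL_AXIOMS))
    # (0.2, 'cause(e2, e1) => Explanation(e1, e2)')
```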

2.2.5 Interaction of Discourse Inference and VP Ellipsis

[Keh95] shows how the discourse inference process for Resemblance relations interacts with VP ellipsis differently than the discourse inference process for Cause-Effect relations does, based on the different constraints they require to be satisfied by the clauses they are inferred between. In particular,
the arguments to Resemblance relations are sets of parallel entities and relations. Therefore, the
discourse inference process must access sub-clausal constituents in identifying and retrieving those
arguments, including the missing constituent in VP ellipsis. In contrast, the arguments to Cause-Effect relations are propositions. Therefore the inference process need not access sub-clausal constituents. This difference accounts for different felicity judgments concerning VP ellipsis displayed
across the two types of relations.
To exemplify his analysis, consider example (2.26), in which a Parallel relation can be inferred
between the two clauses:
(2.26) Bill became upset, and Hillary did too.
To establish a Parallel relation (see the Resemblance Relation definitions in Table 2.6), p(a1, a2, ...) must be inferred from S1, and p(b1, b2, ...) must be inferred from S2, where for some property vector q, qi(ai) and qi(bi) hold for all i. The identification of these arguments requires the elided material to be recovered and reconstructed in the elided VP (see [Keh95] for details of the process of reconstruction). Compare (2.26), however, with (2.27).
(2.27) *The problem was looked into by Billy, and Hillary did too.
Again, to establish a Parallel relation between the two clauses, the arguments must be identified,
requiring the elided material to be recovered and reconstructed in the elided VP. But in this case the recovery of was looked into creates a mismatch of syntactic form when it is reconstructed in the elided VP, resulting in an infelicitous discourse.
Such infelicity does not occur, however, when there is a Cause-Effect relation between two similar clauses, as in example (2.28):

(2.28) The problem was to have been looked into, but obviously nobody did.

Kehler argues that because establishing a Violation of Expectation relation (see the Cause-Effect Relation definitions in Table 2.5) requires only that a proposition P be inferred from S1, and a proposition Q be inferred from S2 (where normally P ⊃ ¬Q), the elided VP need not be reconstructed in the syntax, but can be recovered through anaphora resolution. The result is that the discourse is felicitous5.

5
Kehler does not address the fact that (2.27) is infelicitous with a Cause-Effect relation, e.g. The problem was looked into by Billy, but Hillary didn’t.

2.2.6 Summary

In this section, we have seen the early delineation of different types of coherence proposed by Halliday and Hasan reflected in subsequent theories of discourse coherence, which we will see further
below. The comparison of the set of propositional relations proposed by Halliday and Hasan with
those proposed in other descriptive theories highlights the lack of agreement in the literature about
how an important aspect of discourse coherence should be described. As we will continue to see in
the following sections, though most models make use of explicit signals to characterize discourse
relations, there still exists considerable variation in the number and type of discourse relations each
model defines. What distinguishes each model is the degree and manner with which they associate
their postulated set of discourse relations to mechanisms that produce them and how they constraint
the application of these mechanisms. Kehler’s attempt to define relations between discourse units
in terms of constraints which those units must satisfy, to demonstrate how their satisfaction can be
determined using the logical abduction method of Hobbs et al., and to show how this satisfaction
interacts with sub-clausal coherence, is a first exemplification of such an association. We will see
others below, and in the final section we will see a way in which these various approaches can be
5

Kehler does not address the fact that (2.26) is infelicitous with a Cause-Effect relation, e.g. The problem was looked
into by Billy, but Hillary didn’t.

18
simplified.

2.3 A Three-Tiered Model of Discourse

2.3.1 The Three Tiers

[GS86] present a theory of discourse that distinguishes three interacting components: the linguistic
structure, the intentional structure, and the attentional state.
The linguistic structure represents the structure of sequences of utterances, i.e. the structure
of segments into which utterances aggregate. This structure is not strictly compositional, because
a segment may consist of embedded subsegments as well as utterances not in those subsegments.
This structure is viewed as akin to the syntactic structure of individual sentences ([GS86], footnote 1), although the boundaries of discourse segments are harder to distinguish6.

6
See [GS86, FM02] for references to studies investigating these boundaries.
The intentional structure represents the structure of discourse segment purposes (DSPs), i.e. the functions of each discourse segment, whose fulfillment leads to the fulfillment of an overall discourse purpose (DP). DPs and DSPs are distinguished from other intentions by the fact that they are intended to be recognized. Non-DP/DSP intentions, such as a speaker’s intention to use certain words, or to impress or teach the hearer, are private, i.e. not intended to contribute to discourse interpretation. Examples of DPs and DSPs include intending the hearer to perform some action, intending the hearer to believe some fact, and intending the hearer to identify some object or property of an object. As these examples imply, the set of intentions that can serve as DSPs and DPs is infinite, although it remains an open question whether there is a finite description of this set. However, [GS86] argue that there are only two structural relations which can hold between DSPs and their corresponding discourse segments. If the fulfillment of a DSP A provides partial fulfillment of a DSP B, then B dominates A. If a DSP A must be fulfilled before a DSP B, then A satisfaction-precedes B. Because a hearer cannot know the whole set of intentions that might serve as DSPs, what they recognize, [GS86] argue, are the relevant structural relations between them.
The attentional state is viewed as a component of the cognitive state, which also includes the knowledge, beliefs, desires and private intentions of the speaker and hearers. The attentional state
is inherently dynamic, and is modeled by a stack of focus spaces, each consisting of the objects,
properties and relations that are salient in each DSP, as well as the DSP itself. Changes in the
attentional state arise through the recognition of the structural relations between DSPs. In general,
when the DSP for a new discourse segment contributes to the DSP for the immediately preceding
segment, it will be pushed onto the stack; when the new DSP contributes to some intention higher
in the dominance hierarchy, several focus spaces are popped from the stack before the new one
is pushed. One role of the stack is to constrain the possible DSPs considered as candidates for
structural relations with the incoming DSP; only DSPs in the stack and in one of the two structural
relations are available. Another role of the stack is to constrain the hearer’s search for possible
referents of referring expressions in an incoming utterance; the focus space containing the utterance
will provide the most salient referents. Figure 2.2 illustrates the major aspects of the model.

Figure 2.2: Illustration of [GS86]’s Discourse Model
In the left of the figure, a sequence of five utterances is divided into DSs, where DS1 includes
both DS2 and DS3, as well as Utterance1 and Utterance5, which are not included in either DS2 or
DS3. As shown in (a), the focus space FS1 containing DSP1 and the objects, properties and relations
so far identified in DS1 is pushed on the stack. Because DSP1 is identified as dominating DSP2,
FS2 is also pushed onto the stack. In (b), DSP2 is identified as being in a satisfaction-precedes
relationship with DSP3; FS2 is thus popped from the stack before FS3 is pushed onto the stack.
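The stack behaviour just described can be sketched as follows; this is our own schematic of the push/pop regime, not [GS86]’s formulation, and the dominance table mirroring Figure 2.2 is a hand-made assumption.

```python
# Schematic focus-space stack following the walk-through of Figure 2.2.
# dominates() is a hand-coded stand-in for recognizing dominance between DSPs.

DOMINANCE = {("DSP1", "DSP2"), ("DSP1", "DSP3")}  # DSP1 dominates DSP2 and DSP3

def dominates(a: str, b: str) -> bool:
    return (a, b) in DOMINANCE

def incorporate(stack: list, new_dsp: str) -> None:
    """Pop focus spaces until the top dominates new_dsp, then push new_dsp."""
    while stack and not dominates(stack[-1], new_dsp):
        stack.pop()
    stack.append(new_dsp)

if __name__ == "__main__":
    stack = ["DSP1"]
    incorporate(stack, "DSP2")   # DSP1 dominates DSP2: push FS2
    print(stack)                 # ['DSP1', 'DSP2']
    incorporate(stack, "DSP3")   # DSP2 does not dominate DSP3: pop FS2, then push FS3
    print(stack)                 # ['DSP1', 'DSP3']
```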
[GS86] argue that the hearer makes use of three pieces of information when determining the segments, their DSPs, and the structural relationships between them. First, linguistic expressions,
including cue phrases and referring expressions as well as intonation and changes in tense and
aspect, are viewed as primary indicators of discourse structure, even as the attentional structure
constrains their interpretations. [GS86] argue that while linguistic expressions cannot indicate what
intention is entering into focus, they can provide partial information about changes in attentional
states, whether this change returns to a previous focus space or creates a new one, how the intention
in the containing discourse segment is related to other intentions, and structural relations between
segments. They exemplify such uses of linguistic expression as shown in Table 2.7.
Table 2.7: [GS86] Changes in Discourse Structure Indicated by Linguistic Expressions

Attentional Change (push):      now, next, that reminds me, and, but
Attentional Change (pop to):    anyway, but anyway, in any case, now back to
Attentional Change (complete):  the end, ok, fine, paragraph break
True Interruption:              I must interrupt, excuse me
Flashbacks:                     Oops, I forgot
Digressions:                    By the way, incidentally, speaking of; Did you hear about..., that reminds me
Satisfaction-precedes:          in the first place, first, second, finally; moreover, furthermore
New Dominance:                  for example, to wit, first, second, and; moreover, furthermore, therefore, finally
Second, the hearer makes use of the utterance-level intentions of each utterance [Gri89] to determine the DSP of each discourse segment. The DSP may be identical to some utterance-level
intention in a segment, as in a rhetorical question, whose intention is to cause the hearer to believe
the proposition conveyed in the question. Alternatively, the DSP may be some combination of the
utterance-level intentions, as in a set of instructions, where the intention of the speaker is that all of
them be completed.
Third, shared knowledge between the speaker and hearer about the objects and actions in the
stack can help determine the structural relations between utterances and the intentions underlying them. [GS86] propose two relationships concerning objects and actions that a hearer uses. A
supports relation holding between propositions may indicate dominance in one direction, while a generates relation holding between propositions may indicate dominance in another direction.
They leave as an open question how these relations between objects are computed, but view them as
more basic versions of the possible relations between propositions proposed by [HH76] and others.
Together, this information enables a hearer to reason out the DSPs and DP in a discourse.

2.3.2 Coherence within Discourse Segments

Within each discourse segment, Centering Theory (CT) [WJP81] is a model of sub-clausal discourse coherence which tracks the movement of entities through each focus state by one of four possible focus shifts. In CT, each discourse segment consists of utterances designated as Ui. Each utterance Ui evokes a set of discourse entities, the forward-looking centers, Cf(Ui). The highest-ranked entity in Cf(Ui-1) that is realized in Ui is the backward-looking center, Cb(Ui). The highest-ranked entity in Cf(Ui) is the preferred center, Cp. The realize relation is defined in [WJP81] as follows: an utterance U realizes a center c if c is an element of the situation described by U, or c is the semantic interpretation of some subpart of U.

Ranking of the members of the Cf list is language-specific; in English the ranking is as follows:

Subject > Indirect Object > Direct Object > Other

Four types of transitions are defined to reflect variations in the degree of topic continuity and
are computed according to Table 2.8:
Table 2.8: Centering Theory Transitions

                     Cb(Ui) = Cb(Ui-1)    Cb(Ui) ≠ Cb(Ui-1)
Cb(Ui) = Cp(Ui)      Continue             Smooth-Shift
Cb(Ui) ≠ Cp(Ui)      Retain               Rough-Shift
Discourse coherence is then computed according to the following transition ordering rule: Continue is preferred to Retain, which is preferred to Smooth-Shift, which is preferred to Rough Shift.
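Table 2.8 and the ordering rule translate directly into a small function. The sketch below is a straightforward rendering of the table, with discourse entities represented as plain strings; the example calls use the Jeff/Dick discourses discussed next.

```python
# Centering transitions computed from Table 2.8, plus the preference ordering.

PREFERENCE = ["Continue", "Retain", "Smooth-Shift", "Rough-Shift"]  # most to least preferred

def transition(cb_current: str, cb_previous: str, cp_current: str) -> str:
    """Classify the transition between U_{i-1} and U_i from Cb and Cp values."""
    same_cb = (cb_current == cb_previous)
    cb_is_cp = (cb_current == cp_current)
    if same_cb and cb_is_cp:
        return "Continue"
    if same_cb:
        return "Retain"
    if cb_is_cp:
        return "Smooth-Shift"
    return "Rough-Shift"

if __name__ == "__main__":
    # (2.29c) "He soaped a pane.": Cb stays Jeff and Jeff is the Cp -> Continue.
    print(transition(cb_current="Jeff", cb_previous="Jeff", cp_current="Jeff"))
    # (2.30c) "He buffed the hood.": Cb shifts to Dick, who is the Cp -> Smooth-Shift.
    print(transition(cb_current="Dick", cb_previous="Jeff", cp_current="Dick"))
```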
CT models discourse processing factors that explain the difference in the perceived coherence
of discourses such as (2.29) and (2.30).

(2.29a) Jeff helped Dick wash the car.
(2.29b) He washed the windows as Dick waxed the car.
(2.29c) He soaped a pane.
(2.30a) Jeff helped Dick wash the car.
(2.30b) He washed the windows as Dick waxed the car.
(2.30c) He buffed the hood.
CT predicts that (2.30) is harder to process than (2.29) because, though initially in both discourses the entity realized by Jeff is established as the Cb, utterance (2.30c) causes a Smooth-Shift in which the Cb becomes the entity realized by Dick, because the buffing event is a subset of the waxing event. The predicted preference for a Continue (which actually occurs in (2.29c)) means that the hearer first interprets the pronoun he in (2.30c) as the Cp(Ui-1) and then revises this interpretation.

2.3.3 Modeling Linguistic Structure and Attentional State as a Tree

[Web91] argues that a tree structure and insertion algorithm can serve as a formal analogue of both
on-line recognition of discourse structure and changes in attention state, thereby removing the need
to postulate a separate stack for focus spaces, while retaining the distinction between text structure,
intentional structure, and attentional state.
Webber’s model assumes a one-to-one mapping between discourse segments and tree nodes,
with a clause constituting the minimal unit. In this way the linguistic structure is represented compositionally. Each node in the tree is associated with the entities, properties and relations conveyed
by the discourse segment it represents. When the information in a new clause C is to be incorporated
into an existing discourse segment DS, C is incorporated into the tree by the operation of attachment, which adds the C node as a child of the DS node, and adds the information conveyed by C to
the DS node. This operation is illustrated in Figure 2.3. (a) shows the tree before node 3 is attached,
while (b) shows the tree after node 3 is attached. Note that the information associated with node 3
is represented in node 3 and incorporated into the discourse segment (1,2,3) it has attached to.
Figure 2.3: Illustration of [Web91]’s Attachment Operation

When the information in a new clause C is combined with the information in an existing discourse segment to compose a new discourse segment DS, C is incorporated into the tree by the operation of adjunction, which makes C and DS the children of a new node, and adds the information conveyed by C and DS to the new node. This operation is illustrated in Figure 2.4. (a) shows
the tree before node 3 is adjoined, while (b) shows the tree after node 3 is adjoined. Note that the
information associated with node 3 is incorporated along with the information associated with node
(1,2) (which was also created by adjunction) into the new node ((1,2),3).

Figure 2.4: Illustration of [Web91]’s Adjunction Operation

Both of these operations are restricted to applying to nodes on the right frontier of the discourse
tree. Formally, the right frontier is the smallest set of nodes containing the root such that whenever
a node is in the right frontier, so is its rightmost child. In this way, the tree nodes appear in the same
linear order as the corresponding segments in the text.
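A minimal rendering of the tree and its two operations is sketched below; it is our own simplification of [Web91]’s model. Nodes record the information of the segment they represent, attachment adds a clause as a child of a right-frontier node, adjunction builds a new node over a right-frontier node and the clause, and both operations assert the right-frontier restriction.

```python
# Minimal sketch of [Web91]'s discourse tree: attach, adjoin, and right frontier.

class Node:
    def __init__(self, info):
        self.info = list(info)       # information conveyed by the segment
        self.children = []

def right_frontier(root):
    """Root plus, recursively, the rightmost child of every node on the frontier."""
    frontier, node = [root], root
    while node.children:
        node = node.children[-1]
        frontier.append(node)
    return frontier

def attach(target, clause_info, root):
    """Add a new clause as a child of `target`, which must be on the right frontier."""
    assert target in right_frontier(root), "attachment is restricted to the right frontier"
    child = Node(clause_info)
    target.children.append(child)
    target.info.extend(clause_info)   # the segment now also conveys the clause's information
    return child

def adjoin(target, clause_info, root, parent=None):
    """Make `target` (on the right frontier) and the new clause children of a new node."""
    assert target in right_frontier(root), "adjunction is restricted to the right frontier"
    clause = Node(clause_info)
    new_node = Node(target.info + list(clause_info))
    new_node.children = [target, clause]
    if parent is not None:            # splice the new node in place of target
        parent.children[parent.children.index(target)] = new_node
    return new_node

if __name__ == "__main__":
    root = Node(["c1"])                  # segment (1)
    root = adjoin(root, ["c2"], root)    # compose segment (1,2)
    attach(root, ["c3"], root)           # add c3 into segment (1,2,3)
    print([n.info for n in right_frontier(root)])
```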
In Webber’s model, the tree replaces [GS86]’s linguistic structure, and the right frontier replaces [GS86]’s attentional state, i.e. the information in each node on the right frontier represents
the information in each focus space in the stack. Because the model is strictly compositional, not

all nodes (discourse segments) in the tree will contain discourse segment purposes (DSPs) (e.g. Utterance1 and Utterance5 in Figure 2.2); however, all nodes on the right frontier except possibly the
leaf will contain DSPs that contribute to the DP of the overall discourse (which will be contained in
the root of the tree.)

2.3.4 Introduction to Discourse Deictic Reference

[Lak74] first used the term “discourse deixis” to refer to uses of the demonstrative like those in (2.31) - (2.33), where the antecedent of the demonstrative can be the interpretation of a verbal predicate (2.31), the interpretation of a clause (2.32), or the interpretations of more than one clause (2.33).
(2.31) John [smiled]. He does that often.
(2.32) [John took Biology 101.] That means he can take Biology 102.
(2.33) [I woke up and brushed my teeth. I went downstairs and ate breakfast, and then I went to
work.] That’s all I did today.
Early studies of this phenomenon relate it to another use of demonstratives shown in (2.34), where
the antecedent is not in the discourse at all, but rather in the spatio-temporal situation. This use is
called “deictic”, a Greek term meaning ‘pointing’ or ‘indicating’.
(2.34) “Aw, that’s nice, Billy!”, you exclaim, when your two-year old kisses you.
In [Lyo77]’s view, discourse deixis achieves higher-order reference, where first-order reference
is defined as reference to NPs, and higher-order reference is defined as reference to larger constituents interpreted as events, propositions and concepts. [Web91] distinguishes five discourse
deixis interpretations, shown in Table 2.9, and exemplified in the second column, where for illustrative purposes the discourse deictic should be assumed to refer to an interpretation of the clause
“John talks loudly”.
Demonstratives are most commonly employed in English for discourse deixis purposes. Corpus
studies, however, have shown the zero-pronoun used in Italian [DiE89] and German [Eck98], and
occasionally in English speech.
Table 2.9: [Web91]’s Classification of Discourse Deictic Reference

Interpretation   Example
speech act       that’s a lie
proposition      that’s true
event            that happened yesterday
pure textual     repeat that
description      that’s a good description

[Sch85] studies roughly 2000 tokens of it and that, and finds that it is much less frequently
used than that as a discourse deictic, and that when uses of discourse deictic it do occur, they are
frequently used after a discourse deictic use of that, in what Schiffman calls a “Pronoun Chain”. A
similar observation is made by [Web88]. [GHZ93] note more generally the tendency for it to prefer
reference to focused items, while demonstrative pronouns prefer reference to activated items. For
example, in (2.35), both uses of it refer to “becoming a street person”; by the second reference, this property is focused. That prefers referring to “becoming a street person would hurt his mother”,
which is not yet focused, and is highly dispreferred as the referent for the second it.
(2.35) John thought about becoming a street person. It would hurt his mother and it/that would
make his father furious.
The oft-cited example in (2.36) shows what [GC00] and [Byr00] relatedly claim, that personal
pronouns tend to refer to entities denoted by noun phrases, while demonstratives tend to refer discourse deictically. In (2.36), the referent of it is clearly “x”, while the referent of that is clearly the
result of “add x to y”.
(2.36) Add x to y and then add it/that to z.
The preference of it to refer to entities denoted by noun phrases and to refer to abstract objects
only after they are referred to by a demonstrative suggests that nouns are more salient than verbs and
clauses as entities. [Byr00], however, notes that the salience effects on personal pronoun resolution can be affected by what she calls “Semantic Enhancement”: with enough predicate information geared toward a higher order referent, personal pronouns can be made to prefer higher order referents, as shown in (2.38c).
(2.37) There was a snake on my desk.
(2.38a) It scared me.
(2.38b) That scared me.
(2.38c) I never thought it would happen to me. (Sem. Enh)
[Eck98] notes a further difference between the resolution of demonstratives and personal pronouns as discourse deixis, which may indicate that topics are more salient than verbs and clauses
as entities. In (2.39), that prefers reference to the specific story described by Speaker A, while it
prefers reference to the topic of child-care in general7. In fact, [ES99] do not consider this use of it a discourse deictic use at all; they treat it as a “vague pronoun”.
(2.39) Speaker A: She has a private baby-sitter. And, uh, the baby just screams. I mean, the baby is like seventeen months and she just screams. Even if she knows that they’re getting ready to go out. They haven’t even left yet...
       Speaker B: Yeah, it/that’s hard.

[Lad66] and others note subtle salience differences between the discourse deictic uses of this
and that, related to their spatio-temporal differences: this is used when the referent is close, and that
is used when the referent is far.

2.3.5

Retrieving Antecedents of Discourse Deixis from the Tree

Many researchers find that discourse deictic reference is dependent on discourse structure. [Pas91]
uses (2.40) to show that the clausal referent of a discourse deictic is only available if it immediately
precedes the deictic. In (d), that cannot refer to sentence (a) unless (b) and (c) are removed.
(2.40a) Carol insists on sewing her dresses from all natural materials
(2.40b) and she won’t even consider synthetic lining.
(2.40c) She should try the new rayon challis.
(2.40d) *That’s because she’s allergic to synthetics.
7
[GC00] also claim that prosody plays a role in resolving discourse deictic that more than it.
[Web91] argues more formally that though deictic reference is often ambiguous (or underspecified [Pas91]), the referent is restricted to the right frontier of the growing discourse tree. She
exemplifies this using (2.41)-(2.42):
(2.41a) It’s always been presumed that
(2.41b) when the glaciers receded
(2.41c) the area got very hot.
(2.41d) The Folsum men couldn’t adapt, and
(2.41e) they died out.
(2.42) That’s what’s supposed to have happened. It’s the textbook dogma. But it’s wrong.
The discourse deictic reference in (2.42) is ambiguous; it can refer to any of the nodes on the
right frontier of the discourse: (the nodes associated with) clause (2.41e), clauses (2.41d)-(2.41e),
clauses (2.41c)-(2.41e), clauses (2.41a)-(2.41e).
Discourse deictic ambiguity extends to within the clause as well [Sch85], [Sto94]. For example,
in (2.43), the referent of that could be any of the bracketed elements:
(2.43a) [It talks about [how to [go about [interviewing]]]]
(2.43b) and that’s going to be important.
As noted by [DH95], the standard view on anaphoric processing is that we “pick up” the interpretation of the antecedent, and that in the normal case, there is a coreference relation between
the antecedent and the anaphor. The coreference relation is one of identity, and the antecedent is
“there”, waiting to be “picked up”. Thus, in (2.44), my grandfather is said to be coreferent to he:
(2.44) My grandfather was not a religious person. He even claimed there was no god.
However, the fact that the interpretations of discourse deixis are not grammaticalized as nouns
prior to discourse deictic reference, and the fact that there are structural restrictions on their reference, leads some researchers to argue that they are not present as entities in the discourse model
prior to discourse deictic reference. According to these researchers, their entity reading is coerced
and added to the discourse model via discourse deictic reference.

Type coercion is a term taken from computer science, where it defines an operation by which an
expression which is normally of one logical type is re-interpreted as another (e.g. when an integer is
understood as a Boolean value). Type coercion is used to explain a range of linguistic phenomena,
such as when an expression which is indeterminate as to logical type is ’coerced’ into one particular
interpretation and thus acquires a fixed type. Models of how coercion is achieved vary.
[Web91] argues that deictic use is an ostensive act, that distinguishes what is pointed to and
what is referred to, which may be the same, but need not be. This ostensive act functions to reify, or
bring into the set of entities, some part of the interpretations of clauses which were not present in the
set of entities prior to the ostensive act. She uses referring functions8 to model how the reification is
achieved, because they allow the domain of what is pointed to (demonstratum) to be distinguished
from the range of what is referred to (referent):

f: D → R, where D is comprised of focused regions of the discourse, and R is a set of possible interpretations.
In (2.41), the domain of the referring functions are the elements at the right frontier of the
discourse, and function application yields a range of event tokens (things that can happen). By
virtue of the referring action of the function, these new ‘entities’ (event tokens) are added to E.
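The referring-function idea can be sketched schematically as below; this is our own illustration, not Webber’s formalization. The segment contents (keyed to example (2.41)) and the available sorts of abstract object are assumptions made for the example.

```python
# Schematic sketch of a referring function f: D -> R over right-frontier segments.
# Segment contents and the available "sorts" of abstract object are illustrative.

RIGHT_FRONTIER = {          # demonstrata: focused regions of the discourse (example 2.41)
    "e":   "they died out",
    "d-e": "the Folsum men couldn't adapt and they died out",
    "c-e": "the area got very hot ... they died out",
    "a-e": "it's always been presumed that ... they died out",
}

ENTITIES = set()            # E, the set of discourse entities

def refer(demonstratum_id: str, sort: str = "event"):
    """Reify part of a right-frontier segment's interpretation as an entity of `sort`."""
    content = RIGHT_FRONTIER[demonstratum_id]
    referent = (sort, content)
    ENTITIES.add(referent)   # the referring act adds the new entity to E
    return referent

if __name__ == "__main__":
    print(refer("a-e", sort="event"))   # e.g. "That's what's supposed to have happened."
    print(len(ENTITIES))                # 1
```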
[Sto94] takes Webber’s model one step further, arguing that a discourse deictic pronoun will
take its referent from the rightmost sibling of the clause in which it is contained, once its clause is
attached or adjoined to the tree. That the referent cannot be found in a node that dominates the node containing the discourse deictic is easy to see, because that would make the deictic self-referential, as in (2.45), where the index i indicates the discourse segment whose interpretation is the referent of the discourse deictic. As the example makes clear, a discourse deictic can almost never refer to a segment in which it is contained. The only exception is textual deixis, as in (2.46), where the demonstrative can refer to the text in which it is contained.

(2.45) *[This_i is a neat idea.]_i

(2.46) [This_i is a true sentence.]_i

8
Referring functions have been used by [Nun79] to model how nouns in general achieve their reference.

To argue that the referent will not be found in a node that is dominated by the node containing
the discourse deictic, [Sto94] first evokes the use of discourse relations, arguing that if a discourse
deictic refers to a segment, it will also be in a discourse relation with that segment. He then argues
that while discourse deictic reference to embedded clauses might be viewed as an exception to this
generality, this exception can be avoided by replacing Webber’s use of referring functions with a
possible world semantics in which the semantic interpretation of the elements at the right frontier
of the discourse make a variety of ’entity’ interpretations, or “information states”(see [Kra89]),
available to the discourse deictic. For example, he argues that modality in (2.47) makes available
assertions about at least two information states: (1) Mary left, and (2) John thought the context
asserted of (1). The discourse deictic in (2.48a) references the first information state, and that in
(2.48b) references the second information state.
(2.47) John thought Mary left.
(2.48a) He thought this happened yesterday.
(2.48b) This was wrong.
[DH95] take a view similar to [Web91], except they argue that type coercion is just one of the
possible referent-creating operations evoked by the use of a discourse deictic. They argue that each
time an anaphor is used, the degree to which its antecedent is “there” will vary, and the effort needed
to “pick it up” will vary. In their view, traditional “coreference” is the most trivial case: the result of applying the identity relation to the antecedent’s extension. They propose that at least the following operations are needed to explain how the referent of a discourse deictic is created:

Summation and complex creation:
These operations assemble sets. A set can be assembled by logical conjunction, as in (2.49), or by other discourse relations, as in (2.50) (brackets indicate the discourse where the operation creates the antecedent):
(2.49) [Interest rates rose. The recession may reduce inflation. Capital taxation is lower.] This means brighter times for those who have money to save.
(2.50) [If a white person drives this car it’s a “classic”. If I, a Mexican-American, drive it, it is a “low-rider”.] That hurts my pride.

Type coercion:
This operation is as above, when the semantics of an element in the clause containing the deictic causes an expression to be coerced into one particular interpretation. For example, the verb can coerce an interpretation, as in (2.51), where “happen” coerces an event interpretation, or the predicate nominal can coerce an interpretation, as in (2.52).
(2.51) Mary was fired. That happened last week.
(2.52) I turned left. This was a wise decision.

Abstraction and Substitution:
The abstraction operation abstracts away from specific events, as in (2.53), where the antecedent is “beating one’s wife” not “Smith’s beating his wife”, while the substitution operation substitutes one element of the antecedent for another, as in (2.54), where the antecedent is “X beats his wife”:
(2.53) Smith beats his wife although this was forbidden 50 years ago.
(2.54) Smith beats his wife and John does it too.
Regardless of whether we assume that clauses already make available a set of semantic values,
or whether we use a referring function or one of any number of operations to represent how these
values are made available, discourse deixis use doesn’t determine which entity interpretation(s) is
(are) chosen as the referent. Within the domain of the right frontier, the semantics of the clause
containing the discourse deictic will determine which of the available objects are selected.
In particular, as [Ash93] notes, the sub-categorization frame of the verb should restrict the possible referents. So while the embedded clause in (2.55) can be interpreted as a variety of abstract
objects, thinks sub-categorizes for a proposition interpretation of “Mary is a genius”, as does the
complex form be certain of. Similarly, happen sub-categorizes for an event interpretation, and surprise sub-categorizes for a fact interpretation.
(2.55) John thinks that [Mary is a genius]. John is certain of it.
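As a rough illustration of how sub-categorization might narrow the choice, the sketch below (ours, not a formalization of [Ash93]) types each candidate abstract object and filters candidates by the sort a predicate selects; the verb-to-sort table is a hand-made assumption.

```python
# Toy sketch: filtering discourse-deictic referent candidates by the sort of
# abstract object the governing predicate selects for. All tables are illustrative.

SELECTS = {               # assumed sub-categorization preferences
    "think": "proposition",
    "be certain of": "proposition",
    "happen": "event",
    "surprise": "fact",
}

CANDIDATES = [             # abstract-object readings available on the right frontier
    {"text": "Mary is a genius", "sorts": {"proposition", "fact", "event"}},
]

def resolve(predicate: str, candidates):
    """Keep only candidates that can be read as the sort the predicate selects."""
    wanted = SELECTS[predicate]
    return [c["text"] for c in candidates if wanted in c["sorts"]]

if __name__ == "__main__":
    # (2.55) John thinks that [Mary is a genius]. John is certain of it.
    print(resolve("be certain of", CANDIDATES))   # ['Mary is a genius']
    print(resolve("happen", CANDIDATES))          # an event reading is also available here
```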
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
 
Manual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditManual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance Audit
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
 
Français Patch Tuesday - Avril
Français Patch Tuesday - AvrilFrançais Patch Tuesday - Avril
Français Patch Tuesday - Avril
 
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architectures
 
Digital Tools & AI in Career Development
Digital Tools & AI in Career DevelopmentDigital Tools & AI in Career Development
Digital Tools & AI in Career Development
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI Age
 
WomenInAutomation2024: AI and Automation for eveyone
WomenInAutomation2024: AI and Automation for eveyoneWomenInAutomation2024: AI and Automation for eveyone
WomenInAutomation2024: AI and Automation for eveyone
 
Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#Microservices, Docker deploy and Microservices source code in C#
Microservices, Docker deploy and Microservices source code in C#
 
Infrared simulation and processing on Nvidia platforms
Infrared simulation and processing on Nvidia platformsInfrared simulation and processing on Nvidia platforms
Infrared simulation and processing on Nvidia platforms
 
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
Tampa BSides - The No BS SOC (slides from April 6, 2024 talk)
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
Accelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with PlatformlessAccelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with Platformless
 
Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024
 

  • 3. ABSTRACT DISCOURSE SEMANTICS OF S-MODIFYING ADVERBIALS Katherine M. Forbes Supervisors: Bonnie Webber and Ellen Prince In this thesis, we address the question of why certain S-modifying adverbials are only interpretable with respect to the discourse or spatio-temporal context, and not just their own matrix clause. It is not possible to list these adverbials because the set of adverbials is compositional and therefore infinite. Instead, we investigate the mechanisms underlying their interpretation. We present a corpusbased analysis of the predicate argument structure and interpretation of over 13,000 S-modifying adverbials. We use prior research on discourse deixis and clause-level predicates to study the semantics of the arguments of S-modifying adverbials and the syntactic constituents from which they can be derived. We show that many S-modifying adverbials contain semantic arguments that may not be syntactically overt, but whose interpretation nevertheless requires an abstract object from the discourse or spatio-temporal context. Prior work has investigated only a small subset of these discourse connectives; at the clause-level their semantics has been largely ignored and at the discourse level they are usually treated as “signals” of predefined lists of abstract discourse relations. Our investigation sheds light on the space of relations imparted by a much wider variety of adverbials. We further show how their predicate argument structure and interpretation can be formalized and incorporated into a rich intermediate model of discourse that alone among other models views discourse connectives as predicates whose syntax and semantics must be specified and recoverable to interpret discourse. It is not only due to their argument structure and interpretation that adverbials have been treated as discourse connectives, however. Our corpus contains adverbials whose semantics alone does not cause them to be interpreted with respect to abstract object interpretations in the discourse or spatio-temporal context. We explore other explanations for why these adverbials evoke discourse context for their interpretation; in particular, we show how the interaction of prosody with the interpretation of S-modifying adverbials can contribute to discourse coherence, and we also show how S-modifying adverbials can be used to convey implicatures. iii
Contents

Acknowledgements
Abstract
Contents
List of Tables
List of Figures

1 Introduction
1.1 The Problem
1.2 Contributions of the Thesis
1.3 Thesis Outline

2 Anaphora and Discourse Models
2.1 Introduction
2.2 Descriptive Theories of Discourse Coherence
2.2.1 An Early Encompassing Description
2.2.2 Alternative Descriptions of Propositional Relations
2.2.3 Discourse Relations as Constraints
2.2.4 Abducing Discourse Relations by Applying the Constraints
2.2.5 Interaction of Discourse Inference and VP Ellipsis
2.2.6 Summary
2.3 A Three-Tiered Model of Discourse
2.3.1 The Three Tiers
2.3.2 Coherence within Discourse Segments
2.3.3 Modeling Linguistic Structure and Attentional State as a Tree
2.3.4 Introduction to Discourse Deictic Reference
2.3.5 Retrieving Antecedents of Discourse Deixis from the Tree
2.3.6 Summary
2.4 A Tree Structure with a Syntax-Semantic Interface
2.4.1 Constituents and Tree Construction
2.4.2 The Syntax-Semantic Interface
2.4.3 Retrieving Antecedents of Anaphora from the Tree
2.4.4 The Need for Upward Percolation
2.4.5 Summary
2.5 A Descriptive Theory of Discourse Structure
2.5.1 Analyzing Text Structure
2.5.2 The Need for Multiple Levels of Discourse Structure
2.5.3 "Elaboration" as Reference
2.5.4 Summary
2.6 A Semantic Theory of Discourse Coherence
2.6.1 Abstract Objects
2.6.2 A Formal Language for Discourse
2.6.3 Retrieving Antecedents of Anaphora from the Discourse Structure
2.6.4 A System for Inferring Discourse Relations
2.6.5 Extending the Theory to Cognitive States
2.6.6 Summary
2.7 Discussion
2.7.1 Proliferation of Discourse Relations
2.7.2 Use of Linguistic Cues as Signals
2.7.3 Structural and Anaphoric Cue Phrases
2.7.4 Comparison of DLTAG and Other Models
2.7.5 Remaining Questions
2.8 Conclusion

3 Semantic Mechanisms in Adverbials
3.1 Introduction
3.2 Linguistic Background and Data Collection
3.2.1 Function of Adverbials
3.2.2 Structure of PP and ADVP
3.2.3 Data Collection
3.2.4 Summary
3.3 Adverbial Modification Types
3.3.1 Clause-Level Analyses of Modification Type
3.3.2 Problems with Categorical Approaches
3.3.3 Modification Types as Semantic Features
3.3.4 Summary
3.4 Adverbial Semantic Arguments
3.4.1 (Optional) Arguments or Adjuncts?
3.4.2 External Argument Attachment Ambiguity
3.4.3 Semantic Representation of External Argument
3.4.4 Semantic Arguments as Abstract Objects
3.4.5 Number of Abstract Objects
3.4.6 Summary
3.5 S-Modifying PP Adverbials
3.5.1 Proper Nouns, Possessives, and Pronouns
3.5.2 Demonstrative and Definite Determiners
3.5.3 Indefinite Articles, Generic and Plural Nouns, and Optional Arguments
3.5.4 PP and ADJP Modifiers
3.5.5 Other Arguments
3.5.6 Summary
3.6 S-Modifying ADVP Adverbials
3.6.1 Syntactically Optional Arguments
3.6.2 Context-Dependent ADVP Adverbials
3.6.3 Comparative ADVP
3.6.4 Sets and Worlds
3.6.5 Summary
3.7 Conclusion

4 Incorporating Adverbial Semantics into DLTAG
4.1 Introduction
4.2 Syntax-Semantic Interfaces at the Sentence Level
4.2.1 The Role of the Syntax-Semantic Interface
4.2.2 LTAG: Lexicalized Tree Adjoining Grammar
4.2.3 A Syntax-Semantic Interface for LTAG Derivation Trees
4.2.4 A Syntax-Semantic Interface for LTAG Elementary Trees
4.2.5 Comparison of Approaches
4.2.6 Summary
4.3 Syntax-Semantic Interfaces at the Discourse Level
4.3.1 DLTAG: Lexicalized Tree Adjoining Grammar for Discourse
4.3.2 Syntax-Semantic Interfaces for Derived Trees
4.3.3 A Syntax-Semantic Interface for DLTAG Derivation Trees
4.3.4 Comparison of Approaches
4.3.5 Summary
4.4 DLTAG Annotation Project
4.4.1 Overview of Project
4.4.2 Preliminary Study 1
4.4.3 Preliminary Study 2
4.4.4 Future Work
4.5 Conclusion

5 Other Ways Adverbials Contribute to Discourse Coherence
5.1 Introduction
5.2 Focus
5.2.1 The Phenomena
5.2.2 Information-Structure and Theories of Structured Meanings
5.2.3 Alternative Semantics
5.2.4 Backgrounds or Alternatives?
5.2.5 Contrastive Themes
5.2.6 Summary
5.3 Focus Sensitivity of Modifiers
5.3.1 Focus Particles
5.3.2 Other Focus Sensitive Sub-Clausal Modifiers
5.3.3 S-Modifying "Focus Particles"
5.3.4 Focus Sensitivity of S-Modifying Adverbials
5.3.5 Focusing S-Modifying Adverbials to Evoke Context
5.3.6 Summary
5.4 Implicatures
5.4.1 Gricean Implicature
5.4.2 Pragmatic and Semantic Presupposition
5.4.3 Summary
5.5 Using S-Modifying Adverbials to Convey Implicatures
5.5.1 Presupposition
5.5.2 Conversational Implicatures
5.5.3 Interaction of Focus and Implicature
5.5.4 Summary
5.6 Other Contributions
5.6.1 Discourse Structure
5.6.2 Performatives
5.7 Conclusion

6 Conclusion
6.1 Summary
6.2 Future Directions

Bibliography
List of Tables

2.1 Main Categories of [HH76]'s Relations between Propositions
2.2 Main Categories of [Lon83]'s Relations between Propositions
2.3 Main Categories of [Mar92]'s Relations between Propositions
2.4 Main Categories of [Hob90]'s Relations between Propositions
2.5 [Keh95]'s Cause-Effect Relations
2.6 [Keh95]'s Resemblance Relations
2.7 [GS86] Changes in Discourse Structure Indicated by Linguistic Expressions
2.8 Centering Theory Transitions
2.9 [Web91]'s Classification of Discourse Deictic Reference
2.10 Organizations of RST Relation Definitions
2.11 Evidence: RST Relation Definition
2.12 Volitional-Cause: RST Relation Definition
2.13 Elaboration: RST Relation Definition
2.14 [Ven67]'s Imperfect and Perfect Nominalizations
2.15 [Ven67]'s Loose and Narrow Containers
2.16 DICE: discourse relation definitions
2.17 DICE: Indefeasible axioms
2.18 DICE: Defeasible laws on world knowledge
2.19 DICE: Defeasible laws on discourse processes
2.20 DICE: Deduction rules
2.21 [Kno96]'s Features of Discourse Connectives
3.1 Non-Derived and Derived Adverbs
3.2 tgrep Results for S-Adjoined ADVP and PP in WSJ and Brown Corpora
3.3 Total S-Adjoined Adverbials in WSJ and Brown Corpora
3.4 [Ale97]'s Modification Types
3.5 [Ern84]'s Modification Types
3.6 [KP02]'s Modification Types
3.7 [Gre69]'s Syntactic Tests for Distinguishing VP and S Modification
3.8 Semantic Interpretations of [Ern84]'s Modification Types
3.9 Abstract Object Interpretations
3.10 Approximate Counts of Tokens and Types of some Internal PP arguments
3.11 PP Adverbials with Proper Noun or Year Internal Argument
3.12 PP Adverbial with Possessive Proper Noun Internal Argument
3.13 PP Adverbials with Pronominal Internal Argument
3.14 PP Adverbial with Possessive Pronoun
3.15 Approximate Counts of Tokens and Types of some Internal PP arguments
3.16 PP Adverbials with Definite Concrete Object Internal Argument
3.17 PP Adverbials with Definite AO Internal Argument
3.18 PP Adverbials with Demonstrative Concrete Object Internal Argument
3.19 PP Adverbials with Demonstrative AO Internal Arguments
3.20 Approximate Counts of Tokens and Types of some Internal PP arguments
3.21 PP Adverbial with Indefinite Concrete Object Internal Argument
3.22 PP Adverbial with Indefinite AO Internal Argument
3.23 PP Adverbial with Relational Indefinite AO Internal Argument
3.24 PP Adverbials with Generic or Plural Concrete Object Internal Arguments
3.25 PP Adverbials with Generic or Plural AO Internal Arguments
3.26 PP Adverbials with Relational Generic AO Internal Arguments
3.27 Approximate Counts of Tokens and Types of some Internal Argument Modifiers
3.28 Binary Definite Internal Argument with Overt Argument
3.29 Binary Indefinite Internal Argument with Overt Argument
3.30 Binary Generic or Plural Internal Argument with Overt Argument
3.31 Internal Argument with a Spatio-Temporal ADJ
3.32 Internal Argument with Referential Adjective
3.33 Internal Argument with Non-Referential Adjective
3.34 Internal Argument with Determiner and Non-Referential Adjective
3.35 Internal Argument with Ordinal Adjective
3.36 Internal Argument with Alternative Phrase
3.37 Internal Argument with Determiner and Alternative Phrase
3.38 Internal Argument with Comparative/Superlative Adjective
3.39 Internal Argument with Other Set-Evoking Adjectives
3.40 Approximate Counts of Tokens and Types of some Internal PP arguments
3.41 PP Adverbial with Reduced Clause Internal Argument
3.42 PP Adverbial Summary
3.43 Approximate Counts of Tokens and Types of some ADVP Adverbials
3.44 Mis-Tagged PP Adverbials
3.45 PP-like ADVP Adverbials with Overt Arguments
3.46 PP-like ADVP Adverbials with Hidden Argument
3.47 Relational ADJP with Overt Argument
3.48 Relational ADVP Adverbials with Hidden Argument
3.49 Approximate Counts of Tokens and Types of some ADVP Adverbials
3.50 ADVP Adverbial Conjunctions
3.51 Mis-Tagged PP Adverbial Constructions
3.52 Spatio-Temporal ADVP Adverbials
3.53 Another Spatio-Temporal ADVP Adverbial
3.54 Other Spatio-Temporal ADVP Adverbials
3.55 Spatio-Temporal Manner ADVP Adverbials
3.56 Deictic ADVP Adverbials
3.57 Deictic-Derived ADVP Adverbials
3.58 Approximate Counts of Tokens and Types of some ADVP Adverbials
3.59 Comparative Adverb Modifiers
3.60 Comparative ADVP Adverbials
3.61 Specified Comparative ADVP Adverbials
3.62 Comparative-Derived ADVP Adverbials
3.63 Comparative Constructions
3.64 Approximate Counts of Tokens and Types of some ADVP Adverbials
3.65 Ordinal ADVP Adverbials
3.66 Ordinal -ly ADVP Adverbials
3.67 Frequency ADVP Adverbials
3.68 Epistemic ADVP Adverbials
3.69 Domain ADVP Adverbials
3.70 Non-Specific Set-Evoking ADVP Adverbials
3.71 Multiply-Featured ADVP Adverbials
3.72 More Multiply-Featured ADVP Adverbials
3.73 Evaluative or Agent-Oriented ADVP Adverbials
3.74 ADVP Adverbial Summary
4.1 Nine Connectives Studied in [CFM+02]
4.2 Annotation Tags for the Nine Connectives Studied in [CFM+02]
4.3 LOC Tag Values
4.4 Inter-Annotator Agreement
5.1 ADVP/PP Adverbials with Focus Particle Modifier
5.2 Higher-Ordered Epistemic Adverbials Yielding Implicatures
5.3 Lower-Ordered Epistemic Adverbials Yielding Implicatures
5.4 Lower-Ordered Quantificational Adverbials Yielding Implicatures
5.5 Higher-Ordered Quantificational Adverbials Yielding Implicatures
List of Figures

2.1 [HH76]'s Types of Cohesion
2.2 Illustration of [GS86]'s Discourse Model
2.3 Illustration of [Web91]'s Attachment Operation
2.4 Illustration of [Web91]'s Adjunction Operation
2.5 LDM Right-Attachment Operation
2.6 LDM Tree Structure for Example (2.58)
2.7 LDM Tree Structure for Example (2.59)
2.8 RST Schemas
2.9 Evidence Relation
2.10 RST Condition and Motivation Relations
2.11 [KOOM01]'s Discourse Model
2.12 [Ash93]'s Classification of Abstract Objects
2.13 Sample DRSs
2.14 Sample SDRS
2.15 Elementary DLTAG Trees
3.1 S-Adjoining PP and ADVP
3.2 S-Adjoined Discourse and Clausal Adverbials
3.3 S-Adjoined ADVP and PP Adverbials in Penn Treebank I
3.4 [Ash93]'s Classification of Abstract Objects
3.5 Syntactic Structure of S-Modifying PP Adverbials
3.6 Syntactic Structure of S-Modifying ADVP Adverbials
4.1 Elementary LTAG Trees
4.2 LTAG Derived Tree after Substitutions
4.3 LTAG Derived Tree After Adjunction
4.4 LTAG Derivation Tree
4.5 Semantic Representations of …
4.6 Semantic Representations of John walks
4.7 Semantic Representations of …
4.8 Semantic Representations of John often walks Fido
4.9 Simplified Semantic Representation of …
4.10 The Elementary Tree for slide
4.11 The Syntax-Semantic Interface for …
4.12 DLTAG Initial Trees for Subordinating Conjunctions
4.13 DLTAG Auxiliary Tree for and
4.14 DLTAG Auxiliary Trees for Discourse Adverbials
4.15 DLTAG Initial Tree for Adverbial Constructions
4.16 DLTAG Derived Tree for Example (4.18)
4.17 DLTAG Derivation Tree for Example (4.18)
4.18 Illustration of [Web91]'s Attachment and Adjunction Operations
4.19 Webber's Adjunction at a Leaf
4.20 Derived Tree for Example (2.41)
4.21 Substitution in FTAG
4.22 Adjunction in FTAG
4.23 LDM Elementary DCU
4.24 DTAG Elementary DCU
4.25 LDM List Rule
4.26 DTAG R Tree
4.27 [Gar97b]'s -Substitution
4.28 [Gar97b]'s -Adjunction
4.29 First DTAG Derivation of Example (4.21)
4.30 Second DTAG Derivation of Example (4.21)
4.31 Step One in the Second DTAG Derivation of Example (4.21)
4.32 Step Two in the Second DTAG Derivation of Example (4.21)
4.33 Step Three in the Second DTAG Derivation of Example (4.21)
4.34 Step Four in the Second DTAG Derivation of Example (4.21)
4.35 Step Five in the Second DTAG Derivation of Example (4.21)
4.36 Step Six in the Second DTAG Derivation of Example (4.21)
4.37 Step Seven in the Second DTAG Derivation of Example (4.21)
4.38 DLTAG Elementary Trees for Example (4.22)
4.39 Semantic Representation of …
4.40 DLTAG Derived and Derivation Trees and Semantic Representation for (4.22)
4.41 DLTAG Elementary Trees for Example (4.24)
4.42 Semantic Representation of …
4.43 DLTAG Derived and Derivation Trees and Semantic Representation for (4.24)
4.44 DLTAG Derived and Derivation Trees and Semantic Representation for (4.26)
4.45 DLTAG Derived and Derivation Trees and Semantic Representation for (4.28)
4.46 LTAG Derived and Derivation Trees for Example (4.30)
4.47 Quantifiers in French
4.48 [Kal02]'s -Edges for Quantifiers in French
4.49 [Kal02]'s -Derivation Graph
4.50 DLTAG Derived Tree and -Derivation Graph for Example (4.28)
4.51 Additional Semantic Representation for (4.28) due to -Derivation Graph
4.52 DLTAG Derived Tree and -Derivation Graph for Example (4.32)
4.53 Flexible Composition in LTAG
4.54 DLTAG Derived and Derivation Trees and Semantic Representation for (4.28)
4.55 DLTAG Elementary Trees for Example (4.33)
4.56 Semantic Representation of …
4.57 DLTAG Derived and Derivation Trees, -Derivation Graph and Semantics for (4.33)
4.58 DLTAG Elementary Trees for Example (4.35)
4.59 Semantic Representation of …
4.60 DLTAG Derived and Derivation Trees, -Derivation Graph, and Semantics for (4.35)
4.61 DLTAG Elementary Trees for Example (4.37)
4.62 Semantic Representation of …
4.63 DLTAG Derived and Derivation Trees, -Derivation Graph and Semantics for (4.37)
4.64 DLTAG Elementary Tree and Semantic Representation for … in (4.39)
4.65 DLTAG Derived and Derivation Trees, -Derivation Graph and Semantics for (4.39)
4.66 DLTAG Derivation Tree and -Derivation Graph for Example (4.41)
4.67 Elementary LTAG Trees and Semantic Representations of …
4.68 Elementary DLTAG Trees for Example for example
4.69 Derivation Trees for PP Discourse Adverbials with Quantified Internal Arguments
4.70 DLTAG Derived and Derivation Trees for (4.32)
4.71 Another Representation of the R Tree in Figure 4.26
5.1 Gricean Framework
Chapter 1

Introduction

1.1 The Problem

Traditionally in linguistic theory, syntax and semantics provide mechanisms to build the interpretation of a sentence from its parts; although it is uncontroversial that a sequence of sentences such as those in (1.1)-(1.3) also has an interpretation, the mechanisms which produce it are not defined.

(1.1) There is a high degree of stress level from the need to compete and succeed in this 'me generation'. As a result, people have become more self-centered over time.
(1.2) John has finally been rewarded for his great talent. Specifically, he just won a gold medal for mogul-skiing in the Olympics.
(1.3) The company interviewed everyone who applied for the position. In this way, they considered all their options.

Most discourse theories go beyond sentence-level linguistic theory to explain how such sequences are put together to create a discourse interpretation. These theories evoke the notion of abstract discourse relations between discourse units, provide lists of these relations of varying length and organization, and propose discourse models constructed from these relations and units. Some of these models produce compositional accounts of discourse structure and/or interpretation ([Pol96, Ash93, MT88, GS86]); others produce accounts of how relations between units are inferred ([Keh95, HSAM93, LA93]).
The majority make use of the presence of cue phrases, or discourse connectives, treating them as "signals" of the presence of particular discourse relations. In (1.1), for example, the relevant cue phrase is the adverbial as a result, and the discourse relation it signals is frequently classified as a result relation. Along with certain adverbials, the subordinating and coordinating conjunctions are also classified as discourse connectives in these theories.
DLTAG [FMP+01, CFM+02, WJSK03, WKJ99, WJSK99, WJ98] is a theory that bridges the gap between clause-level and discourse-level theories, providing a model of a rich intermediate level between clause structure and high-level discourse structure, namely, the syntax and semantics associated with discourse connectives. In DLTAG, discourse connectives are predicates, akin to verbs at the clause level, except that they take discourse units as arguments. DLTAG proposes to build the interpretation of these predicates directly on top of the clause, using the same syntactic and semantic mechanisms that are already used to build the clause. Based on considerations of computational economy and behavioral evidence, DLTAG argues that both arguments of subordinating and coordinating conjunctions can be represented structurally, but only one argument of adverbial discourse connectives comes structurally; the other argument must be resolved anaphorically. However, while DLTAG has shown that certain adverbials function as discourse connectives, it has not isolated the subset of adverbials which function as discourse connectives from the set of all adverbials. The set of all adverbials is large; in fact, it is compositional and therefore infinite [Kno96]. Because it is thus not possible to list all of the adverbials that function as discourse connectives, in this thesis we investigate how semantics and pragmatics cause an adverbial to function as a discourse connective.
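To make the predicate view concrete, the following is a minimal illustrative sketch in Python. It is not part of the DLTAG formalism, and the class and variable names are hypothetical; it merely shows the intuition that a discourse adverbial such as as a result behaves like a two-place predicate whose first argument is supplied structurally by its matrix clause while its second argument must be resolved anaphorically against the prior discourse.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiscourseUnit:
    """An abstract object (e.g. an eventuality or proposition) contributed by a clause."""
    text: str

@dataclass
class AdverbialConnective:
    """A discourse adverbial viewed as a two-place predicate over discourse units."""
    lexeme: str                                    # e.g. "as a result"
    structural_arg: DiscourseUnit                  # supplied by the clause the adverbial modifies
    anaphoric_arg: Optional[DiscourseUnit] = None  # must be recovered from the prior discourse

# Example (1.1): the second argument of "as a result" is not given structurally;
# it is resolved against the interpretation of the preceding sentence.
u1 = DiscourseUnit("there is a high degree of stress from the need to compete and succeed")
u2 = DiscourseUnit("people have become more self-centered over time")
result = AdverbialConnective("as a result", structural_arg=u2)
result.anaphoric_arg = u1  # anaphoric resolution against the discourse context
```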
1.2 Contributions of the Thesis

This thesis extends the DLTAG model, investigating the semantics and pragmatics underlying the behavioral anaphoricity of adverbial discourse connectives. We present a corpus-based analysis of over 13,000 S-modifying adverb (ADVP) and preposition (PP) adverbials in the Penn Treebank Corpus [PT].
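As a rough illustration of how S-adjoined adverbials can be pulled out of a parsed corpus, the sketch below uses the small Penn Treebank sample distributed with NLTK and a simplified structural criterion (ADVP or PP daughters of S nodes). It is an assumption-laden stand-in, not the tgrep extraction over the full WSJ and Brown corpora from which the counts reported in Chapter 3 are actually drawn, and it presupposes that the nltk package and its treebank sample are installed.

```python
from collections import Counter
from nltk.corpus import treebank  # the small WSJ sample shipped with NLTK

def s_modifying_adverbials(tree):
    """Yield the ADVP and PP daughters of S nodes -- a crude stand-in for the
    S-adjoined configuration discussed in Chapter 3."""
    for s_node in tree.subtrees(lambda t: t.label().split("-")[0] == "S"):
        for child in s_node:
            # leaves are plain strings, so only Tree children carry a label
            if hasattr(child, "label") and child.label().split("-")[0] in ("ADVP", "PP"):
                yield " ".join(child.leaves())

counts = Counter()
for sent in treebank.parsed_sents():
    counts.update(adv.lower() for adv in s_modifying_adverbials(sent))

print(counts.most_common(10))
```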
We show that certain adverbials, which we call discourse adverbials, can be distinguished semantically from other adverbials, which we call clausal adverbials. Some clausal adverbials from our corpus are shown in (1.4), and some discourse adverbials from our corpus are shown in (1.5).

(1.4) Probably/In my city/In truth, women take care of the household finances.
(1.5) As a result/Specifically/In this way, women take care of the household finances.

The most frequently occurring clausal and discourse adverbials have both been classified in the literature as discourse connectives, because both seem to be interpretable only with respect to context. In this thesis we will show that while syntax cannot distinguish these two types of adverbials, their predicate-argument structure and interpretation show that only discourse adverbials function semantically as discourse connectives.
The syntax and semantics of most discourse adverbials have not been well studied. Generally, only a small subset (those that occur frequently) have been addressed at all. At the clause level they are usually designated as the domain of discourse-level research, and at the discourse level the focus is frequently on the discourse relation they "signal". Our investigation sheds light on the space of relations imparted by a much wider variety of adverbials. In our analysis we draw on clause-level research into the semantics of adverbials and other sub-clausal constituents. We use prior research on discourse deixis to study both the semantic nature of the arguments of adverbials and the syntactic constituents from which they can be derived.
We present a wide variety of discourse and clausal adverbials. We show that discourse adverbials function semantically as discourse connectives because they contain semantic arguments that may or may not be syntactically overt, but whose interpretation requires an abstract object interpretation of a contextual constituent. We show that clausal adverbials do not function semantically as discourse connectives because the interpretations of their semantic arguments do not require the abstract object interpretation of a contextual constituent, although they may make anaphoric reference to other contextual interpretations. We further show how the predicate-argument structure and interpretation of discourse adverbials can be formalized and incorporated into the syntax of the DLTAG model.
It is not only due to their predicate-argument structure and interpretation that adverbials have been classified as discourse connectives, however. We encounter in our corpus a number of adverbials that have been treated as discourse connectives despite the fact that their semantics does not require abstract object interpretations in the discourse or spatio-temporal context. We explore other explanations for how these adverbials evoke discourse context during their interpretation; in particular, we investigate the interaction of their semantics with other semantic and pragmatic devices. We show how focus effects in S-modifying adverbials contribute to discourse coherence, and we also show how S-modifying adverbials can be used to convey Gricean implicatures.
While the semantics and pragmatics discussed here will not provide a complete account of the discourse functions of all adverbials, they will show that the analysis of adverbials can be viewed modularly: certain functions can be attributed to the semantic domain, others to the pragmatic domain, and still others to larger issues of discourse structure.
There are numerous benefits to this analysis. First, it is economical, making use of pre-existing clause-level mechanisms to build adverbial semantics at the discourse level, thereby reducing the load on inference in accounting for discourse interpretation (cf. [Keh95]). Secondly, it provides a theoretical grounding for [Kno96]'s empirical approach to studying the lexical semantics of discourse connectives, in the process showing that additional adverbials should be included in the class he isolates on the basis of intuition alone, and that some of those included there don't really belong. Thirdly, it expands an existing model of discourse which argues that discourse structure can be built directly on top of clause structure, and thereby bridges the gap between high-level discourse theory and clause-level theory.
1.3 Thesis Outline

In this chapter, we have given a brief overview of the analyses that we present in the remainder of this thesis. The rest of this thesis is organized as follows:
In Chapter 2 we survey a variety of existing discourse theories and examine the similarities and differences between them. We discuss how, taken together, these theories serve to distinguish different modules required to build a complete interpretation of discourse. We then introduce DLTAG as another important module, one that alone among the others is capable of bridging the gap between discourse-level theories and clause-level theories, by treating discourse connectives as predicates and using the same syntax and semantics that builds the clause to build an intermediate level of discourse.
In Chapter 3 we investigate the semantic mechanisms that cause some adverbials to function as discourse connectives. We discuss prior research into the semantics of adverbials and present an analysis of the S-modifying adverbials in the Penn Treebank corpus that distinguishes those adverbials that function as discourse connectives according to their predicate-argument structure and interpretation.
In Chapter 4 we show how the semantics of adverbials discussed in Chapter 3 can be incorporated into a syntax-semantic interface for DLTAG. We discuss syntax-semantic interfaces that have been proposed for clause-level grammars and related discourse grammars, and show how these interfaces can be extended to DLTAG. We further discuss the DLTAG annotation project, whose goal is to annotate the arguments of all discourse connectives, both structural and anaphoric.
In Chapter 5 we continue our analysis of how adverbials function as discourse connectives, investigating other ways, apart from their predicate-argument structure and argument resolution, in which an adverbial can be used to contribute to discourse coherence.
We conclude in Chapter 6 and discuss directions for future work.
Chapter 2

Anaphora and Discourse Models

2.1 Introduction

Discourse models explain how sequences of utterances are put together to create a text. Building a coherent discourse involves more than just concatenating random utterances; in addition, the contributions of each utterance to the surrounding context must be established. Two major areas of investigation have been distinguished.
The first concerns how sub-clausal constituents obtain their meaning through relationships to entities previously evoked in a discourse. Such constituents include NPs, as in (2.1), where the personal pronoun he refers to an entity mentioned in the prior sentence, (2.2), where the beer refers to one of the elements of the picnic in the prior sentence, and (2.3), where the demonstrative pronoun that refers to the interpretation of the prior sentence.

(2.1) Bill talked to Phillip. He got really upset.
(2.2) Bill and Mary took a picnic to the park. The beer was warm.
(2.3) Bill talked to Phillip. That made me mad.

Other examples include VPs, as in (2.4), where the elided VP must be determined from the meaning of the prior sentence, and in (2.5), where the use of simple past tense in both sentences creates an impression of forward progression in time.

(2.4) Bill talked to Phillip. I did too.
(2.5) Bill entered the room. He began to talk.
The second major area of investigation concerns how clausal (and super-clausal) constituents obtain their meaning through relationships to clausal constituents in the surrounding context. To illustrate the nature of these investigations, consider the discourse in (2.6).

(2.6 a) Last summer, the Keatings traveled in Zimbabwe.
(2.6 b) Pat studied flora in the Chimanimani mountains.

In the absence of any additional context, one reader might interpret (2.6), and/or the writer's intention in producing (2.6), as a description of what the Keatings on the one hand, and Pat on the other, did the prior summer. Another reader might interpret it as contrasting what the two participants did the prior summer, e.g. the Keatings (just) traveled, whereas Pat studied.
Interactions between these two areas of investigation have also been studied. For example, suppose (2.6) is preceded and followed by other sentences, as in (2.7).

(2.7 a) Pat Keating married Maria Lopez last spring.
(2.7 b) Last summer, the Keatings traveled in Zimbabwe.
(2.7 c) Pat studied flora in the Chimanimani mountains.
(2.7 d) That was a spectacular celebration.

Due to the addition of (a), the reader will likely determine that Pat is a member of the Keating family. S/he might thus interpret Pat's studying as an elaboration of, or even as a cause of, the Keatings' traveling, or s/he might simply interpret Pat's studying as occurring after the Keatings' traveling. World knowledge or inference may yield the belief that the Chimanimani mountains are located in Zimbabwe. Note that the demonstrative reference in (d) is hard to resolve to the marriage described in (a) unless we move it to a position immediately following (a) in the discourse.
A complete model of discourse must account for all of these relationships, and their interactions. In particular, a discourse model must characterize:

- the properties of the constituents that are being related
- the type of relationships that can exist between these constituents
- the mechanisms underlying these relationships
- the constraints on the application of these mechanisms
In the following sections we will survey a variety of existing discourse models in terms of their coverage of the above characterizations. By taking a roughly chronological approach, and examining the benefits and limitations of each subsequent model in terms of how it incorporates those prior to it, these characterizations will be fleshed out, and it will be shown that, taken together, the theories serve to distinguish different modules required to build a complete interpretation of discourse. We then introduce DLTAG as an important module capable of bridging the gap between discourse-level theories and clause-level theories.

2.2 Descriptive Theories of Discourse Coherence

2.2.1 An Early Encompassing Description

[HH76] proposed early on that a single underlying factor, which they call cohesion, unifies sequences of sentences to create a discourse. Cohesion is defined as the "semantic relations between successive linguistic devices in a text, whereby the interpretation of one presupposes the interpretation of the other in the sense that it cannot be effectively decoded except by recourse to it" ([HH76], p. 4).[1] They distinguish five classes of cohesion, shown in Figure 2.1.

[1] This use of the term "presupposition" is not equivalent to semantic presupposition; the latter depends on truth valuation and the former does not. Both [HH76] and [Sil76] define "discourse", or "pragmatic", presupposition as the relationship of a linguistic form to its prior context; Silverstein adds that a pragmatic presupposition is what a language user must know about the context of use of a linguistic signal in order to interpret it [Sil76, 1]. See Chapter 5 for further discussion of presupposition.

Figure 2.1: [HH76]'s Types of Cohesion

Reference is a semantic relation achieved by the use of a cataphoric or anaphoric reference item to signal that the appropriate instantial meaning be supplied.
Personal reference (signaled by personal pronouns and determiners, e.g. I, my), demonstrative reference (signaled by demonstratives, e.g. this, that), and comparative reference (signaled by certain nominal modifiers, e.g. same, and verbal adjuncts, e.g. identically) are distinguished, and exemplified in italics in (2.8).

(2.8) John saw a black cat, but that doesn't mean it was the same black cat he saw before.

Lexical cohesion is a semantic relation achieved by the successive use of vocabulary items referring to the same entity or event, including definite descriptions, repetitions, synonyms, superordinates, general nouns, and collocation. Every lexical item can be lexically cohesive; this function is established by reference to the text. In [HH76]'s example, shown in (2.9), there are definite descriptions: a pie...the pie, repetitions: pie...pie, general nouns and synonyms: a pie...a dainty dish, and superordinates: blackbirds...birds.

(2.9) Sing a song of sixpence, a pocket full of rye,
Four-and-twenty blackbirds baked in a pie,
When the pie was opened, the birds began to sing,
Wasn't that a dainty dish to set before a king?

Substitution and Ellipsis are grammatical relations, which can be nominal, verbal, or clausal. The substitute must be of the same grammatical class as the item for which it substitutes, and ellipsis is substitution by zero ([HH76, 89]). In (2.10), nominal one is a substitute, and there is ellipsis of the embedded predicate in the final clause.

(2.10) Mary covets two things. Her money will be the first one to leave her. Her husband will be the next 0.

Conjunction is a semantic relation usually achieved by the use of conjunctive elements, whose meaning presupposes the presence of other propositions in the discourse and specifies the way they connect to the proposition that follows. Italicized examples are shown in (2.11). [HH76] distinguish four main types of relations between propositions, shown in Table 2.1. These relations are further subdivided, and an orthogonal distinction is made between external and internal relations; the former hold between elements in the world (referred to in the text), and the latter between text elements themselves, such as speech acts.
(2.11) Because it snowed heavily, the battle was not fought, so the soldiers went home.

Table 2.1: Main Categories of [HH76]'s Relations between Propositions
ADDITIVE: complex, apposition, comparison
ADVERSATIVE: contrastive, correction, dismissal
CAUSAL: specific, conditional, respective
TEMPORAL: sequential, simultaneous, conclusive, correlative

2.2.2 Alternative Descriptions of Propositional Relations

In comparison to [HH76], [Lon83]'s study of discourse coherence distinguishes between predications expressed by clauses, which he models with predicate calculus, and relations on the predications expressed by clauses, which he characterizes into two main types, shown in Table 2.2: the "basic" operations of propositional calculus, supplemented by temporal relations, and a set of elaborative relations. These relations are further subdivided, and an orthogonal distinction is made between non-frustrated and frustrated relations, the latter being the case when an expected relation is not satisfied by the assertions in the text. Unlike [HH76], [Lon83] does not emphasize a correlation between these relations and surface signals in the text; rather, they are meant to categorize the "deep" relations underlying the surface structure of discourse.

Table 2.2: Main Categories of [Lon83]'s Relations between Propositions
BASIC: conjoining (∧), alternation (∨), implication (→), temporal
ELABORATIVE: paraphrase, illustration, deixis, attribution

More recently, [Mar92] has proposed an alternative set of relations between propositions, shown in Table 2.3, in which four main types are distinguished. These relations are further subdivided, and orthogonal distinctions are made between internal and external relations, and between paratactic, hypotactic, and neutral relations. The first dimension is taken from [HH76], and the latter dimension roughly
corresponds to coordinating, subordinating, and variably coordinating and subordinating relations, respectively. Like [HH76], Martin uses explicit signals to derive his set of discourse relations, but like [Lon83], he defends the claim that they represent "deep" relations underlying the surface structure. He combines the two approaches by using an insertion test: a "deep" relation exists at a place in the text if an explicit signal can be inserted there. Nevertheless, his set is different from both [HH76] and [Lon83].

Table 2.3: Main Categories of [Mar92]'s Relations between Propositions
ADDITIVE: addition, alternation
COMPARATIVE: similarity, contrast
TEMPORAL: simultaneous, successive
CONSEQUENTIAL: purpose, concession, condition, manner, consequence

[SSN93] take a psychological approach, identifying the basic cognitive resources underlying the production of discourse relations. Four cognitive primitives are identified, according to which discourse relations can be classified, which they exemplify using explicit cue phrases. [SSN93] cite a number of psychological experiments to support these features.

- basic operation: Each relation creates either an additive (and) or a causal (because) connection between the related constituents.
- source of coherence: Each relation creates either semantic or pragmatic coherence; in the first case the propositional content of the constituents is related, in the second case the illocutionary force of the constituents is related.
- order of segments: Causal relations may have the causing segment to the left or to the right of the result.
- polarity: A relation is negative if it links the content of one segment to the negation of the content of the other segment (although), and positive otherwise.

[Hob90] takes a computational approach, identifying relations between propositions according to the kind of inference that is required to identify them. The main categories are shown in Table 2.4.
Respectively, these categories distinguish inference about causality between events in the world, inference about the speaker's goals, inference about what the hearer already knows, and inference that a hearer is expected to be able to make about relationships between objects and predicates in the world. [Hob90] suggests that inference should be viewed as a recursive mechanism; when two propositions are linked by a relation, they form a unit which itself can be related to other units, thereby building an interpretation of the discourse as a whole.

Table 2.4: Main Categories of [Hob90]'s Relations between Propositions
Occasion: cause, enablement
Evaluation
Ground-Figure: background, explanation
Expansion: parallel, generalization, exemplification, contrast

2.2.3 Discourse Relations as Constraints

[Keh95] reformulates [Hob90]'s relations between propositions into three main types of more general "discourse relations": Contiguity, Cause-Effect, and Resemblance, which he defines in terms of constraints on both clausal and sub-clausal properties of discourse units S1 and S2. He then specifies how an inference mechanism can be used to derive Cause-Effect and Resemblance relations, and shows how they interact with sub-clausal coherence. Like [HH76] and [Mar92], he correlates these relations with the presence of cue phrases, suggesting that they could be treated as bearing semantic features that interact with the discourse inference process.

Narration is the only Contiguity relation Kehler defines. Exemplified in (2.12), the constraint on its derivation is that a change of state for a system of entities be inferred from S2, where the initial state for this system is inferred from S1.

(2.12) Bill picked up the speech. He began to read.

Kehler notes that the full set of constraints governing the recognition of a Narration relation is not well understood, but he refutes [HH76] and [Lon83]'s treatments, which equate it with temporal progression, citing [Hob90]'s example (2.13), whose interpretation requires the additional inference that Bush is on the train, or that the train arrival is somehow relevant to him.
(2.13) At 5:00 a train arrived in Chicago. At 6:00 George Bush held a press conference.

Kehler distinguishes four types of Cause-Effect relations, all of which must satisfy the constraint that a presupposed path of implication be inferred between a proposition P from S1 and a proposition Q from S2. Each type and the implication it requires is shown in Table 2.5, along with correlated cue phrases.

Table 2.5: [Keh95]'s Cause-Effect Relations (presupposed implication; correlated conjunctions)
Result: P → Q (as a result, therefore, and)
Explanation: Q → P (because)
Violated Expectation: P → ¬Q (but)
Denial of Preventer: Q → ¬P (despite, even though)

To take two examples, a Result relation is inferred when Q is recognized as normally following from P. In (2.14), being a politician normally implies being dishonest.

(2.14) Bill is a politician, and therefore he's dishonest.

Denial of Preventer relations are inferred when ¬P is recognized as normally following from Q (example (2.15)).

(2.15) Bill is honest even though he's a politician.

Kehler distinguishes six types of Resemblance relations, all having the constraint that a common or contrasting relation p be inferred between S1 and S2, such that p subsumes p1 and p2, where p1 applies over a set of entities a1,...,an from S1, and p2 applies over a set of entities b1,...,bn from S2. Certain Resemblance relations also have the constraint that a property vector ⟨q1,...,qn⟩ be inferred, consisting of common or contrasting properties qi which hold for ai and bi, for all i. Table 2.6 provides the constraints for each Resemblance relation and its correlated cue phrase (footnote 2). For example, Exemplification holds between a general statement followed by an example of the generalization. In (2.16), a1 and b1 correspond to the meanings of young aspiring politicians

Footnote 2: Kehler notes that Elaboration relations are a limiting case of Parallel relations, where the similar entities ai and bi are identical.
and John, while p1 and p2 correspond to the meanings of support and campaign for, respectively (footnote 3). Generalization is identical to Exemplification, except that the order of the clauses is reversed.

Footnote 3: Although not discussed by Kehler, the subsuming property p is equatable with p1, and p2 can be recognized as a member of p1.

(2.16) Young aspiring politicians often support their party's presidential candidate. For instance, John campaigned hard for Clinton in 1992.

Table 2.6: [Keh95]'s Resemblance Relations (constraints; correlated conjunctions)
Elaboration: p1 = p2; ai = bi (in other words)
Exemplification: p1 = p2; bi ∈ ai or bi ⊂ ai (for example)
Generalization: p1 = p2; ai ∈ bi or ai ⊂ bi (in general)
Parallel: p subsumes p1 and p2; qi(ai) and qi(bi) (and)
Contrast (i): p subsumes contrasting p1 and p2; qi(ai) and qi(bi) (but)
Contrast (ii): p subsumes p1 and p2; contrasting qi(ai) and qi(bi) (but)

Parallel relations require the relations expressed by the sentences and the corresponding entities to be recognized as sharing a common property. In (2.17), p1 and p2 correspond to the meanings of organized rallies for and distributed pamphlets for respectively; p corresponds to the meaning of do something to support. a1 and b1 correspond to the meanings of John and Fred, which share the common property q1 that they are people relevant to the conversation. Contrast relations require either the relations expressed by the sentences (example (2.18)) or the corresponding entities (example (2.19)) to be recognized as contrasting.

(2.17) John organized rallies for Clinton, and Fred distributed pamphlets for him.
(2.18) John supported Clinton, but Mary opposed him.
(2.19) John supported Clinton, but Mary supported Bush.

2.2.4 Abducing Discourse Relations by Applying the Constraints

Kehler's constraints are formulated in terms of two operations from artificial intelligence: (1) identifying common ancestors of sets of objects with respect to a semantic hierarchy (Resemblance
relations), and (2) computing implication relationships with respect to a knowledge base (Cause-Effect relations). Kehler distinguishes two steps in the discourse inference process:

(a) Identify and retrieve the arguments to the discourse relation

This step is achieved via the sentence interpretation; Kehler uses a formalism related to the version of Categorial Semantics described in [Per90], in which sentence interpretation results in a syntactic structure annotated with the semantic representation of each constituent. These semantic representations are arguments to the discourse relation, and are identified and retrieved via their corresponding syntactic nodes. Cause-Effect relations require only the identification of the sentential-level semantics for the clauses as a whole (i.e. P and Q). Resemblance relations require that the semantics of sub-clausal constituents be accessed, in order to identify p1 and p2, and the ai and bi.

(b) Apply the constraints of the relation to those arguments

The second step, Kehler suggests, could be achieved for Resemblance relations using comparison and generalization operations such as proposed in [Hob90] and elsewhere, while [HSAM93]'s logical abduction interpretation method could be used to abduce the presupposition for the Cause-Effect relations. [HSAM93]'s method could further determine with what degree of plausibility the constraints are satisfied such that a particular relation holds.

In [HSAM93]'s framework, discourse relations between discourse units are proved (abduced) using world and domain knowledge, via a procedure of axiom application. Each discourse unit is a segment, as defined by axiom (2.20): if w is a sentence consisting of a string of words and e is its assertion or topic (written s(w, e)), then w is a discourse segment.

(2.20) (∀w, e) s(w, e) → Segment(w, e)

When a discourse relation holds between two segments, the resulting structure is also a segment, yielding a hierarchical discourse structure, as captured by axiom (2.21): if w1 and w2 are segments whose assertion or topic are respectively e1 and e2, and a discourse (coherence) relation holds between the content of w1 and w2, then the string w1w2 is also a segment. The third argument of CoherenceRel is the assertion or topic of the composed segment, as determined by the definition of the discourse relation.
(2.21) (∀w1, w2, e1, e2, e) Segment(w1, e1) ∧ Segment(w2, e2) ∧ CoherenceRel(e1, e2, e) → Segment(w1w2, e)

To interpret a discourse W, therefore, one must prove the expression:

(2.22) (∃e) Segment(W, e)

We use as an example a variant of that found in [Keh95]:

(2.23) John is dishonest. He's a politician.

To interpret this discourse, it must be proven a segment, by establishing the three premises in axiom (2.21). The first two premises are established by (2.20); it therefore remains to establish a discourse relation. Because Explanation is a defined discourse relation, we have the following axiom:

(2.24) (∀e1, e2) Explanation(e1, e2) → CoherenceRel(e1, e2, e1)

In explanations, Hobbs notes, it is the first segment that is explained; therefore it is the dominant segment and its assertion, e1, will be the assertion of the composed segment, i.e. the third argument of CoherenceRel in (2.24). Recall that the constraint defined by Kehler on Explanation relations was that Q → P be presupposed; in Hobbs' terms, the presupposition cause(e2, e1) must be abduced, as expressed by the following axiom:

(∀e1, e2) cause(e2, e1) → Explanation(e1, e2)

In other words, to abduce an Explanation relation, what is asserted by e2 must be proven to be the cause of e1. In [HSAM93], utterances, like discourse relations, are interpreted by abducing their logical form, using axioms that are already in the knowledge base, are derivable from axioms in the knowledge base, or can be assumed at a cost corresponding to some measure of plausibility. Assume we have abduced the following axiom:

(2.25) (∀x, e2) Politician(e2, x) → (∃e1) Dishonest(e1, x) ∧ cause(e2, e1)

That is, if e2 is a state of x being a politician, then that will cause the state e1 of x being dishonest. The plausibility measure that is assigned to this formula will be inversely proportional to the cost
assigned to an Explanation relation. Assuming (2.25) has a high plausibility in our knowledge base, then in the logical forms of the two sentences in (2.23), John (and he) can be identified with x, cause(e2, e1) proven thereby, and Explanation will be viewed as the likely relation between the two sentences (footnote 4).

Footnote 4: This explanation is from [Keh95, 18] and [Lag98]. See [HSAM93] for further details.
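The derivation just described can be made concrete with a small, purely illustrative Python sketch. It is not [HSAM93]'s weighted-abduction system: it simply applies two hand-coded axioms in a forward direction over the logical forms of (2.23) and stipulates a single assumption cost. The predicate names, the cost value, and the function names are all assumptions introduced here for exposition.

    # Toy illustration of proving an Explanation relation for (2.23):
    #   S1: dishonest(e1, john)   S2: politician(e2, john)
    observations = [("dishonest", "e1", "john"), ("politician", "e2", "john")]

    def cause_axiom(facts):
        """Axiom (2.25): a politician state causes a dishonest state of the same x."""
        derived = set()
        for (pred, e2, x) in facts:
            if pred == "politician":
                for (pred2, e1, x2) in facts:
                    if pred2 == "dishonest" and x2 == x:
                        derived.add(("cause", e2, e1))
        return derived

    def explanation_axiom(facts):
        """Axiom: cause(e2, e1) licenses Explanation(e1, e2)."""
        return {("Explanation", e1, e2) for (pred, e2, e1) in facts if pred == "cause"}

    def abduce(observations, assumption_cost=1.0):
        """Close the toy axioms over the observations; the cost stands in for the
        plausibility-based cost at which (2.25) is assumed."""
        facts = set(observations)
        facts |= cause_axiom(facts)
        facts |= explanation_axiom(facts)
        return [f for f in facts if f[0] == "Explanation"], assumption_cost

    if __name__ == "__main__":
        print(abduce(observations))   # ([('Explanation', 'e1', 'e2')], 1.0)

In a genuine abductive proof the direction is reversed (the goal Segment(W, e) is proven backwards, assuming literals at a cost), but the sketch shows the role played by axiom (2.25) and by its cost.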
2.2.5 Interaction of Discourse Inference and VP Ellipsis

[Keh95] shows how the discourse inference process for Resemblance relations interacts differently with VP ellipsis than the discourse inference process for Cause-Effect relations does, based on the different constraints they require to be satisfied by the clauses they are inferred between. In particular, the arguments to Resemblance relations are sets of parallel entities and relations. Therefore, the discourse inference process must access sub-clausal constituents in identifying and retrieving those arguments, including the missing constituent in VP ellipsis. In contrast, the arguments to Cause-Effect relations are propositions. Therefore the inference process need not access sub-clausal constituents. This difference accounts for different felicity judgments concerning VP ellipsis displayed across the two types of relations.

To exemplify his analysis, consider example (2.26), in which a Parallel relation can be inferred between the two clauses:

(2.26) Bill became upset, and Hillary did too.

To establish a Parallel relation (see the Resemblance relation definitions in Table 2.6), p(a1, a2, ...) must be inferred from S1, and p(b1, b2, ...) must be inferred from S2, where for some property vector ⟨q1,...,qn⟩, qi(ai) and qi(bi) hold for all i. The identification of these arguments requires the elided material to be recovered and reconstructed in the elided VP (see [Keh95] for details of the process of reconstruction). Compare (2.26), however, with (2.27).

(2.27) *The problem was looked into by Billy, and Hillary did too.

Again, to establish a Parallel relation between the two clauses, the arguments must be identified, requiring the elided material to be recovered and reconstructed in the elided VP. But in this case the recovery of was looked into creates a mismatch of syntactic form when it is reconstructed in the elided VP, resulting in an infelicitous discourse. Such infelicity does not occur, however, when there is a Cause-Effect relation between two similar clauses, as in example (2.28):

(2.28) The problem was to have been looked into, but obviously nobody did.

Kehler argues that because establishing a Violated Expectation relation (see the Cause-Effect relation definitions in Table 2.5) requires only that a proposition P be inferred from S1 and a proposition Q be inferred from S2 (where normally P → ¬Q), the elided VP need not be reconstructed in the syntax, but can be recovered through anaphora resolution. The result is that the discourse is felicitous (footnote 5).

Footnote 5: Kehler does not address the fact that (2.27) is infelicitous with a Cause-Effect relation, e.g. The problem was looked into by Billy, but Hillary didn't.

2.2.6 Summary

In this section, we have seen the early delineation of different types of coherence proposed by Halliday and Hasan reflected in subsequent theories of discourse coherence, which we will see further below. The comparison of the set of propositional relations proposed by Halliday and Hasan with those proposed in other descriptive theories highlights the lack of agreement in the literature about how an important aspect of discourse coherence should be described. As we will continue to see in the following sections, though most models make use of explicit signals to characterize discourse relations, there still exists considerable variation in the number and type of discourse relations each model defines. What distinguishes each model is the degree and manner with which it associates its postulated set of discourse relations to the mechanisms that produce them, and how it constrains the application of these mechanisms. Kehler's attempt to define relations between discourse units in terms of constraints which those units must satisfy, to demonstrate how their satisfaction can be determined using the logical abduction method of Hobbs et al., and to show how this satisfaction interacts with sub-clausal coherence, is a first exemplification of such an association. We will see others below, and in the final section we will see a way in which these various approaches can be
simplified.

2.3 A Three-Tiered Model of Discourse

2.3.1 The Three Tiers

[GS86] present a theory of discourse that distinguishes three interacting components: the linguistic structure, the intentional structure, and the attentional state. The linguistic structure represents the structure of sequences of utterances, i.e. the structure of segments into which utterances aggregate. This structure is not strictly compositional, because a segment may consist of embedded subsegments as well as utterances not in those subsegments. This structure is viewed as akin to the syntactic structure of individual sentences ([GS86], footnote 1), although the boundaries of discourse segments are harder to distinguish (footnote 6).

Footnote 6: See [GS86, FM02] for references to studies investigating these boundaries.

The intentional structure represents the structure of discourse segment purposes (DSPs), i.e. the functions of each discourse segment, whose fulfillment leads to the fulfillment of an overall discourse purpose (DP). DPs and DSPs are distinguished from other intentions by the fact that they are intended to be recognized. Non-DP/DSP intentions, such as a speaker's intention to use certain words, or to impress or teach the hearer, are private, i.e. not intended to contribute to discourse interpretation. Examples of DPs and DSPs include intending the hearer to perform some action, intending the hearer to believe some fact, and intending the hearer to identify some object or property of an object. As these examples imply, the set of intentions that can serve as DSPs and DPs is infinite, although it remains an open question whether there is a finite description of this set. However, [GS86] argue that there are only two structural relations which can hold between DSPs and their corresponding discourse segments. If the fulfillment of a DSP A provides partial fulfillment of a DSP B, then B dominates A. If a DSP A must be fulfilled before a DSP B, then A satisfaction-precedes B. Because a hearer cannot know the whole set of intentions that might serve as DSPs, what they recognize, [GS86] argue, is the relevant structural relations between them.

The attentional state is viewed as a component of the cognitive state, which also includes the
  • 37. knowledge, beliefs, desires and private intentions of the speaker and hearers. The attentional state is inherently dynamic, and is modeled by a stack of focus spaces, each consisting of the objects, properties and relations that are salient in each DSP, as well as the DSP itself. Changes in the attentional state arise through the recognition of the structural relations between DSPs. In general, when the DSP for a new discourse segment contributes to the DSP for the immediately preceding segment, it will be pushed onto the stack; when the new DSP contributes to some intention higher in the dominance hierarchy, several focus spaces are popped from the stack before the new one is pushed. One role of the stack is to constrain the possible DSPs considered as candidates for structural relations with the incoming DSP; only DSPs in the stack and in one of the two structural relations are available. Another role of the stack is to constrain the hearer’s search for possible referents of referring expressions in an incoming utterance; the focus space containing the utterance will provide the most salient referents. Figure 2.2 illustrates the major aspects of the model. Figure 2.2: Illustration of [GS86]’s Discourse Model In the left of the figure, a sequence of five utterances is divided into DSs, where DS1 includes both DS2 and DS3, as well as Utterance1 and Utterance5, which are not included in either DS2 or DS3. As shown in (a), the focus space FS1 containing DSP1 and the objects, properties and relations so far identified in DS1 is pushed on the stack. Because DSP1 is identified as dominating DSP2, FS2 is also pushed onto the stack. In (b), DSP2 is identified as being in a satisfaction-precedes relationship with DSP3; FS2 is thus popped from the stack before FS3 is pushed onto the stack. [GS86] argue that the hearer makes use of three pieces of information when determining the 20
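These stack dynamics can be given a rough operational rendering. The following Python sketch is illustrative only: the data types and the assumption that dominance and satisfaction-precedes relations have already been recognized are stipulations made here, not part of [GS86]'s formulation.

    # Toy model of [GS86]'s attentional state as a stack of focus spaces.
    class FocusSpace:
        def __init__(self, dsp, salient=()):
            self.dsp = dsp                 # the discourse segment purpose
            self.salient = set(salient)    # objects/properties/relations in focus

    stack = []

    def push(dsp, salient=()):
        """New DSP contributes to the DSP of the space on top: push a new focus space."""
        stack.append(FocusSpace(dsp, salient))

    def pop_to(dsp):
        """New DSP contributes to a DSP deeper in the dominance hierarchy:
        pop the intervening focus spaces before the new one is pushed."""
        while stack and stack[-1].dsp != dsp:
            stack.pop()

    # The configuration described for Figure 2.2:
    push("DSP1", ["entities of Utterance1"])
    push("DSP2")               # (a) DSP1 dominates DSP2
    pop_to("DSP1")             # (b) DSP2 satisfaction-precedes DSP3, so FS2 is popped
    push("DSP3")
    print([fs.dsp for fs in stack])   # ['DSP1', 'DSP3']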
[GS86] argue that the hearer makes use of three pieces of information when determining the segments, their DSPs, and the structural relationships between them.

First, linguistic expressions, including cue phrases and referring expressions as well as intonation and changes in tense and aspect, are viewed as primary indicators of discourse structure, even as the attentional structure constrains their interpretations. [GS86] argue that while linguistic expressions cannot indicate what intention is entering into focus, they can provide partial information about changes in attentional states, whether this change returns to a previous focus space or creates a new one, how the intention in the containing discourse segment is related to other intentions, and structural relations between segments. They exemplify such uses of linguistic expression as shown in Table 2.7.

Table 2.7: [GS86] Changes in Discourse Structure Indicated by Linguistic Expressions
Attentional change (push): now, next, that reminds me, and, but
Attentional change (pop to): anyway, but anyway, in any case, now back to
Attentional change (complete): the end, ok, fine, paragraph break
True Interruption: I must interrupt, excuse me
Flashbacks: Oops, I forgot
Digressions: By the way, incidentally, speaking of, Did you hear about..., that reminds me
Satisfaction-precedes: in the first place, first, second, finally, moreover, furthermore
New Dominance: for example, to wit, first, second, and, moreover, furthermore, therefore, finally

Second, the hearer makes use of the utterance-level intentions of each utterance [Gri89] to determine the DSP of each discourse segment. The DSP may be identical to some utterance-level intention in a segment, as in a rhetorical question, whose intention is to cause the hearer to believe the proposition conveyed in the question. Alternatively, the DSP may be some combination of the utterance-level intentions, as in a set of instructions, where the intention of the speaker is that all of them be completed.

Third, shared knowledge between the speaker and hearer about the objects and actions in the stack can help determine the structural relations between utterances and the intentions underlying them. [GS86] propose two relationships concerning objects and actions that a hearer uses. A supports relation holding between propositions may indicate dominance in one direction, while
a generates relation holding between propositions may indicate dominance in another direction. They leave as an open question how these relations between objects are computed, but view them as more basic versions of the possible relations between propositions proposed by [HH76] and others. Together, this information enables a hearer to reason out the DSPs and DP in a discourse.

2.3.2 Coherence within Discourse Segments

Within each discourse segment, Centering Theory (CT) [WJP81] is a model of sub-clausal discourse coherence which tracks the movement of entities through each focus state by one of four possible focus shifts. In CT, each discourse segment consists of utterances designated as U_i. Each utterance U_i evokes a set of discourse entities, the forward-looking centers, Cf(U_i). The highest-ranked entity in Cf(U_{i-1}) that is realized in U_i is the backward-looking center, Cb(U_i). The highest-ranked entity in Cf(U_i) is the preferred center, Cp(U_i). The realize relation is defined in [WJP81] as follows: an utterance U realizes a center c if c is an element of the situation described by U, or if c is the semantic interpretation of some subpart of U. Ranking of the members of the Cf list is language-specific; in English the ranking is as follows:

Subject > Indirect Object > Direct Object > Other

Four types of transitions are defined to reflect variations in the degree of topic continuity and are computed according to Table 2.8:

Table 2.8: Centering Theory Transitions
                           Cb(U_i) = Cb(U_{i-1})    Cb(U_i) ≠ Cb(U_{i-1})
Cb(U_i) = Cp(U_i)          Continue                 Smooth-Shift
Cb(U_i) ≠ Cp(U_i)          Retain                   Rough-Shift

Discourse coherence is then computed according to the following transition ordering rule: Continue is preferred to Retain, which is preferred to Smooth-Shift, which is preferred to Rough-Shift. CT models discourse processing factors that explain the difference in the perceived coherence of discourses such as (2.29) and (2.30).
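Before turning to those examples, the transition computation in Table 2.8 and the ordering rule can be sketched directly. The following Python fragment is an illustrative rendering only; the grammatical-role ranking table and the entity representation are simplifications introduced here.

    # Compute the Centering transition between U_{i-1} and U_i.
    RANK = {"subject": 0, "indirect_object": 1, "direct_object": 2, "other": 3}

    def cp(cf):
        """Preferred center: highest-ranked member of Cf(U); cf is a list of
        (entity, grammatical_role) pairs."""
        return min(cf, key=lambda pair: RANK[pair[1]])[0]

    def cb(cf_prev, realized_in_current):
        """Backward-looking center: the highest-ranked element of Cf(U_{i-1})
        that is realized in U_i (None if nothing is realized)."""
        candidates = [p for p in cf_prev if p[0] in realized_in_current]
        return min(candidates, key=lambda pair: RANK[pair[1]])[0] if candidates else None

    def transition(cb_prev, cf_prev, cf_curr):
        realized = {entity for entity, _ in cf_curr}
        cb_curr = cb(cf_prev, realized)
        same_cb = (cb_curr == cb_prev)
        if cb_curr == cp(cf_curr):
            return "Continue" if same_cb else "Smooth-Shift"
        return "Retain" if same_cb else "Rough-Shift"

    # Ordering rule: earlier in the list = more coherent.
    PREFERENCE = ["Continue", "Retain", "Smooth-Shift", "Rough-Shift"]

    # Roughly (2.29b)-(2.29c): "He [Jeff] washed the windows..." then "He soaped a pane."
    cf_b = [("Jeff", "subject"), ("windows", "direct_object"), ("Dick", "other")]
    cf_c = [("Jeff", "subject"), ("pane", "direct_object")]
    print(transition("Jeff", cf_b, cf_c))   # Continue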
(2.29a) Jeff helped Dick wash the car.
(2.29b) He washed the windows as Dick waxed the car.
(2.29c) He soaped a pane.

(2.30a) Jeff helped Dick wash the car.
(2.30b) He washed the windows as Dick waxed the car.
(2.30c) He buffed the hood.

CT predicts that (2.30) is harder to process than (2.29): though initially in both discourses the entity realized by Jeff is established as the Cb, utterance (2.30c) causes a Smooth-Shift in which the Cb becomes the entity realized by Dick, because the buffing event is a subpart of the waxing event. The predicted preference for a Continue (which actually occurs in (2.29c)) means that the hearer first interprets the pronoun he in (2.30c) as the Cp(U_{i-1}) and then revises this interpretation.

2.3.3 Modeling Linguistic Structure and Attentional State as a Tree

[Web91] argues that a tree structure and insertion algorithm can serve as a formal analogue of both on-line recognition of discourse structure and changes in attentional state, thereby removing the need to postulate a separate stack for focus spaces, while retaining the distinction between text structure, intentional structure, and attentional state. Webber's model assumes a one-to-one mapping between discourse segments and tree nodes, with a clause constituting the minimal unit. In this way the linguistic structure is represented compositionally. Each node in the tree is associated with the entities, properties and relations conveyed by the discourse segment it represents.

When the information in a new clause C is to be incorporated into an existing discourse segment DS, C is incorporated into the tree by the operation of attachment, which adds the C node as a child of the DS node, and adds the information conveyed by C to the DS node. This operation is illustrated in Figure 2.3.

Figure 2.3: Illustration of [Web91]'s Attachment Operation

(a) shows the tree before node 3 is attached, while (b) shows the tree after node 3 is attached. Note that the information associated with node 3 is represented in node 3 and incorporated into the discourse segment (1,2,3) it has attached to. When the information in a new clause C is combined with the information in an existing discourse segment to compose a new discourse segment DS, C is incorporated into the tree by the
operation of adjunction, which makes C and DS the children of a new node, and adds the information conveyed by C and DS to the new node. This operation is illustrated in Figure 2.4.

Figure 2.4: Illustration of [Web91]'s Adjunction Operation

(a) shows the tree before node 3 is adjoined, while (b) shows the tree after node 3 is adjoined. Note that the information associated with node 3 is incorporated along with the information associated with node (1,2) (which was also created by adjunction) into the new node ((1,2),3).

Both of these operations are restricted to applying to nodes on the right frontier of the discourse tree. Formally, the right frontier is the smallest set of nodes containing the root such that whenever a node is in the right frontier, so is its rightmost child. In this way, the tree nodes appear in the same linear order as the corresponding segments in the text. In Webber's model, the tree replaces [GS86]'s linguistic structure, and the right frontier replaces [GS86]'s attentional state, i.e. the information in each node on the right frontier represents the information in each focus space in the stack.
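A minimal sketch of the two operations and of the right-frontier restriction is given below in Python. The node representation and the content-merging step are assumptions made here for illustration; they are not part of [Web91]'s formalization.

    # Toy discourse tree for [Web91]'s attachment and adjunction operations.
    class Node:
        def __init__(self, content):
            self.content = set(content)   # entities/properties/relations conveyed
            self.children = []

    def right_frontier(root):
        """Smallest set of nodes containing the root such that whenever a node
        is in the set, so is its rightmost child."""
        frontier, node = [root], root
        while node.children:
            node = node.children[-1]
            frontier.append(node)
        return frontier

    def attach(ds, clause):
        """Incorporate clause into existing segment ds (ds must be on the right
        frontier): clause becomes the rightmost child and its content is added to ds."""
        ds.children.append(clause)
        ds.content |= clause.content

    def adjoin(root, ds, clause):
        """Compose a new segment from ds and clause: a new node becomes their parent
        and carries the union of their content. Returns the (possibly new) root."""
        new = Node(ds.content | clause.content)
        new.children = [ds, clause]
        if ds is root:
            return new
        parent = next(n for n in right_frontier(root) if ds in n.children)
        parent.children[parent.children.index(ds)] = new
        return root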
Because the model is strictly compositional, not all nodes (discourse segments) in the tree will contain discourse segment purposes (DSPs) (e.g. Utterance1 and Utterance5 in Figure 2.2); however, all nodes on the right frontier except possibly the leaf will contain DSPs that contribute to the DP of the overall discourse (which will be contained in the root of the tree).

2.3.4 Introduction to Discourse Deictic Reference

[Lak74] first used the term "discourse deixis" to refer to uses of the demonstrative like those in (2.31)-(2.33), where the antecedent of the demonstrative can be the interpretation of a verbal predicate (2.31), the interpretation of a clause (2.32), or the interpretations of more than one clause (2.33).

(2.31) John [smiled]. He does that often.
(2.32) [John took Biology 101.] That means he can take Biology 102.
(2.33) [I woke up and brushed my teeth. I went downstairs and ate breakfast, and then I went to work.] That's all I did today.

Early studies of this phenomenon relate it to another use of demonstratives, shown in (2.34), where the antecedent is not in the discourse at all, but rather in the spatio-temporal situation. This use is called "deictic", from a Greek term meaning 'pointing' or 'indicating'.

(2.34) "Aw, that's nice, Billy!", you exclaim, when your two-year-old kisses you.

In [Lyo77]'s view, discourse deixis achieves higher-order reference, where first-order reference is defined as reference to NPs, and higher-order reference is defined as reference to larger constituents interpreted as events, propositions and concepts. [Web91] distinguishes five discourse deixis interpretations, shown in Table 2.9, and exemplified in the second column, where for illustrative purposes the discourse deictic should be assumed to refer to an interpretation of the clause "John talks loudly".

Demonstratives are most commonly employed in English for discourse deixis purposes. Corpus studies, however, have shown the zero-pronoun used in Italian [DiE89] and German [Eck98], and occasionally in English speech. [Sch85] studies roughly 2000 tokens of it and that, and finds that it is much less frequently
used than that as a discourse deictic, and that when uses of discourse deictic it do occur, they are frequently used after a discourse deictic use of that, in what Schiffman calls a "Pronoun Chain". A similar observation is made by [Web88].

Table 2.9: [Web91]'s Classification of Discourse Deictic Reference
speech act: that's a lie
proposition: that's true
event: that happened yesterday
pure textual: repeat that
description: that's a good description

[GHZ93] note more generally the tendency for it to prefer reference to focused items, while demonstrative pronouns prefer reference to activated items. For example, in (2.35), both uses of it refer to "becoming a street person"; by the second reference, this property is focused. In contrast, that prefers referring to "becoming a street person would hurt his mother", which is not yet focused, and is highly dispreferred as the referent for the second it.

(2.35) John thought about becoming a street person. It would hurt his mother and it/that would make his father furious.

The oft-cited example in (2.36) shows what [GC00] and [Byr00] relatedly claim, that personal pronouns tend to refer to entities denoted by noun phrases, while demonstratives tend to refer discourse deictically. In (2.36), the referent of it is clearly "x", while the referent of that is clearly the result of "add x to y".

(2.36) Add x to y and then add it/that to z.

The preference of it to refer to entities denoted by noun phrases, and to refer to abstract objects only after they are referred to by a demonstrative, suggests that nouns are more salient than verbs and clauses as entities. [Byr00], however, notes that the salience effects on personal pronoun resolution can be affected by what she calls "Semantic Enhancement": with enough predicate information geared toward a higher order referent, personal pronouns can be made to prefer higher order referents,
as shown in (2.38c).

(2.37) There was a snake on my desk.
(2.38a) It scared me.
(2.38b) That scared me.
(2.38c) I never thought it would happen to me. (Semantic Enhancement)

[Eck98] notes a further difference between the resolution of demonstratives and personal pronouns as discourse deixis, which may indicate that topics are more salient than verbs and clauses as entities. In (2.39), that prefers reference to the specific story described by Speaker A, while it prefers reference to the topic of child-care in general (footnote 7). In fact, [ES99] do not consider this use of it a discourse deictic use at all; they treat it as a "vague pronoun".

Footnote 7: [GC00] also claim that prosody plays a role in resolving discourse deictic that more than it.

(2.39) Speaker A: She has a private baby-sitter. And, uh, the baby just screams. I mean, the baby is like seventeen months and she just screams. Even if she knows that they're getting ready to go out. They haven't even left yet...
Speaker B: Yeah, it/that's hard.

[Lad66] and others note subtle salience differences between the discourse deictic uses of this and that, related to their spatio-temporal differences: this is used when the referent is close, and that is used when the referent is far.

2.3.5 Retrieving Antecedents of Discourse Deixis from the Tree

Many researchers find that discourse deictic reference is dependent on discourse structure. [Pas91] uses (2.40) to show that the clausal referent of a discourse deictic is only available if it immediately precedes the deictic. In (d), that cannot refer to sentence (a) unless (b) and (c) are removed.

(2.40a) Carol insists on sewing her dresses from all natural materials
(2.40b) and she won't even consider synthetic lining.
(2.40c) She should try the new rayon challis.
(2.40d) *That's because she's allergic to synthetics.
[Web91] argues more formally that though deictic reference is often ambiguous (or underspecified [Pas91]), the referent is restricted to the right frontier of the growing discourse tree. She exemplifies this using (2.41)-(2.42):

(2.41a) It's always been presumed that
(2.41b) when the glaciers receded
(2.41c) the area got very hot.
(2.41d) The Folsum men couldn't adapt, and
(2.41e) they died out.
(2.42) That's what's supposed to have happened. It's the textbook dogma. But it's wrong.

The discourse deictic reference in (2.42) is ambiguous; it can refer to any of the nodes on the right frontier of the discourse: (the nodes associated with) clause (2.41e), clauses (2.41d)-(2.41e), clauses (2.41c)-(2.41e), or clauses (2.41a)-(2.41e). Discourse deictic ambiguity extends to within the clause as well [Sch85], [Sto94]. For example, in (2.43), the referent of that could be any of the bracketed elements:

(2.43a) [It talks about [how to [go about [interviewing]]]]
(2.43b) and that's going to be important.

As noted by [DH95], the standard view on anaphoric processing is that we "pick up" the interpretation of the antecedent, and that in the normal case, there is a coreference relation between the antecedent and the anaphor. The coreference relation is one of identity, and the antecedent is "there", waiting to be "picked up". Thus, in (2.44), my grandfather is said to be coreferent with he:

(2.44) My grandfather was not a religious person. He even claimed there was no god.

However, the fact that the interpretations of discourse deixis are not grammaticalized as nouns prior to discourse deictic reference, and the fact that there are structural restrictions on their reference, leads some researchers to argue that they are not present as entities in the discourse model prior to discourse deictic reference. According to these researchers, their entity reading is coerced and added to the discourse model via discourse deictic reference.
Type coercion is a term taken from computer science, where it defines an operation by which an expression which is normally of one logical type is re-interpreted as another (e.g. when an integer is understood as a Boolean value). Type coercion is used to explain a range of linguistic phenomena, such as when an expression which is indeterminate as to logical type is 'coerced' into one particular interpretation and thus acquires a fixed type. Models of how coercion is achieved vary.

[Web91] argues that deictic use is an ostensive act that distinguishes what is pointed to and what is referred to, which may be the same, but need not be. This ostensive act functions to reify, or bring into the set of entities, some part of the interpretations of clauses which were not present in the set of entities prior to the ostensive act. She uses referring functions (footnote 8) to model how the reification is achieved, because they allow the domain of what is pointed to (the demonstratum) to be distinguished from the range of what is referred to (the referent):

f: D → R, where D is comprised of focused regions of the discourse, and R is a set of possible interpretations.

Footnote 8: Referring functions have been used by [Nun79] to model how nouns in general achieve their reference.

In (2.41), the domain of the referring functions consists of the elements at the right frontier of the discourse, and function application yields a range of event tokens (things that can happen). By virtue of the referring action of the function, these new 'entities' (event tokens) are added to E.

[Sto94] takes Webber's model one step further, arguing that a discourse deictic pronoun will take its referent from the rightmost sibling of the clause in which it is contained, once its clause is attached or adjoined to the tree. That the referent cannot be found in a node that dominates the node containing the discourse deictic is easy to see, because that would make the deictic self-referential, as in (2.45), where the index i indicates the discourse segment whose interpretation is the referent of the discourse deictic. As the example makes clear, a discourse deictic can almost never refer to a segment in which it is contained. The only exception is textual deixis, as in (2.46), where the demonstrative can refer to the text in which it is contained.

(2.45) *[This_i is a neat idea.]_i
(2.46) [This_i is a true sentence.]_i
To argue that the referent will not be found in a node that is dominated by the node containing the discourse deictic, [Sto94] first evokes the use of discourse relations, arguing that if a discourse deictic refers to a segment, it will also be in a discourse relation with that segment. He then argues that while discourse deictic reference to embedded clauses might be viewed as an exception to this generality, this exception can be avoided by replacing Webber's use of referring functions with a possible world semantics in which the semantic interpretation of the elements at the right frontier of the discourse makes a variety of 'entity' interpretations, or "information states" (see [Kra89]), available to the discourse deictic. For example, he argues that modality in (2.47) makes available assertions about at least two information states: (1) Mary left, and (2) John thought the content asserted in (1). The discourse deictic in (2.48a) references the first information state, and that in (2.48b) references the second information state.

(2.47) John thought Mary left.
(2.48a) He thought this happened yesterday.
(2.48b) This was wrong.

[DH95] take a view similar to [Web91], except they argue that type coercion is just one of the possible referent-creating operations evoked by the use of a discourse deictic. They argue that each time an anaphor is used, the degree to which its antecedent is "there" will vary, and the effort needed to "pick it up" will vary. In their view, traditional "coreference" is the most trivial case: the result of applying the identity relation to the antecedent's extension. They propose that at least the following operations are needed to explain how the referent of a discourse deictic is created:

- Summation and complex creation: These operations assemble sets. A set can be assembled by logical conjunction, as in (2.49), or by other discourse relations, as in (2.50) (brackets indicate the discourse where the operation creates the antecedent):

(2.49) [Interest rates rose. The recession may reduce inflation. Capital taxation is lower.] This means brighter times for those who have money to save.
(2.50) [If a white person drives this car it's a "classic". If I, a Mexican-American, drives it, it is a "low-rider".] That hurts my pride.

- Type coercion: This operation is as above, when the semantics of an element in the clause containing the deictic causes an expression to be coerced into one particular interpretation. For example, the verb can coerce an interpretation, as in (2.51), where "happen" coerces an event interpretation, or the predicate nominal can coerce an interpretation, as in (2.52).

(2.51) Mary was fired. That happened last week.
(2.52) I turned left. This was a wise decision.

- Abstraction and Substitution: The abstraction operation abstracts away from specific events, as in (2.53), where the antecedent is "beating one's wife" not "Smith's beating his wife", while the substitution operation substitutes one element of the antecedent for another, as in (2.54), where the antecedent is "X beats his wife":

(2.53) Smith beats his wife although this was forbidden 50 years ago.
(2.54) Smith beats his wife and John does it too.

Regardless of whether we assume that clauses already make available a set of semantic values, or whether we use a referring function or one of any number of operations to represent how these values are made available, discourse deixis use doesn't determine which entity interpretation(s) is (are) chosen as the referent. Within the domain of the right frontier, the semantics of the clause containing the discourse deictic will determine which of the available objects are selected. In particular, as [Ash93] notes, the sub-categorization frame of the verb should restrict the possible referents. So while the embedded clause in (2.55) can be interpreted as a variety of abstract objects, thinks sub-categorizes for a proposition interpretation of "Mary is a genius", as does the complex form be certain of. Similarly, happen sub-categorizes for an event interpretation, and surprise sub-categorizes for a fact interpretation.

(2.55) John thinks that [Mary is a genius]. John is certain of it.
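To summarize how these pieces fit together, the selection step can be sketched as follows: candidate abstract-object interpretations are made available from the right frontier, and the sub-categorization requirement of the predicate governing the deictic filters among them. The Python fragment below is purely illustrative; the verb-to-type table, the set of abstract-object types, and the function names are invented for the example and are not drawn from [Ash93], [Web91], or [Sto94].

    # Abstract-object types a clause interpretation can be coerced into.
    ABSTRACT_TYPES = {"event", "proposition", "fact", "speech_act", "description"}

    # Assumed sub-categorization requirements for a few predicates discussed above.
    SUBCAT = {"happen": "event", "think": "proposition",
              "be_certain_of": "proposition", "surprise": "fact"}

    def candidate_referents(right_frontier_nodes):
        """Each right-frontier node makes every abstract-object reading available
        (a stand-in for a referring function, or for [Sto94]'s information states)."""
        return [(node, t) for node in right_frontier_nodes for t in ABSTRACT_TYPES]

    def resolve_deictic(governing_predicate, right_frontier_nodes):
        """Keep only the readings compatible with the predicate's sub-categorization frame."""
        wanted = SUBCAT[governing_predicate]
        return [(node, t) for node, t in candidate_referents(right_frontier_nodes)
                if t == wanted]

    # (2.55): "John is certain of it" selects the proposition reading of the bracketed clause.
    print(resolve_deictic("be_certain_of", ["[Mary is a genius]"]))
    # [('[Mary is a genius]', 'proposition')]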