Abstract
Entity Anaphora Resolution (AR) is the task of identifying and linking mentions of entities across a text, ensuring that different referring expressions point to the same entity. Traditionally, AR assumes that two Noun Phrases (NPs) with the same literal meaning refer to the same entity. In practice, however, such NPs may not indicate the same entity, or the same aspect of an entity, due to shifts in context, semantic interpretation, or changes in the state of the entity. This literal-meaning-first approach can introduce referential ambiguities that propagate through downstream Natural Language Processing (NLP) components. To address these linguistic ambiguities, this dissertation introduces an anaphora scale (a fine-grained typology of anaphoric relations) that captures a spectrum of relations among entity mentions, ranging from strict identity to non-anaphoric. Additionally, we incorporate Dense Paraphrasing (DP), a semantic enrichment strategy that decontextualizes NPs, clarifying their meaning through verbal, nominal, or structural restatements. Using this framework, we create and curate novel datasets of metonymy and procedural texts, genres in which referential ambiguity commonly arises, and, guided by analyses of human-annotator disagreement, compile a corpus that deliberately preserves ambiguous cases to test AR systems. Our experiments indicate that clearly defined, fine-grained anaphoric relations can reduce referential ambiguity and improve AR, although resolving these relations remains challenging for current NLP models.