Chinese UMR annotation: Can LLMs help?
Conference proceeding

Haibo Sun, Nianwen Xue, Jin Zhao, Liulu Yue, Keer Xu, Yao Sun and Jiawei Wu
Proceedings of the Fifth International Workshop on Designing Meaning Representations @ LREC-COLING 2024, pp. 131–139
International Conference on Computational Linguistics, Language Resources and Evaluation
2024

Abstract

Computer Science; Computer Science, Artificial Intelligence; Computer Science, Theory & Methods; Language & Linguistics; Linguistics; Science & Technology; Social Sciences; Technology
We explore using LLMs, specifically GPT-4, to generate draft sentence-level Chinese Uniform Meaning Representations (UMRs) that human annotators can then revise, speeding up the UMR annotation process. In this study, we use few-shot learning and Think-Aloud prompting to guide GPT-4 to generate UMR sentence-level graphs. Our experimental results show that, compared with annotating UMRs from scratch, using LLMs as a preprocessing step reduces annotation time by two-thirds on average. This indicates great potential for integrating LLMs into the pipeline for complex semantic annotation tasks.
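The abstract describes combining few-shot examples with Think-Aloud prompting to elicit sentence-level UMR graphs from GPT-4. A minimal sketch of how such a prompt might be assembled is shown below; the instruction wording, the example sentence, and the UMR graph are all illustrative assumptions, not the paper's actual prompt or annotation data.

```python
# Hypothetical sketch of a few-shot Think-Aloud prompt for UMR parsing.
# The instruction text and the example UMR graph below are illustrative,
# not taken from the paper.

FEW_SHOT_EXAMPLES = [
    {
        "sentence": "他 说 他 要 来 。",  # "He said he would come."
        "umr": (
            "(s1s / 说-01\n"
            "    :ARG0 (s1p / person :refer-person 3rd)\n"
            "    :ARG1 (s1c / 来-01\n"
            "        :ARG1 s1p))"
        ),
    },
]

INSTRUCTION = (
    "You are a UMR annotator. For the input Chinese sentence, first think "
    "aloud step by step (identify the predicate, its arguments, and their "
    "relations), then output the sentence-level UMR graph."
)

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot Think-Aloud prompt for one input sentence."""
    parts = [INSTRUCTION, ""]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Sentence: {ex['sentence']}")
        parts.append(f"UMR:\n{ex['umr']}")
        parts.append("")
    parts.append(f"Sentence: {sentence}")
    parts.append("UMR:")
    return "\n".join(parts)

prompt = build_prompt("她 昨天 买 了 一 本 书 。")
```

The resulting string would be sent to the model (e.g. via a chat-completion API call), and the graph in the model's reply revised by a human annotator, mirroring the draft-then-revise workflow the abstract reports.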

