<div class="csl-bib-body">
<div class="csl-entry">Jang, M., & Lukasiewicz, T. (2023). Improving Language Models’ Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary. In H. Bouamor, J. Pino, & K. Bali (Eds.), <i>Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing</i> (pp. 8496–8510). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.emnlp-main.527</div>
</div>
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/192510
-
dc.description.abstract
The non-humanlike behaviour of contemporary pre-trained language models (PLMs) is a leading factor undermining their trustworthiness. A striking manifestation of such faulty behaviour is the generation of inconsistent predictions, which produce logically contradictory results, such as different predictions for texts conveying the same meaning, or violations of logical properties. Previous studies exploited data augmentation or implemented specialised loss functions to alleviate the issue. However, their applicability is limited because they consume expensive training resources for large PLMs and can only handle a certain consistency type. To overcome these limitations, we propose a practical approach that alleviates inconsistent behaviour by fundamentally improving PLMs’ meaning awareness. Based on conceptual role theory, our method enables PLMs to capture accurate meaning by learning precise interrelationships between concepts from word-definition pairs in a dictionary. We further propose an efficient parameter-integration technique that updates only a few additional parameters to combine the learned interrelationships with PLMs’ pre-trained knowledge. Our experimental results show that the approach concurrently improves multiple types of consistency, enables efficient knowledge integration, and extends easily to other languages.
en
dc.language.iso
en
-
dc.subject
pre-trained language models
en
dc.subject
meaning understanding
en
dc.subject
consistency
en
dc.subject
learning conceptual roles from dictionary
en
dc.title
Improving Language Models’ Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary
en
dc.type
Inproceedings
en
dc.type
Konferenzbeitrag
de
dc.relation.publication
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
-
dc.contributor.affiliation
University of Oxford, United Kingdom of Great Britain and Northern Ireland
-
dc.contributor.editoraffiliation
Carnegie Mellon University Qatar, Qatar
-
dc.description.startpage
8496
-
dc.description.endpage
8510
-
dc.type.category
Full-Paper Contribution
-
tuw.booktitle
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing