Mtumbuka, F., & Lukasiewicz, T. (2022). Syntactically Rich Discriminative Training: An Effective Method for Open Information Extraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 5972–5987). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.emnlp-main.401
Published in:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
-
Date (published):
2022
-
Event name:
The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022)
-
Event period:
7 Dec 2022 – 11 Dec 2022
-
Event location:
Abu Dhabi, United Arab Emirates
-
Extent:
16 pages
-
Publisher:
Association for Computational Linguistics
-
Peer Reviewed:
Yes
-
Keywords:
open information extraction; dependency trees; discriminative training approach
-
Abstract:
Open information extraction (OIE) is the task of extracting facts "(Subject, Relation, Object)" from natural language text. In this paper, we propose several new methods for training neural OIE models. First, we propose a novel method for computing syntactically rich text embeddings using the structure of dependency trees. Second, we propose a new discriminative training approach to OIE in which tokens in the generated fact are classified as "real" or "fake", i.e., those tokens that appear in both the generated and gold tuples, and those that appear only in the generated tuple but not in the gold tuple. We also address the issue of repetitive tokens in generated facts and improve the models' ability to generate implicit facts. Our approach reduces repetitive tokens by a factor of 23%. Finally, we present paraphrased versions of the CaRB, OIE2016, and LSOIE datasets, and show that the models' performance improves substantially when trained on the augmented datasets. Our best model beats the SOTA of IMoJIE on the recent CaRB dataset, with an improvement of 39.63% in F1 score.
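
A minimal sketch of the "real"/"fake" token labelling described in the abstract, assuming a simple set-membership test: a token in the generated fact counts as "real" if it also occurs in the gold tuple and "fake" otherwise. The function name, the example sentence, and the label strings are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of the "real"/"fake" token labelling: a token in the
    # generated fact is "real" if it also appears in the gold tuple, and
    # "fake" otherwise (i.e., spuriously generated).

    def label_tokens(generated: list[str], gold: list[str]) -> list[tuple[str, str]]:
        """Label each token of a generated fact as 'real' or 'fake'."""
        gold_tokens = set(gold)
        return [
            (tok, "real" if tok in gold_tokens else "fake")
            for tok in generated
        ]

    # Example: the generated tuple contains a spurious token ("quickly")
    # that is absent from the gold tuple.
    generated = ["Obama", "was", "born", "quickly", "in", "Hawaii"]
    gold = ["Obama", "was", "born", "in", "Hawaii"]
    print(label_tokens(generated, gold))
    # [('Obama', 'real'), ('was', 'real'), ('born', 'real'),
    #  ('quickly', 'fake'), ('in', 'real'), ('Hawaii', 'real')]

Under this reading, the resulting binary labels would supply the per-token targets for the discriminative training objective the abstract describes.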