Title: Enhancing Relation Classification by Using Shortest Dependency Paths Between Entities with Pre-trained Language Models
Advisor: Tunga Güngör
Abstract:
Relation Extraction (RE) is the task of identifying the relation between entities in plain text. As the length of the text increases, finding the relation becomes more challenging. The shortest dependency path (SDP) between two entities, obtained by traversing the text's dependency tree, provides a view focused on the entities by pruning away noisy words. In Relation Classification, the supervised form of RE, state-of-the-art methods generally integrate a pre-trained language model (PLM) into their approaches; however, to our knowledge, none of them incorporates shortest dependency paths into its computations.
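To illustrate the idea, the sketch below extracts an SDP by dependency-parsing a sentence and searching the resulting undirected graph for the shortest path between the two entity tokens. spaCy and networkx are illustrative stand-ins here; the thesis itself uses the Stanford, HPSG, and LAL parsers.

```python
# Minimal SDP-extraction sketch, assuming spaCy for parsing and
# networkx for the path search (illustrative substitutes for the
# Stanford, HPSG, and LAL parsers used in the thesis).
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")

def shortest_dependency_path(text, e1, e2):
    doc = nlp(text)
    # Build an undirected graph over token indices from dependency arcs.
    graph = nx.Graph([(tok.i, child.i) for tok in doc for child in tok.children])
    # Locate the entity tokens by surface form (simplified lookup).
    src = next(tok.i for tok in doc if tok.text == e1)
    dst = next(tok.i for tok in doc if tok.text == e2)
    path = nx.shortest_path(graph, source=src, target=dst)
    return [doc[i].text for i in path]

# The SDP keeps only the words linking the entity pair.
print(shortest_dependency_path(
    "The burst has been caused by water hammer pressure.", "burst", "pressure"))
# e.g. ['burst', 'caused', 'by', 'pressure'] (exact path is parser-dependent)
```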
In this thesis, we investigate the effects of using shortest dependency paths with pre-trained language models, taking the R-BERT relation classification model as our baseline and building upon it. Our novel approach enhances the baseline model by adding the sequence representation of the shortest dependency path between the entities, obtained from the PLM, as an additional embedding. In our experiments, we evaluate the proposed model for every combination of SDPs generated by the Stanford, HPSG, and LAL dependency parsers with the BERT and XLNet PLMs, together with the baseline, on two datasets: SemEval-2010 Task 8 and TACRED.
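A minimal sketch of the proposed extension is given below: alongside R-BERT's [CLS] and averaged entity-span representations, the SDP token sequence is encoded with the same PLM, and its pooled representation is concatenated into the feature vector before classification. The layer sizes, dropout rate, and mean-pooling choice are illustrative assumptions rather than the exact thesis configuration.

```python
# Sketch of R-BERT extended with an SDP embedding; hyperparameters
# and pooling are assumptions, not the thesis's exact settings.
import torch
import torch.nn as nn
from transformers import BertModel

class RBertWithSDP(nn.Module):
    def __init__(self, num_labels, hidden=768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # One dropout + linear + tanh projection per feature:
        # [CLS], entity 1, entity 2, and the added SDP representation.
        self.proj = nn.ModuleList(
            [nn.Sequential(nn.Dropout(0.1), nn.Linear(hidden, hidden), nn.Tanh())
             for _ in range(4)])
        self.classifier = nn.Linear(4 * hidden, num_labels)

    def forward(self, input_ids, attention_mask, e1_mask, e2_mask,
                sdp_ids, sdp_mask):
        out = self.bert(input_ids, attention_mask=attention_mask)
        hidden = out.last_hidden_state            # (batch, seq_len, hidden)
        cls = out.pooler_output                   # (batch, hidden)

        def masked_mean(h, m):                    # average vectors over a span
            m = m.unsqueeze(-1).float()
            return (h * m).sum(1) / m.sum(1).clamp(min=1)

        e1 = masked_mean(hidden, e1_mask)
        e2 = masked_mean(hidden, e2_mask)
        # Encode the SDP token sequence with the same PLM and mean-pool it.
        sdp_hidden = self.bert(sdp_ids, attention_mask=sdp_mask).last_hidden_state
        sdp = masked_mean(sdp_hidden, sdp_mask)

        feats = [p(x) for p, x in zip(self.proj, (cls, e1, e2, sdp))]
        return self.classifier(torch.cat(feats, dim=-1))  # relation logits
```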
We improve the baseline model by absolute scores of 1.41% and 3.6%, raising its ranking from 8th to 7th on SemEval-2010 Task 8 and from 18th to 7th on TACRED.