Paper
15 July 2022 Guiding transformer to generate graph structure for AMR parsing
Runliang Niu, Qi Wang
Proceedings Volume 12258, International Conference on Neural Networks, Information, and Communication Engineering (NNICE 2022); 1225823 (2022) https://doi.org/10.1117/12.2639102
Event: International Conference on Neural Networks, Information, and Communication Engineering (NNICE 2022), 2022, Qingdao, China
Abstract
Abstract Meaning Representation (AMR) is a kind of semantic representation of natural language, which represents the semantics of a sentence as a rooted, directed, acyclic graph (DAG). Most existing AMR parsing methods are designed around a specific dictionary. However, these approaches limit the content length of each node and typically require a complicated post-processing step. In this paper, we propose a novel encoder-decoder framework for AMR parsing to address these issues, which generates the graph structure and predicts node relationships simultaneously. Specifically, we represent each node as a five-tuple containing a variable-length token sequence and the node's connection relationships to other nodes. A BERT model is employed as the encoder module. Our decoder module first generates a linearized representation of the graph structure, then predicts the elements of each node with four different attention-based classifiers. We also found an effective way to improve the generalization performance of the Transformer model for graph generation: by assigning different index numbers to nodes at each training step and removing the positional encoding used in most generative models, the model learns the relationships between nodes better. Experiments on two AMR datasets demonstrate the competitive performance of our proposed method compared with baseline methods.
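The random node-indexing idea from the abstract can be illustrated with a small sketch. The exact five-tuple fields are not specified in the abstract, so the form below (index, token sequence, head index, relation label, root flag) is an assumption for illustration only, as is the `relabel_nodes` helper; the point is simply that node indices are reshuffled at every training step so the model cannot rely on absolute positions.

```python
import random

# Hypothetical five-tuple node form (the paper's exact fields are not given here):
# (index, tokens, head_index, relation_label, is_root)

def relabel_nodes(nodes):
    """Assign fresh random indices to the nodes of one graph for one
    training step, rewriting head pointers consistently, so the model
    must learn relations between nodes rather than fixed positions."""
    old_ids = [n[0] for n in nodes]
    new_ids = random.sample(range(len(nodes)), len(nodes))
    mapping = dict(zip(old_ids, new_ids))
    relabeled = []
    for idx, tokens, head, rel, is_root in nodes:
        new_head = mapping[head] if head is not None else None
        relabeled.append((mapping[idx], tokens, new_head, rel, is_root))
    return relabeled

# Toy AMR-like fragment for "the boy wants to go":
graph = [
    (0, ["want-01"], None, None, True),
    (1, ["boy"], 0, ":ARG0", False),
    (2, ["go-02"], 0, ":ARG1", False),
]
shuffled = relabel_nodes(graph)
```

After relabeling, the indices are a fresh permutation but every `:ARG` edge still points at the correct head, which is the invariant the training procedure relies on.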
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Runliang Niu and Qi Wang "Guiding transformer to generate graph structure for AMR parsing", Proc. SPIE 12258, International Conference on Neural Networks, Information, and Communication Engineering (NNICE 2022), 1225823 (15 July 2022); https://doi.org/10.1117/12.2639102
KEYWORDS: Transformers, Autoregressive models, Feature extraction, Visibility