Interpreting Randomly Wired Graph Models for Chinese NER

Jie Chen, Jiabao Xu, Xuefeng Xi, Zhiming Cui, Victor S. Sheng

Research output: Contribution to journal › Article › peer-review

Abstract

Interpreting deep neural networks is of great importance for understanding and verifying deep models in natural language processing (NLP) tasks. However, most existing approaches focus only on improving model performance and ignore interpretability. In this work, we propose a Randomly Wired Graph Neural Network (RWGNN) that uses a graph to model the structure of the neural network, addressing two major problems of Chinese NER: word-boundary ambiguity and polysemy. In addition, we develop a pipeline to explain the RWGNN using saliency maps and adversarial attacks. Experimental results demonstrate that our approach identifies meaningful and reasonable interpretations for the hidden states of the RWGNN.
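
To make the interpretation pipeline concrete, below is a minimal, hedged sketch of a gradient-based saliency map for a token tagger. The toy model (`ToyTagger`, an embedding plus a linear layer) and the tag dimensions are illustrative assumptions only and do not reproduce the paper's RWGNN; the sketch merely shows the generic technique of scoring each input character by the gradient norm of the predicted tag score with respect to its embedding.

```python
# Hypothetical sketch: gradient-based saliency for a character tagger.
# ToyTagger is a stand-in model, NOT the paper's RWGNN; all sizes are assumptions.
import torch
import torch.nn as nn

class ToyTagger(nn.Module):
    def __init__(self, vocab_size=100, emb_dim=16, num_tags=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.fc = nn.Linear(emb_dim, num_tags)

    def forward(self, embedded):
        # Takes embeddings directly so gradients can flow back to them.
        return self.fc(embedded)          # (seq_len, num_tags)

model = ToyTagger()
token_ids = torch.tensor([3, 17, 42, 8])               # a toy character sequence
embedded = model.emb(token_ids).detach().requires_grad_(True)

logits = model(embedded)                                # per-character tag scores
predicted = logits.argmax(dim=-1)                       # predicted tag per character
score = logits.gather(1, predicted.unsqueeze(1)).sum()  # sum of predicted-tag scores

score.backward()                                        # gradients w.r.t. embeddings
saliency = embedded.grad.norm(dim=-1)                   # one importance score per character
print(saliency)
```

A higher saliency value suggests that the corresponding character had more influence on the predicted tags; the paper's pipeline pairs this kind of attribution with adversarial perturbations to probe the model's hidden states.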

Original language: English
Pages (from-to): 747-761
Number of pages: 15
Journal: CMES - Computer Modeling in Engineering and Sciences
Volume: 134
Issue number: 1
DOIs
State: Published - 2023

Keywords

  • Named entity recognition
  • graph neural network
  • interpretation
  • random graph network
  • saliency map
