Journal of Beijing University of Posts and Telecommunications


Journal of Beijing University of Posts and Telecommunications ›› 2019, Vol. 42 ›› Issue (3): 21-28. DOI: 10.13190/j.jbupt.2018-289



English Textual Entailment Recognition Using Capsules

ZHU Hao, TAN Yong-mei   

  1. Intelligence Science and Technology Center, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Received: 2018-11-16 Online: 2019-06-28 Published: 2019-06-20
  • About the authors: ZHU Hao (1994-), male, M.S. candidate, E-mail: hzhu@bupt.edu.cn; TAN Yong-mei (1975-), female, associate professor.


Abstract: An English textual entailment recognition method using capsules is presented. The method builds one capsule for each entailment relationship to model the recognition of that relationship, which is assigned as the capsule's attribute. Given two texts, they are first encoded by a highway encoding layer and a sequence encoding layer to obtain semantic representations, and then fed into each capsule, passing in turn through its internal interaction, comparison, and aggregation modules. The interaction module uses an inter-attention mechanism to extract local interactive features between the texts; the comparison and aggregation modules use feed-forward neural networks to compare and aggregate the semantic information. Finally, the outputs of all capsules are normalized to obtain the entailment relationship between the two texts. The method achieves 89.2% accuracy on the SNLI test set, and 77.4% and 76.4% accuracy on the MultiNLI matched and mismatched test sets, respectively. Visual analysis of the attention relationship matrices in the interaction module further verifies the effectiveness of capsules for the English textual entailment recognition task.

Key words: textual entailment recognition, capsules, inter-attention mechanism
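
For illustration, a minimal PyTorch-style sketch of the pipeline described in the abstract follows. All layer sizes, module names (Highway, EntailmentCapsule, CapsuleEntailmentModel), the pooling strategy, and the choice of a BiLSTM as the sequence encoder are assumptions made for this sketch only; the paper's exact layers and hyperparameters may differ.

```python
# Illustrative sketch, NOT the paper's implementation: one capsule per
# entailment relation, each running interaction -> comparison -> aggregation,
# with all capsule outputs normalized into a relation distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Highway(nn.Module):
    """Highway encoding layer: gated mix of a transform and the identity."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))           # transform gate
        h = F.relu(self.transform(x))             # candidate transform
        return t * h + (1 - t) * x                # gated combination

class EntailmentCapsule(nn.Module):
    """One capsule per relation: interaction, comparison, aggregation modules."""
    def __init__(self, dim):
        super().__init__()
        self.compare = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU())
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, a, b):                      # a: (B, La, D), b: (B, Lb, D)
        # Interaction: inter-attention matrix between the two sequences.
        e = torch.bmm(a, b.transpose(1, 2))       # (B, La, Lb)
        a_hat = torch.bmm(F.softmax(e, dim=2), b) # b aligned to each token of a
        b_hat = torch.bmm(F.softmax(e, dim=1).transpose(1, 2), a)
        # Comparison: feed-forward net over [token; aligned; diff; product].
        va = self.compare(torch.cat([a, a_hat, a - a_hat, a * a_hat], dim=-1))
        vb = self.compare(torch.cat([b, b_hat, b - b_hat, b * b_hat], dim=-1))
        # Aggregation: pool each sequence and score this capsule's relation.
        v = torch.cat([va.mean(dim=1), vb.mean(dim=1)], dim=-1)
        return self.score(v)                      # (B, 1) unnormalized output

class CapsuleEntailmentModel(nn.Module):
    def __init__(self, vocab_size, dim=300, num_relations=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.highway = Highway(dim)
        self.encoder = nn.LSTM(dim, dim // 2, batch_first=True,
                               bidirectional=True)
        self.capsules = nn.ModuleList(
            [EntailmentCapsule(dim) for _ in range(num_relations)])

    def forward(self, premise, hypothesis):
        a, _ = self.encoder(self.highway(self.embed(premise)))
        b, _ = self.encoder(self.highway(self.embed(hypothesis)))
        # Normalize the outputs of all capsules (entailment / contradiction /
        # neutral) to obtain the predicted relation distribution.
        logits = torch.cat([c(a, b) for c in self.capsules], dim=-1)
        return F.softmax(logits, dim=-1)

# Usage (hypothetical token IDs):
# model = CapsuleEntailmentModel(vocab_size=30000)
# probs = model(premise_ids, hypothesis_ids)     # (batch, 3) distribution
```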

CLC Number: