Journal of Beijing University of Posts and Telecommunications

  • EI Core Journal

Journal of Beijing University of Posts and Telecommunications ›› 2023, Vol. 46 ›› Issue (5): 106-111.

• Paper •

Emotion Recognition in Conversation via Prompt Learning Based on ERC Roberta

Qi-Wei GONG, Ke YU, Xiao-Fei WU

  1. Beijing University of Posts and Telecommunications

  • Received: 2022-09-20 Revised: 2022-10-20 Online: 2023-10-28 Published: 2023-11-03
  • Corresponding author: Ke YU E-mail: yuke@bupt.edu.cn
  • Supported by:
    The National Natural Science Foundation of China; The 111 Project of China

PERC Roberta: Emotion Recognition in Conversation Using ERC Roberta with Prompt Learning

Qi-Wei GONG, Ke YU, Xiao-Fei WU

  • Received: 2022-09-20 Revised: 2022-10-20 Online: 2023-10-28 Published: 2023-11-03
  • Supported by:
    The National Natural Science Foundation of China; The 111 Project of China

Abstract: The task of emotion recognition in conversation has broad real-world applications and has therefore attracted increasing attention. Text in a conversation both carries speaker information and is closely tied to the preceding context, so it exhibits distinctive word-order and structural features. Studies on emotion recognition in conversation that train with Transformer-based pre-trained models have achieved excellent results, but the traditional fine-tuning classification approach cannot fully account for the word order and structure of conversational text, and it creates a mismatch between the emotion recognition task and the pre-training task. Prompt learning can narrow this gap by reformulating the downstream task. The PERC Roberta model is therefore proposed: it first learns the word-order and structural features of conversations through a masked-text prediction task, and then reformulates the downstream task through prompt learning, further eliciting the rich conversational knowledge the model has learned. Experiments on two public datasets for emotion recognition in conversation, MELD and EmoryNLP, achieve good results, and ablation comparisons also confirm the effectiveness of the proposed method. Our model and dataset-processing code are publicly available in a GitHub repository1.

Key words: natural language processing, emotion recognition in conversation, prompt learning

Abstract: With a broad area of applications, the task of emotion recognition in conversation has attracted increasing attention. The text in a dialogue contains information about the speakers and is closely linked with the preceding utterances, so it exhibits particular word-order and structural features. Excellent results have been obtained in studies on emotion recognition in conversation using Transformer-based pre-trained models. However, the traditional fine-tuning classification approach cannot take conversational word order and structural features into account, and a mismatch arises between the downstream task and the pre-training task. Prompt learning can narrow the gap between them by reconstructing the downstream task. Therefore, the PERC Roberta model is proposed. This model first learns the word-order and structural features of the dialogue by predicting masked texts, and then reconstructs the downstream task through prompt learning, so that the rich dialogue knowledge distributed in the pre-trained model can be further stimulated. The experiments conducted on two public datasets, MELD and EmoryNLP, demonstrate the superior performance of the proposed PERC Roberta model. Further, the ablation results also prove the effectiveness of each step in the PERC Roberta model. The code is publicly available in a GitHub repository1.
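To illustrate the prompt-learning reformulation described in the abstract, the following is a minimal, hypothetical sketch of how an emotion-recognition example can be recast as a cloze (masked-word) task for a masked language model such as RoBERTa. The template wording and the verbalizer words below are assumptions for illustration only, not the paper's exact design.

```python
# Hypothetical sketch: recasting ERC as a cloze task for a masked LM.
# The template and verbalizer here are illustrative assumptions,
# not the exact ones used by PERC Roberta.

MASK = "<mask>"  # RoBERTa's mask token

# Verbalizer: map each emotion label to a label word the masked LM can predict.
VERBALIZER = {
    "joy": "happy",
    "anger": "angry",
    "sadness": "sad",
    "neutral": "neutral",
}

def build_prompt(history, speaker, utterance):
    """Reformulate emotion classification as masked-word prediction.

    history : list of (speaker, utterance) pairs preceding the target turn,
              so the prompt keeps the conversational word order and structure.
    Returns a single string with one mask token for the LM to fill in.
    """
    context = " ".join(f"{s}: {u}" for s, u in history)
    return f"{context} {speaker}: {utterance} {speaker} feels {MASK}."

def label_from_word(word):
    """Invert the verbalizer: predicted label word -> emotion label."""
    inverse = {w: lab for lab, w in VERBALIZER.items()}
    return inverse.get(word, "neutral")
```

In use, the string returned by `build_prompt` would be fed to a pre-trained masked LM (e.g., RoBERTa), and the probabilities assigned to the verbalizer words at the mask position would be compared to pick the emotion, which keeps the downstream task in the same form as the masked-prediction pre-training task.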

Key words: natural language processing, emotion recognition in conversation, prompt learning

CLC number: