Journal of Beijing University of Posts and Telecommunications

  • EI Core Journal

Journal of Beijing University of Posts and Telecommunications ›› 2019, Vol. 42 ›› Issue (6): 76-83. doi: 10.13190/j.jbupt.2019-149


Fine-Tuning Language Models with Multiple Probing Tasks for Text Classification

FU Qun-chao, WANG Cong   

  1. School of Software Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China;
  2. Key Laboratory of Trustworthy Distributed Computing and Service (Beijing University of Posts and Telecommunications), Ministry of Education, Beijing 100876, China
  • Received: 2019-11-22; Online: 2019-12-28; Published: 2019-11-15

Abstract: Pre-trained language models are widely used in many natural language processing tasks, but they are often applied without fine-tuning tailored to the target task. For text classification, the authors therefore propose a fine-tuning method based on probing tasks: training the model on probing tasks injects specific linguistic knowledge, which improves its performance on text classification. Six probing tasks are designed, covering surface, syntactic, and semantic information of sentences. The method is validated on six text classification datasets and reduces the classification error rate.
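The abstract describes a multi-task setup: a shared encoder fine-tuned jointly on the main classification objective and auxiliary probing tasks. The paper's actual architecture and probing tasks are not detailed here, so the following is only a minimal NumPy sketch under assumed choices: a fixed random projection stands in for the pre-trained encoder, the probing heads (`probe_sentence_length`, `probe_word_content`) are hypothetical examples, and the per-task loss weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the correct class per example.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

# Stand-in shared encoder: in the paper this would be the pre-trained
# language model producing a sentence representation.
D_IN, D_HID = 32, 16
W_enc = rng.normal(size=(D_IN, D_HID))

def encode(x):
    return np.tanh(x @ W_enc)

# One linear head per task on top of the shared encoding:
# the main classifier plus two hypothetical probing heads.
heads = {
    "main_classification": rng.normal(size=(D_HID, 4)),    # 4 text classes
    "probe_sentence_length": rng.normal(size=(D_HID, 3)),  # binned lengths
    "probe_word_content": rng.normal(size=(D_HID, 5)),
}
# Probing losses are down-weighted (illustrative values) so the auxiliary
# tasks act as regularizers rather than dominating training.
task_weights = {
    "main_classification": 1.0,
    "probe_sentence_length": 0.3,
    "probe_word_content": 0.3,
}

def multitask_loss(x, labels_per_task):
    """Weighted sum of per-task cross-entropy losses over a shared encoding."""
    h = encode(x)
    total = 0.0
    for task, W in heads.items():
        probs = softmax(h @ W)
        total += task_weights[task] * cross_entropy(probs, labels_per_task[task])
    return total

# Toy batch with random features and random labels for each task.
batch = rng.normal(size=(8, D_IN))
labels = {
    "main_classification": rng.integers(0, 4, size=8),
    "probe_sentence_length": rng.integers(0, 3, size=8),
    "probe_word_content": rng.integers(0, 5, size=8),
}
loss = multitask_loss(batch, labels)
print(f"combined multi-task loss: {loss:.4f}")
```

In an actual fine-tuning run, the gradient of this combined loss would update the shared encoder, so the probing objectives shape the representation used by the main classifier.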

Key words: probing task, language model, multi-task, text classification
