[1] QIU X P, SUN T X, XU Y G, et al. Pre-trained models for natural language processing: a survey[J]. Science China Technological Sciences, 2020, 63(10): 1872-1897.
[2] JIA Q, ZHANG D Z, YANG S B, et al. Traditional Chinese medicine symptom normalization approach leveraging hierarchical semantic information and text matching with attention mechanism[J/OL]. Journal of Biomedical Informatics, 2021, 116(6): 103718[2021-07-30]. https://doi.org/10.1016/j.jbi.2021.103718.
[3] TUTUBALINA E, MIFTAHUTDINOV Z, NIKOLENKO S, et al. Medical concept normalization in social media posts with recurrent neural networks[J]. Journal of Biomedical Informatics, 2018, 84: 93-102.
[4] DENG P, CHEN H P, HUANG M Y, et al. An ensemble CNN method for biomedical entity normalization[C]//Proceedings of the 5th Workshop on BioNLP Open Shared Tasks. Stroudsburg: ACL, 2019: 143-149.
[5] AN Y, WANG J L, ZHANG L, et al. PASCAL: a pseudo cascade learning framework for breast cancer treatment entity normalization in Chinese clinical text[J]. BMC Medical Informatics and Decision Making, 2020, 20(1): 1-12.
[6] CHEN Q, ZHU X D, LING Z H, et al. Enhanced LSTM for natural language inference[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2017: 1657-1668.
[7] DAI Z Y, XIONG C Y, CALLAN J, et al. Convolutional neural networks for soft-matching n-grams in ad-hoc search[C]//Proceedings of the 11th ACM International Conference on Web Search and Data Mining. New York: ACM, 2018: 126-134.
[8] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of NAACL-HLT. Stroudsburg: ACL, 2019: 4171-4186.
[9] LIU Y H, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. (2019-07-26)[2021-07-30]. https://arxiv.org/abs/1907.11692.
[10] SUN Y, WANG S H, LI Y K, et al. ERNIE: enhanced representation through knowledge integration[EB/OL]. (2019-04-19)[2021-07-30]. https://arxiv.org/abs/1904.09223.
[11] CLARK K, LUONG M T, LE Q V, et al. ELECTRA: pre-training text encoders as discriminators rather than generators[EB/OL]. (2020-03-23)[2021-07-30]. https://arxiv.org/abs/2003.10555.