[1] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems. Long Beach:Curran Associates, 2017:5998-6008.
[2] Grishman R, Sundheim B. Message understanding conference-6:a brief history[C]//Proceedings of the 16th Conference on Computational Linguistics. Copenhagen:Association for Computational Linguistics, 1996:466-471.
[3] Bikel D M, Schwartz R, Weischedel R M. An algorithm that learns what's in a name[J]. Machine Learning, 1999, 34(1/2/3):211-231.
[4] Sekine S, Grishman R, Shinnou H. A decision tree method for finding and classifying names in Japanese texts[C]//Proceedings of the 6th Workshop on Very Large Corpora. Montreal:Association for Computational Linguistics, 1998:171-178.
[5] Borthwick A. A maximum entropy approach to named entity recognition[D]. New York:New York University, 1999.
[6] McCallum A, Li Wei. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons[C]//Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. Edmonton:Association for Computational Linguistics, 2003:188-191.
[7] Zhang Yue, Yang Jie. Chinese NER using lattice LSTM[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1:Long Papers). Melbourne:Association for Computational Linguistics, 2018:1554-1564.
[8] Jawahar G, Sagot B, Seddah D. What does BERT learn about the structure of language?[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence:Association for Computational Linguistics, 2019:3651-3657.
[9] Peinelt N, Nguyen D, Liakata M. tBERT:topic models and BERT joining forces for semantic similarity detection[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg:Association for Computational Linguistics, 2020:7047-7055.
[10] Chai Duo, Wu Wei, Han Qinghong, et al. Description based text classification with reinforcement learning[C]//Proceedings of the 37th International Conference on Machine Learning. Vienna:PMLR, 2020:1371-1382.
[11] Qu Chen, Yang Liu, Qiu Minghui, et al. BERT with history answer embedding for conversational question answering[C]//Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Paris:ACM, 2019:1133-1136.
[12] Dai Zhuyun, Callan J. Deeper text understanding for IR with contextual neural language modeling[C]//Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Paris:ACM, 2019:985-988.
[13] Zhao Ping, Sun Lianying, Wan Ying, et al. Chinese scenic spot named entity recognition based on BERT+BiLSTM+CRF[J]. Computer Systems & Applications, 2020, 29(6):169-174.
[14] Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural Computation, 1997, 9(8):1735-1780.
[15] Zhang Yushuai, Zhao Huan, Li Bo. Semantic slot filling based on BERT and BiLSTM[J]. Computer Science, 2021, 48(1):247-252.
[16] Li Lishuang, Guo Yuankai. Biomedical named entity recognition with CNN-BLSTM-CRF[J]. Journal of Chinese Information Processing, 2018, 32(1):116-122.