[1] LIU Y H, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. (2019-07-26)[2021-08-26]. https://arxiv.org/pdf/1907.11692.pdf.
[2] ZHANG S, ZHANG X, WANG H, et al. Chinese medical question answer matching using end-to-end character-level multi-scale CNNs[J/OL]. Applied Sciences, 2017, 7(8)[2021-08-26]. https://www.mdpi.com/2076-3417/7/8/767.
[3] ZHANG S, ZHANG X, WANG H, et al. Multi-scale attentive interaction networks for Chinese medical question answer selection[J]. IEEE Access, 2018, 6: 74061-74071.
[4] TIAN Y H, MA W C, XIA F, et al. ChiMed: a Chinese medical corpus for question answering[C]//Proceedings of the 18th BioNLP Workshop and Shared Task. [S.l.]: Association for Computational Linguistics, 2019: 250-260.
[5] HE J Q, FU M M, TU M S. Applying deep matching networks to Chinese medical question answering: a study and a dataset[J]. BMC Medical Informatics and Decision Making, 2019, 19(Suppl 2): 52.
[6] CUI X T, HAN J G. Chinese medical question answer matching based on interactive sentence representation learning[J]. Computer Science & Information Technology, 2020, 8(5): 93-109.
[7] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems 27. Montreal: [s.n.], 2014: 2672-2680.
[8] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks: proceedings of "International Conference on Learning Representations"[C/OL]. [S.l.]: ICLR, 2014[2021-08-26]. http://arxiv.org/abs/1312.6199.
[9] MIYATO T, DAI A M, GOODFELLOW I. Adversarial training methods for semi-supervised text classification: proceedings of "International Conference on Learning Representations"[C/OL]. [S.l.]: ICLR, 2017[2021-08-26]. https://arxiv.org/abs/1605.07725.
[10] ZHU C, CHENG Y, GAN Z, et al. FreeLB: enhanced adversarial training for language understanding: proceedings of "International Conference on Learning Representations"[C/OL]. [S.l.]: ICLR, 2020[2021-08-26]. https://openreview.net/forum?id=Fk_47QnlVy.
[11] JU Y, ZHAO F B, CHEN S J, et al. Technical report on conversational question answering[EB/OL]. (2019-09-24)[2021-08-26]. https://arxiv.org/pdf/1909.10772.pdf.
[12] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minneapolis: NAACL, 2019: 4171-4186.
[13] CUI Y M, CHE W X, LIU T, et al. Revisiting pre-trained models for Chinese natural language processing[C]//Findings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2020: 657-668.
[14] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. (2015-03-20)[2021-08-26]. https://arxiv.org/pdf/1412.6572.pdf.
[15] XIE C H, WU Y X, MAATEN L, et al. Feature denoising for improving adversarial robustness[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE Press, 2019: 501-509.