[1] Wang Z, Liu J, Xiao X, et al. Joint training of candidate extraction and answer selection for reading comprehension[C]//ACL 2018. Melbourne:ACL, 2018:1715-1724.
[2] Kamp H, Reyle U. From discourse to logic:introduction to model theoretic semantics of natural language, formal logic and discourse representation theory[M]. Dordrecht:Kluwer Academic Publishers, 1993.
[3] Hermann K M, Kocisky T, Grefenstette E, et al. Teaching machines to read and comprehend[C]//NIPS 2015. Montréal:Curran Associates, 2015:1693-1701.
[4] Lai G, Xie Q, Liu H, et al. RACE:large-scale reading comprehension dataset from examinations[C]//EMNLP 2017. Copenhagen:ACL, 2017:785-794.
[5] Rajpurkar P, Zhang J, Lopyrev K, et al. SQuAD:100,000+ questions for machine comprehension of text[C]//EMNLP 2016. Austin:ACL, 2016:2383-2392.
[6] Devlin J, Chang M W, Lee K, et al. BERT:pre-training of deep bidirectional transformers for language understanding[C]//NAACL-HLT 2019. Minneapolis:ACL, 2019:4171-4186.
[7] Trischler A, Wang T, Yuan X, et al. NewsQA:a machine comprehension dataset[C]//ACL workshop 2017. Vancouver:ACL, 2017:191-200.
[8] Nguyen T, Rosenberg M, Song X, et al. MS MARCO:a human generated machine reading comprehension dataset[EB/OL]. (2018-10-31)[2019-05-28]. https://arxiv.org/abs/1611.09268.
[9] He W, Liu K, Liu J, et al. DuReader:a Chinese machine reading comprehension dataset from real-world applications[C]//ACL workshop 2018. Melbourne:ACL, 2018:37-46.
[10] Lin C. ROUGE:a package for automatic evaluation of summaries[C]//WAS 2004. Barcelona:ACL, 2004:74-81.
[11] Hirschman L, Light M, Breck E, et al. Deep Read:a reading comprehension system[C]//ACL 1999. Maryland:ACL, 1999:325-332.
[12] Riloff E, Thelen M. A rule-based question answering system for reading comprehension tests[C]//NAACL/ANLP workshop 2000. Seattle:ACL, 2000:13-19.
[13] Hao X, Chang X, Liu K. A rule-based Chinese question answering system for reading comprehension tests[C]//IIH-MSP 2007. [S.l.]:IEEE, 2007:325-329.
[14] Cui Y, Chen Z, Wei S, et al. Attention-over-attention neural networks for reading comprehension[C]//ACL 2017. Vancouver:ACL, 2017:593-602.
[15] Wang S, Jiang J. Machine comprehension using match-LSTM and answer pointer[C]//ICLR 2017. Toulon:OpenReview.net, 2017.
[16] Kadlec R, Schmid M, Bajgar O, et al. Text understanding with the attention sum reader network[C]//ACL 2016. Berlin:ACL, 2016:908-918.
[17] Chen D, Bolton J, Manning C D. A thorough examination of the CNN/Daily Mail reading comprehension task[C]//ACL 2016. Berlin:ACL, 2016:2358-2367.
[18] Cui Y, Liu T, Chen Z, et al. Consensus attention-based neural networks for Chinese reading comprehension[C]//COLING 2016. Osaka:The COLING 2016 Organizing Committee, 2016:1777-1786.
[19] Sukhbaatar S, Weston J, Fergus R. End-to-end memory networks[C]//NIPS 2015. Montréal:Curran Associates, 2015:2440-2448.
[20] Dhingra B, Liu H, Yang Z, et al. Gated-attention readers for text comprehension[C]//ACL 2017. Vancouver:ACL, 2017:1832-1846.
[21] Tseng B H, Shen S S, Lee H Y, et al. Towards machine comprehension of spoken content:initial TOEFL listening comprehension test by machine[C]//Interspeech 2016. San Francisco:ISCA, 2016:2731-2735.
[22] Zhu H, Wei F, Qin B, et al. Hierarchical attention flow for multiple-choice reading comprehension[C]//AAAI 2018. New Orleans:AAAI Press, 2018:6077-6085.
[23] Parikh S, Sai A B, Nema P, et al. ElimiNet:a model for eliminating options for reading comprehension with multiple choice questions[C]//IJCAI 2018. Stockholm:IJCAI, 2018:4272-4278.
[24] Xu Y, Liu J, Gao J, et al. Dynamic fusion networks for machine reading comprehension[EB/OL]. (2018-02-26)[2019-05-28]. https://arxiv.org/pdf/1711.04964.pdf.
[25] Seo M, Kembhavi A, Farhadi A, et al. Bidirectional attention flow for machine comprehension[C]//ICLR 2017. Toulon:OpenReview.net, 2017.
[26] Wang W, Yang N, Wei F, et al. R-NET:machine reading comprehension with self-matching networks[EB/OL]. (2017-05-20)[2019-05-28]. https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf.
[27] Sang Z, Yuan C. An end-to-end generative question answering model[EB/OL]. (2019-01-28)[2019-05-28]. http://www.paper.edu.cn/releasepaper/content/201901-186.
[28] Tan C, Wei F, Yang N, et al. S-Net:from answer extraction to answer generation for machine reading comprehension[EB/OL]. (2018-01-02)[2019-05-28]. http://arxiv.org/abs/1706.04815.
[29] See A, Liu P J, Manning C D. Get to the point:summarization with pointer-generator networks[C]//ACL 2017. Vancouver:ACL, 2017:1073-1083.
[30] Yu A W, Dohan D, Luong M T, et al. QANet:combining local convolution with global self-attention for reading comprehension[C]//ICLR 2018. Vancouver:OpenReview.net, 2018.
[31] Nishida K, Saito I, Nishida K, et al. Multi-style generative reading comprehension[C]//ACL 2019. Florence:ACL, 2019:2273-2284.
[32] Zhuang Y, Wang H. Token-level dynamic self-attention network for multi-passage reading comprehension[C]//ACL 2019. Florence:ACL, 2019:2252-2262.
[33] Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training[EB/OL]. (2018-12-03)[2019-05-28]. https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf.
[34] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//NIPS 2017. Long Beach:Curran Associates, 2017:5998-6008.
[35] Mihaylov T, Frank A. Knowledgeable reader:enhancing cloze-style reading comprehension with external commonsense knowledge[C]//ACL 2018. Melbourne:ACL, 2018:821-832.
[36] Hu M, Peng Y, Huang Z, et al. Retrieve, read, rerank:towards end-to-end multi-document reading comprehension[C]//ACL 2019. Florence:ACL, 2019:2285-2295.
[37] Wang Y, Liu K, Liu J, et al. Multi-passage machine reading comprehension with cross-passage answer verification[C]//ACL 2018. Melbourne:ACL, 2018:1918-1927.
[38] Xiao Y, Qu Y, Qiu L, et al. Dynamically fused graph network for multi-hop reasoning[C]//ACL 2019. Florence:ACL, 2019:6140-6150.
[39] Trischler A, Ye Z, Yuan X, et al. Natural language comprehension with the EpiReader[C]//EMNLP 2016. Austin:ACL, 2016:128-137.