[1] Rajpurkar P, Zhang J, Lopyrev K, et al. SQuAD:100,000+ questions for machine comprehension of text[C]//Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA:Association for Computational Linguistics, 2016:2383-2392.
[2] Rajpurkar P, Jia R, Liang P. Know what you don't know:unanswerable questions for SQuAD[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2:Short Papers). Stroudsburg, PA, USA:Association for Computational Linguistics, 2018, 2:784-789.
[3] Sun F, Li L Y, Qiu X P, et al. U-Net:machine reading comprehension with unanswerable questions[EB/OL]. 2018(2018-10-12)[2019-11-18]. https://arxiv.org/abs/1810.06638.
[4] Liu X D, Li W, Fang Y W, et al. Stochastic answer networks for SQuAD 2.0[EB/OL]. 2018(2018-09-24)[2019-11-18]. https://arxiv.org/abs/1809.09194.
[5] Hu M H, Wei F R, Peng Y X, et al. Read + verify:machine reading comprehension with unanswerable questions[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33:6529-6537.
[6] Wang S H, Jiang J. Learning natural language inference with LSTM[EB/OL]. 2015(2016-11-10)[2019-11-18]. https://arxiv.org/abs/1512.08849v2.
[7] Parikh A P, Täckström O, Das D, et al. A decomposable attention model for natural language inference[EB/OL]. 2016(2016-09-25)[2019-11-18]. https://arxiv.org/abs/1606.01933v2.
[8] Vinyals O, Fortunato M, Jaitly N. Pointer networks[C]//Advances in Neural Information Processing Systems. New York:Curran Associates, 2015:2692-2700.
[9] Hu M H, Peng Y X, Huang Z, et al. Reinforced mnemonic reader for machine reading comprehension[C]//Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. California:International Joint Conferences on Artificial Intelligence Organization, 2018:4099-4106.
[10] Liu X D, Shen Y L, Duh K, et al. Stochastic answer networks for machine reading comprehension[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1:Long Papers). Stroudsburg, PA, USA:Association for Computational Linguistics, 2018:1694-1704.
[11] Levy O, Seo M, Choi E, et al. Zero-shot relation extraction via reading comprehension[C]//Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017). Vancouver, Canada:Association for Computational Linguistics, 2017:333-342.
[12] Pennington J, Socher R, Manning C. GloVe:global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar:Association for Computational Linguistics, 2014:1532-1543.
[13] McCann B, Bradbury J, Xiong C M, et al. Learned in translation:contextualized word vectors[C]//Advances in Neural Information Processing Systems. New York:Curran Associates, 2017:6294-6305.
[14] Huang H Y, Zhu C G, Shen Y L, et al. FusionNet:fusing via fully-aware attention with application to machine comprehension[EB/OL]. 2017(2018-02-04)[2019-11-18]. https://arxiv.org/abs/1711.07341v2.