[1] OTSU N. A threshold selection method from gray-level histograms[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9(1): 62-66.
[2] CHEN T S, XU M X, HUI X L, et al. Learning semantic-specific graph representation for multi-label image recognition[C]//Proceedings of the IEEE International Conference on Computer Vision. Seoul: IEEE, 2019: 522-531.
[3] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318-327.
[4] GONG Y C, JIA Y Q, LEUNG T, et al. Deep convolutional ranking for multilabel image annotation[C]//Proceedings of the International Conference on Learning Representations. Banff: ICLR, 2014: 1-9.
[5] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs[C]//Proceedings of the International Conference on Learning Representations. San Diego: ICLR, 2015: 357-361.
[6] XIE S, GIRSHICK R, DOLLÁR P, et al. Aggregated residual transformations for deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 5987-5995.
[7] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision. Munich: Springer, 2018: 3-19.
[8] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(4): 834-848.
[9] CHEN L C, ZHU Y K, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]//Proceedings of the European Conference on Computer Vision. Munich: Springer, 2018: 801-818.
[10] ZHAO H S, SHI J P, QI X J, et al. Pyramid scene parsing network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 6230-6239.
[11] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]//Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich: Springer, 2015: 234-241.
[12] BADRINARAYANAN V, KENDALL A, CIPOLLA R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2481-2495.
[13] WANG X L, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7794-7803.
[14] ZHAO H S, ZHANG Y, LIU S, et al. PSANet: point-wise spatial attention network for scene parsing[C]//Proceedings of the European Conference on Computer Vision. Munich: Springer, 2018: 267-283.
[15] FU J, LIU J, TIAN H J, et al. Dual attention network for scene segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 3146-3154.
[16] HUANG Z L, WANG X G, HUANG L C, et al. CCNet: criss-cross attention for semantic segmentation[C]//Proceedings of the IEEE International Conference on Computer Vision. Seoul: IEEE, 2019: 603-612.
[17] CHEN Y P, KALANTIDIS Y, LI J S, et al. A2-Nets: double attention networks[C]//Proceedings of the Advances in Neural Information Processing Systems. Montreal: NeurIPS, 2018: 350-359.
[18] LI X, ZHONG Z S, WU J L, et al. Expectation-maximization attention networks for semantic segmentation[C]//Proceedings of the IEEE International Conference on Computer Vision. Seoul: IEEE, 2019: 9167-9176.
[19] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[20] WANG H Y, KEMBHAVI A, FARHADI A, et al. ELASTIC: improving CNNs with dynamic scaling policies[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 2258-2267.
[21] HOU X G. Research on key technologies for segmentation of traditional cultural patterns[D]. Beijing: Beijing University of Posts and Telecommunications, 2021: 30-34.