Journal of Beijing University of Posts and Telecommunications

  • EI core journal

Journal of Beijing University of Posts and Telecommunications ›› 2023, Vol. 46 ›› Issue (3): 31-36.


Filter Pruning Algorithm Based on Deep Reinforcement Learning

LIU Yang, TENG Yinglei, NIU Tao, ZHI Jialin   

  • Received:2022-06-03 Revised:2022-07-10 Online:2023-06-28 Published:2023-06-05

Abstract:

When deep neural network models are deployed on terminal devices, they face the problem of insufficient computing capability and storage resources. Model pruning provides an effective model compression method that reduces the number of parameters and the computational complexity while maintaining model accuracy. However, traditional pruning methods mostly rely on prior knowledge to set the pruning rate and pruning criterion; they ignore the differences in pruning sensitivity and parameter distribution across the layers of a deep model, and they lack fine-grained optimization. To solve this problem, a filter pruning scheme based on deep reinforcement learning is proposed to minimize the accuracy loss of the pruned model while satisfying a target sparsity. In the proposed scheme, the parameterized deep Q-network algorithm is used to solve the constructed nonlinear optimization problem with mixed variables. Experimental results show that the proposed scheme can select a suitable pruning criterion and pruning rate for each layer, reducing the accuracy loss of the model after pruning.
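As a rough illustration of the per-layer decision described in the abstract, the sketch below ranks the filters of a convolutional layer under a chosen importance criterion and selects the fraction to remove at a given rate. This is a minimal sketch, not the authors' implementation: the function name, the criterion labels, and the use of PyTorch are assumptions made here for clarity. The discrete criterion choice together with the continuous pruning rate corresponds to the kind of mixed (discrete and continuous) per-layer action that a parameterized deep Q-network agent would output.

```python
# Minimal sketch (illustrative, not the paper's code): score the filters of a
# Conv2d layer under a chosen criterion and pick the least important ones to
# prune at a given rate. Criterion names and helper are hypothetical.
import torch
import torch.nn as nn


def filters_to_prune(conv: nn.Conv2d, criterion: str, prune_rate: float):
    """Return indices of output filters to remove from `conv`.

    criterion  -- discrete part of the per-layer action (importance measure)
    prune_rate -- continuous part of the per-layer action (fraction removed)
    """
    weight = conv.weight.detach()  # shape: (out_channels, in_channels, k, k)
    if criterion == "l1":
        scores = weight.abs().sum(dim=(1, 2, 3))            # L1 norm per filter
    elif criterion == "l2":
        scores = weight.pow(2).sum(dim=(1, 2, 3)).sqrt()    # L2 norm per filter
    else:
        raise ValueError(f"unknown criterion: {criterion}")

    num_prune = int(prune_rate * weight.size(0))
    # Filters with the smallest scores are treated as least important.
    return torch.argsort(scores)[:num_prune].tolist()


if __name__ == "__main__":
    layer = nn.Conv2d(16, 32, kernel_size=3)
    # One (criterion, rate) pair per layer would be chosen by the RL agent.
    print(filters_to_prune(layer, criterion="l1", prune_rate=0.3))
```

In this sketch the agent's job reduces to choosing, for each layer, which scoring rule to apply and how many filters to drop; the accuracy of the pruned model under a sparsity constraint would then serve as the reward signal.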

Key words: edge computing, deep learning model, filter pruning, deep reinforcement learning
