Journal of Beijing University of Posts and Telecommunications ›› 2023, Vol. 46 ›› Issue (3): 19-24.


Research on Graph Neural Network Accelerator Optimization Based on a Heterogeneous Architecture

WU Jin, ZHAO Bo, WEN Heng, WANG Yu   

  • Received: 2022-05-06; Revised: 2022-07-29; Online: 2023-06-28; Published: 2023-06-05

Abstract:

To improve the computational performance and efficiency of graph neural networks, this work studies the large memory footprint and random memory access that arise during graph neural network training and proposes a high-performance graph neural network accelerator based on a heterogeneous architecture. The heterogeneous platform combines a central processing unit with a field-programmable gate array and consists mainly of a computation module and a buffer module. Different hardware architectures are designed to implement the hybrid computing module, while the buffer module caches the input node features and intermediate variables. Targeting the two mixed execution modes, irregular aggregation and regular update, the computation module is refined, and the accelerator is further optimized through data parallelism and redundancy elimination. Experiments on the Ultra96-V2 hardware platform show that the designed graph neural network accelerator not only improves system performance but also significantly reduces power consumption.
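As a reference for the computation pattern the abstract describes, the following is a minimal software sketch (Python/NumPy) of a single graph neural network layer with an irregular, memory-bound aggregation phase followed by a regular, compute-bound dense update. It is an assumed illustration of the aggregate-update pattern, not the authors' hardware design; all function and variable names are hypothetical.

    # Minimal software reference of the GNN layer the accelerator targets.
    # Names are illustrative only and do not come from the paper.
    import numpy as np

    def gnn_layer(features, neighbors, weight):
        """features: (N, F) node features; neighbors: list of neighbor-index
        lists per node; weight: (F, F_out) dense update matrix."""
        n_nodes, _ = features.shape
        aggregated = np.zeros_like(features)

        # Aggregation phase: random memory access driven by graph structure.
        # On an accelerator this is the irregular execution mode; buffering
        # node features on-chip reduces redundant off-chip reads.
        for v in range(n_nodes):
            idx = neighbors[v]
            if idx:
                aggregated[v] = features[idx].mean(axis=0)

        # Update phase: a regular dense matrix multiply, well suited to
        # data-parallel processing elements on FPGA fabric.
        return np.maximum(aggregated @ weight, 0.0)  # ReLU activation

    # Toy usage: 4 nodes, 8-dim features, a small ring graph.
    feats = np.random.rand(4, 8).astype(np.float32)
    adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
    out = gnn_layer(feats, adj, np.random.rand(8, 8).astype(np.float32))
    print(out.shape)  # (4, 8)

The separation of the two loops above mirrors the hybrid execution modes mentioned in the abstract: the aggregation loop is dominated by irregular memory access, while the update step is a dense, data-parallel computation.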

Key words: graph neural network accelerator, heterogeneous architecture, hybrid computing
