
Journal of Beijing University of Posts and Telecommunications

  • EI Core Journal

Current Issue

    • Service Networking Over Hyper-WAN Integrated Distributed Cloud: Vision and Key Technologies
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 1-8.
    • With the development of cloud computing, edge computing, distributed cloud and related technologies, the future network is expected to distribute and coordinate all kinds of service resources (computing power, data, content, etc.) intelligently and dynamically across multiple cloud-edge levels, and even across the wide area networks of multiple operators. To meet these service and networking requirements, service networking over a hyper-WAN integrated distributed cloud is proposed. A dual-plane networking vision and a layered architecture for the future network are introduced, and the key technologies built on the cloud management platform and the network operating system, such as service identification, dynamic networking and routing, deterministic networking and integrated resource scheduling, are analyzed in terms of principle and performance. Finally, the remaining challenges and future research directions are discussed.
    • Deterministic Scheduling and Routing Joint Intelligent Optimization Scheme in Computing First Network
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 9-14.
    • The computing first network (CFN) integrates heterogeneous computing power information with the network to improve resource utilization and transmission efficiency, while the time-sensitive network (TSN) guarantees low-latency, high-reliability transmission; fusing the two enables efficient deterministic forwarding. However, jointly deciding resource scheduling and routing in the CFN together with gate control arrangement in the TSN suffers from too many decision variables, high computational complexity and insufficient optimization performance. To address these problems, a fusion architecture combining IEEE 802.1Qbv gate control arrangement, computing-network routing planning and computing resource scheduling is proposed. Based on deep reinforcement learning, an improved reward-back deep Q-network (RBDQN) algorithm is proposed to optimize the gate control list, with a greedy algorithm assisting routing path planning; the algorithm builds a utility function from the average delay, energy consumption and user satisfaction as joint optimization objectives. Simulation results show that, compared with the genetic algorithm, RBDQN shortens the convergence time of small-scale scheduling problems more than twofold and shortens it by dozens of times for multi-service, multi-node computing network problems, while avoiding local optima. Compared with the conventional DQN, the resulting decisions improve the utility function by more than 10% and reduce the convergence time at the same utility by about 50%. (An illustrative DQN-style update sketch follows this entry.)
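For readers who want a concrete picture of the learning loop described above, the following minimal sketch shows a generic DQN-style temporal-difference update with a utility reward built from delay, energy and satisfaction terms. It is not the authors' RBDQN algorithm (the reward-back mechanism and the IEEE 802.1Qbv gate model are not reproduced); the network sizes, utility weights and transition data are illustrative assumptions.

```python
# Generic DQN-style update with a utility reward built from delay, energy and
# satisfaction. NOT the paper's RBDQN or its 802.1Qbv gate-control model;
# weights and dimensions are illustrative.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.95
W_DELAY, W_ENERGY, W_SAT = 0.5, 0.3, 0.2   # assumed utility weights

def utility(delay, energy, satisfaction):
    """Higher is better: penalize delay and energy, reward satisfaction."""
    return -W_DELAY * delay - W_ENERGY * energy + W_SAT * satisfaction

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(state, action, reward, next_state, done):
    """One temporal-difference step on a single transition."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + (1.0 - done) * GAMMA * target_net(next_state).max()
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy transition: random states, reward from the utility of measured KPIs
s, s2 = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
r = torch.tensor(utility(delay=1.2, energy=0.8, satisfaction=0.9))
print(td_update(s, action=2, reward=r, next_state=s2, done=0.0))
```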
    • Research on Intelligent Offloading Algorithm for Vehicle-Road Collaborative Tasks Based on SMDP Model
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 15-21.
    • Aiming at the problem that the high mobility of vehicles in a vehicle-road collaboration system makes it difficult for edge computing nodes to control latency, a task offloading strategy based on the semi-Markov decision process (SMDP) is proposed. The state space, action space, system reward and transition probabilities of the roadside service node are defined to model the task waiting queue; the overall benefit is improved by increasing the proportion of vehicle tasks completed within the coverage of the service node, and the Bellman equation is iterated until the value function over the state space reaches the optimum. Numerical results show that the offloading decisions of the algorithm effectively improve the overall benefit of the vehicle-road collaboration system. (A minimal value-iteration sketch follows this entry.)
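The Bellman iteration mentioned above can be illustrated with a few lines of value iteration. The sketch below uses a toy discounted MDP with random transition probabilities and rewards as placeholders; the paper's SMDP state/action design and sojourn-time handling are not reproduced, and the fixed discount factor is a simplification.

```python
# Toy value iteration illustrating the Bellman backup; states, actions,
# rewards and transition probabilities are placeholders, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, GAMMA, TOL = 6, 3, 0.9, 1e-6

# P[a, s, s'] : transition probabilities; R[s, a] : expected immediate reward
P = rng.random((N_ACTIONS, N_STATES, N_STATES))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((N_STATES, N_ACTIONS))

V = np.zeros(N_STATES)
while True:
    # Q(s, a) = R(s, a) + gamma * sum_s' P(s' | s, a) V(s')
    Q = R + GAMMA * np.einsum("asx,x->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < TOL:
        break
    V = V_new

policy = Q.argmax(axis=1)   # greedy action (e.g. accept / offload / reject) per state
print("optimal values:", np.round(V, 3))
print("greedy policy :", policy)
```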
    • Content Caching Scheme Based on Federated Learning in Fog Computing Networks
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 22-28.
    • With the rapid development of Internet of Things technology, explosive growth in end-user service demand poses great challenges to 5G networks. To reduce content acquisition delay while protecting user privacy and improving user experience, a content caching scheme based on federated learning in fog computing networks is proposed. Firstly, a device-to-device (D2D) collaborative fog computing network model is built, in which users can obtain content from other users, fog nodes or the cloud through D2D and wireless links. Secondly, each user builds a deep neural network model locally and trains it on its historical request data, and the fog node aggregates the local models to predict global content popularity; personalized content recommendation lists are also provided to users to improve the cache hit rate. Finally, simulations based on a real data set show that the scheme effectively reduces content acquisition delay and improves the cache hit rate. (A minimal federated-averaging sketch follows this entry.)
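A minimal federated-averaging sketch, under the assumption that the fog node aggregates locally trained model weights rather than raw request histories. The logistic-regression-style local model, the data and the number of rounds are illustrative placeholders, not the paper's deep neural network.

```python
# Minimal FedAvg sketch: the fog node averages locally trained weights instead
# of collecting users' raw request histories. Model and data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N_USERS, N_CONTENTS, DIM = 5, 20, 10

def local_train(weights, features, popular_labels, lr=0.1, epochs=20):
    """One user's local logistic-regression-style update on private data."""
    w = weights.copy()
    for _ in range(epochs):
        probs = 1.0 / (1.0 + np.exp(-(features @ w)))
        grad = features.T @ (probs - popular_labels) / len(popular_labels)
        w -= lr * grad
    return w

global_w = np.zeros(DIM)
for rnd in range(3):                                  # a few federated rounds
    local_models = []
    for _ in range(N_USERS):
        X = rng.normal(size=(N_CONTENTS, DIM))        # private request features
        y = (rng.random(N_CONTENTS) > 0.5).astype(float)
        local_models.append(local_train(global_w, X, y))
    global_w = np.mean(local_models, axis=0)          # FedAvg aggregation at the fog node

print("aggregated model weights:", np.round(global_w, 3))
```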
    • An Ensemble Learning-Aided DDPG Resource Optimization Algorithm
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 29-36.
    • An integrated communication, computing and caching (3C) architecture is constructed to solve the joint optimization problem of task scheduling and resource allocation. To coordinate network functions and dynamically allocate the limited 3C resources, a deep reinforcement learning (DRL) algorithm is adopted to maximize the profit function of the mobile virtual network operator, jointly considering the diversity of user service requests and dynamic wireless channel conditions. Simulation results show that the DRL-based resource allocation scheme outperforms the two comparison strategies, and that the ensemble learning-aided DDPG algorithm yields faster decision output and higher cost efficiency. (An illustrative ensemble-critic sketch follows this entry.)
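The ensemble-learning idea mentioned above can be pictured as several independently initialized critics whose value estimates are averaged. The sketch below shows only that averaging step with illustrative dimensions; it is not the authors' full DDPG training pipeline or profit model.

```python
# Sketch of the "ensemble of critics" idea: several independently initialized
# critics score the same (state, action) pair and the estimates are averaged.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, N_CRITICS = 10, 3, 4

def make_critic():
    return nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

critics = [make_critic() for _ in range(N_CRITICS)]

def ensemble_q(state, action):
    """Average the Q-value predicted by each critic for the same input."""
    x = torch.cat([state, action], dim=-1)
    with torch.no_grad():
        return torch.stack([c(x) for c in critics]).mean()

s = torch.randn(STATE_DIM)   # e.g. channel states plus 3C resource occupancy
a = torch.rand(ACTION_DIM)   # e.g. fractions of communication/computing/caching resources
print("ensemble Q estimate:", ensemble_q(s, a).item())
```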
    • Fog Node Contribution Degree Based Task Offloading Algorithm for Heterogeneous Cellular Network
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 37-42.
    • Fog computing extends cloud services to the network edge and can be deployed in various scenarios. To solve the computation task offloading problem in dense heterogeneous cellular networks and to make full and reasonable use of the computing resources of all fog nodes, a computation task offloading algorithm is proposed. Firstly, the feasibility, fairness and stability of fog node cooperation are modeled. Secondly, the cooperation contribution degree and contribution ratio coefficient are defined, and a cooperative fog node selection algorithm is proposed by combining thresholds on the remaining computing capacity and on the cooperative contribution degree of the fog nodes. Finally, an optimization problem is formulated to minimize the weighted sum of task execution energy consumption and the user's payment cost under the constraint of the maximum tolerable task delay, and the optimal offloading decision is obtained by combining the external penalty function method with Powell's (direction acceleration) method. Simulation results show that, compared with the baseline algorithms considered, the proposed algorithm effectively reduces the total cost of the dense heterogeneous cellular network. (A penalty-function-plus-Powell sketch follows this entry.)
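A hedged sketch of the external penalty function method combined with Powell's derivative-free search, as named in the abstract. The energy, payment and delay models below are toy placeholders with assumed weights, not the paper's system model.

```python
# Exterior penalty method combined with Powell's derivative-free search.
# The cost, delay and constants below are toy placeholders.
import numpy as np
from scipy.optimize import minimize

W_ENERGY, W_PAY, D_MAX, RHO = 0.6, 0.4, 1.0, 50.0   # assumed weights / penalty factor

def energy(x):     # x[0]: offloaded fraction, x[1]: purchased fog CPU share
    return 2.0 * (1 - x[0]) ** 2 + 0.5 * x[0]

def payment(x):
    return 1.5 * x[0] * x[1]

def delay(x):
    return (1 - x[0]) * 1.2 + x[0] / (0.2 + x[1])    # local + remote processing time

def penalized_cost(x):
    cost = W_ENERGY * energy(x) + W_PAY * payment(x)
    violation = max(0.0, delay(x) - D_MAX)           # exterior penalty for the delay bound
    box = sum(max(0.0, -xi) + max(0.0, xi - 1.0) for xi in x)
    return cost + RHO * (violation ** 2 + box ** 2)

res = minimize(penalized_cost, x0=np.array([0.5, 0.5]), method="Powell")
print("offloading decision:", np.round(res.x, 3), "cost:", round(res.fun, 4))
```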
    • Predictive Offloading Decision Algorithm for High Mobility Vehicular Network Scenarios
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 43-49.
    • Aiming at the high task offloading failure rate caused by frequent handovers between edge servers for highly mobile vehicles in Internet of Vehicles scenarios, a predictive offloading decision algorithm is proposed. Firstly, computing models for the local server, edge server and cloud server are constructed, and the offloading mode of a task is pre-determined from the constraints of task size, maximum latency tolerance and server resources. Secondly, for tasks offloaded to edge servers, a vehicle location prediction model is built with a long short-term memory (LSTM) network to generate the set of edge servers available for offloading. Finally, an improved ant colony algorithm is used to obtain the optimal allocation of offloaded tasks among multiple edge servers. Simulation results show that the proposed algorithm improves the task completion rate and resource utilization. (An LSTM location-prediction sketch follows this entry.)
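A minimal LSTM next-position predictor, illustrating the location-prediction stage only. The track data, network size and training loop are illustrative; the ant colony allocation stage and the real mobility model are not reproduced.

```python
# Minimal LSTM next-position predictor: given a short history of (x, y)
# positions, predict the next position, which can then be matched against
# edge-server coverage areas. Dimensions and data are illustrative only.
import torch
import torch.nn as nn

class PositionLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, track):                 # track: (batch, seq_len, 2)
        out, _ = self.lstm(track)
        return self.head(out[:, -1])          # predict the next (x, y)

model = PositionLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# toy training data: noisy straight-line tracks
t = torch.arange(10, dtype=torch.float32).view(1, 10, 1)
tracks = torch.cat([t, 0.5 * t], dim=2) + 0.01 * torch.randn(64, 10, 2)
target = tracks[:, -1] + torch.tensor([1.0, 0.5])   # next step along the line

for _ in range(200):
    loss = nn.functional.mse_loss(model(tracks), target)
    opt.zero_grad(); loss.backward(); opt.step()

print("predicted next position:", model(tracks[:1]).detach().numpy())
```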
    • Research on Resource Allocation and Task Offloading Joint Optimization for Mobile Edge Computing in Ultra-Dense Networks
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 50-56.
    • Mobile edge computing (MEC) is an emerging computing paradigm that enhances the computing capacity of terminal devices, greatly reduces delay and energy consumption, and extends battery life while preserving the user's service experience. The ultra-dense network (UDN) is regarded as a key technology of 5G wireless communication; it reduces system computing cost and improves network throughput to provide low-latency computing and communication services. For MEC combined with UDN, where the ultra-dense deployment of network infrastructure causes channel interference, the joint optimization of resource allocation and task offloading is studied with both delay and energy consumption taken into account, in order to minimize the total system cost comprising delay and energy consumption. Because the decision variables are coupled, the original problem is decomposed into a resource optimization subproblem and a task offloading subproblem, and a CF-APSO algorithm is proposed to obtain a suboptimal solution, whose effectiveness is validated. The results show that, in an ultra-dense deployment of micro base stations, the proposed algorithm reduces delay and energy consumption and improves system performance compared with three other algorithms. (A constriction-factor PSO sketch follows this entry.)
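A particle swarm optimization sketch using Clerc's constriction factor, which is one plausible reading of "CF-APSO" (an assumption; the acronym is not expanded above). The cost function is a toy stand-in for the delay-plus-energy system cost, not the paper's formulation.

```python
# PSO with the constriction factor (one plausible reading of "CF" here);
# the cost is a toy stand-in for the delay-plus-energy system cost.
import numpy as np

rng = np.random.default_rng(2)
DIM, N_PARTICLES, ITERS = 4, 30, 200
C1 = C2 = 2.05
PHI = C1 + C2
CHI = 2.0 / abs(2.0 - PHI - np.sqrt(PHI ** 2 - 4.0 * PHI))   # constriction factor ~0.729

def cost(x):                      # toy "delay + energy" cost over offloading ratios in [0, 1]
    x = np.clip(x, 0.0, 1.0)
    return np.sum((1 - x) * 1.5 + x ** 2 * 0.8)

pos = rng.random((N_PARTICLES, DIM))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = CHI * (vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos))
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best decision vector:", np.round(gbest, 3), "cost:", round(cost(gbest), 4))
```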
    • Research on On-Board Edge DNN Inference Strategies for LEO Satellite Networks
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 57-63.
    • With the improvement of the on-orbit computing capability of low Earth orbit (LEO) satellites and the surging demand for artificial intelligence services such as target detection and satellite reconnaissance, deep neural networks (DNN) have become the first choice for realizing intelligent services thanks to their model structure and learning efficiency. To cope with the limited resources and difficult communication caused by satellites that move at high speed, are small in size and are heterogeneous, performing edge computing on LEO satellites with distributed DNN task inference has become an inevitable trend. Firstly, a directed acyclic graph (DAG) is used to describe the structure of the DNN model, and the distributed DNN inference problem in an LEO satellite network is studied. Then, a quantum evolution algorithm (QEA) based on an excitation function and processing delay is designed to obtain the optimal decisions on sampling rate setting and task offloading. Finally, simulation results show that the proposed QEA outperforms traditional methods. (A quantum-evolution sketch follows this entry.)
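A compact quantum-inspired evolutionary algorithm (QEA) sketch for a binary offloading decision vector. The qubit-rotation update is generic; the paper's excitation-function and processing-delay design, and the sampling-rate decision, are not reproduced.

```python
# Quantum-inspired evolutionary algorithm for a binary offload/no-offload
# decision per DNN layer. The fitness is a toy cost, not the paper's design.
import numpy as np

rng = np.random.default_rng(3)
N_BITS, POP, ITERS, DTHETA = 12, 20, 100, 0.05 * np.pi

def fitness(bits):
    """Toy cost: local layers pay computing, offloaded layers pay transmission
    plus a quadratic congestion penalty."""
    s = bits.sum()
    return (N_BITS - s) * 1.0 + 0.6 * s + 0.05 * s ** 2

theta = np.full((POP, N_BITS), np.pi / 4)            # equal superposition: P(1) = 0.5
best_bits, best_fit = None, np.inf

for _ in range(ITERS):
    # "observe" each qubit: probability of measuring 1 is sin^2(theta)
    bits = (rng.random((POP, N_BITS)) < np.sin(theta) ** 2).astype(int)
    fits = np.array([fitness(b) for b in bits])
    if fits.min() < best_fit:
        best_fit, best_bits = fits.min(), bits[fits.argmin()].copy()
    # rotate each qubit angle toward the best solution found so far
    theta += DTHETA * np.where(best_bits == 1, 1.0, -1.0)
    theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)

print("best offloading pattern:", best_bits, "cost:", round(best_fit, 3))
```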
    • Performance and Optimization of NOMA-MEC Networks with Hardware Impairments
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 64-70.
    • Existing studies on mobile edge computing (MEC) networks based on non-orthogonal multiple access (NOMA) have ignored hardware impairments in the transceiver, so the impact of hardware impairments on the computational performance of NOMA-MEC networks remains unclear. A NOMA-MEC network consisting of two edge users and an MEC server is considered, and the successful computation probability for given hardware impairment parameters is derived by means of stochastic analysis. On this basis, the successful computation probability of the task is maximized by jointly optimizing the partial offloading strategy and the user transmit power, and the optimal solution is derived in closed form. Simulations show that 1) hardware impairments create a bottleneck in the successful computation probability, i.e., as the NOMA user transmit power tends to infinity the probability approaches a constant smaller than one; and 2) the successful computation probability of the user task decreases as the impairment factor increases. (A Monte Carlo sketch follows this entry.)
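A Monte Carlo sketch of a successful-computation probability under a standard transceiver-distortion model in which the distortion power is proportional to the received signal power. It uses a single-user link with a random task size purely to illustrate the impairment-induced ceiling; it is not the paper's two-user NOMA-MEC analysis.

```python
# Monte Carlo illustration of the impairment-induced ceiling: distortion power
# grows with signal power, so the success probability saturates below one.
import numpy as np

rng = np.random.default_rng(4)
N, NOISE, T = 200_000, 1.0, 1.0                     # trials, noise power, slot length

def success_prob(tx_power, kappa):
    """kappa: aggregate hardware impairment level (0 = ideal hardware)."""
    h2 = rng.exponential(1.0, N)                    # Rayleigh fading, |h|^2 ~ Exp(1)
    task_bits = rng.uniform(1.0, 8.0, N)            # random offloaded task size (bit/Hz)
    sndr = tx_power * h2 / (kappa ** 2 * tx_power * h2 + NOISE)
    return np.mean(T * np.log2(1.0 + sndr) >= task_bits)

for p in [1, 10, 100, 1_000, 10_000]:
    print(f"P = {p:>6}: ideal {success_prob(p, 0.0):.3f}   impaired {success_prob(p, 0.2):.3f}")
```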
    • Research on Fairness Optimization in a Backscatter Assisted Wirelessly Powered NOMA-MEC Network
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 71-77.
    • To overcome the energy shortage and limited computing power of Internet of Things (IoT) nodes, non-orthogonal multiple access (NOMA) is introduced into a backscatter communication (BackCom) assisted wireless powered mobile edge computing (MEC) network, forming a BackCom-assisted wireless powered NOMA-MEC network. This network not only combines the advantages of BackCom and active transmission, but also exploits NOMA to further improve spectrum efficiency. To ensure fairness of the computation bits among IoT nodes, the computation bits of the worst-off IoT node are maximized based on the max-min criterion. A non-convex optimization problem is formulated under energy causality and computing power constraints by jointly optimizing the energy harvesting time, backscattering time, backscattering coefficient, NOMA transmission time, NOMA transmit power, local computing frequency and local computing time of the IoT nodes. The optimal local computing time of each node is first obtained by contradiction and substituted into the original problem, and the backscattering coefficient and backscattering time are then decoupled through auxiliary variables. To handle the co-channel interference introduced by NOMA, the complex expression of the offloaded bits is replaced with its lower bound to obtain a simpler subproblem, which is converted into a convex problem by variable substitution. Finally, an iterative algorithm is proposed to obtain the optimal solution of the subproblem. Simulation results verify the fast convergence and accuracy of the proposed algorithm and show that, compared with existing schemes, the proposed scheme better guarantees fairness of the computation bits among the IoT nodes. (A toy max-min allocation sketch follows this entry.)
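A toy illustration of the max-min criterion only: with fixed per-node rates and a shared time budget, the minimum computation bits can be maximized through the standard epigraph reformulation and a linear program. This is far simpler than the paper's joint non-convex design and is included only to show the fairness objective.

```python
# Max-min fairness via the epigraph trick: maximize t subject to
# bits_i = rate_i * tau_i >= t, with the tau_i sharing one time budget.
import numpy as np
from scipy.optimize import linprog

rates = np.array([2.0, 5.0, 1.0, 3.0])     # bits computed per unit time at each IoT node
T_TOTAL = 1.0                              # shared time budget

n = len(rates)
# variables: [tau_1 .. tau_n, t];  objective: maximize t  ->  minimize -t
c = np.zeros(n + 1); c[-1] = -1.0
# t - rate_i * tau_i <= 0 for every node i
A_ub = np.hstack([-np.diag(rates), np.ones((n, 1))])
b_ub = np.zeros(n)
# sum_i tau_i <= T_TOTAL
A_ub = np.vstack([A_ub, np.append(np.ones(n), 0.0)])
b_ub = np.append(b_ub, T_TOTAL)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
tau, t = res.x[:n], res.x[-1]
print("time shares:", np.round(tau, 3), "min computation bits:", round(t, 4))
```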
    • Cooperative Offloading and Resource Allocation Algorithm of Multi-edge Nodes in VEC
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 78-83.
    • Aiming at the high task computing cost and unbalanced load of edge nodes in vehicular edge computing (VEC), software defined networking (SDN) is combined with multi-edge computing to construct a three-layer "end-multi-edge-cloud" software defined vehicular edge computing (SDVEC) model, and a multi-edge-node cooperative offloading and resource allocation algorithm (MCORA-KDQN) is proposed. The SDN controller obtains network information from a global perspective and uniformly schedules task offloading and resource allocation. In the algorithm, an improved K-means method first partitions tasks into a local cluster, an edge-node cluster and a cloud-server cluster to determine the initial offloading decision, and a deep Q-network (DQN) then obtains the optimal offloading decision, offloading proportion and resource allocation strategy for tasks in the edge-node cluster. Simulation results show that, compared with the baseline algorithms, the proposed algorithm reduces the task computing cost by at least 18.6%, improves the resource utilization of edge nodes by at least 22.9%, and balances the load among edge nodes. (A task-clustering sketch follows this entry.)
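A sketch of the first (clustering) stage only: tasks are grouped by size, deadline and required CPU cycles into three clusters that could map to local, edge and cloud execution. The features, scaling and k = 3 are assumptions; the DQN stage is omitted.

```python
# K-means clustering of tasks into three groups as an initial offloading
# decision. Feature choices and scaling are assumptions; the DQN stage is omitted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# columns: data size (MB), max tolerable delay (s), CPU cycles (Gcycles)
tasks = np.column_stack([
    rng.uniform(0.1, 20.0, 300),
    rng.uniform(0.05, 2.0, 300),
    rng.uniform(0.1, 10.0, 300),
])

features = StandardScaler().fit_transform(tasks)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for k in range(3):
    sel = tasks[labels == k]
    print(f"cluster {k}: {len(sel):3d} tasks, mean size {sel[:, 0].mean():5.2f} MB, "
          f"mean deadline {sel[:, 1].mean():.2f} s")
```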
    • Stagewise Weakly Supervised Satellite Imagery Road Extraction Based on Road Centerline Scribbles
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 84-90.
    • Extracting roads from satellite images through semantic segmentation has become the mainstream solution for remote sensing based road monitoring. However, because roads in satellite imagery exhibit complex features and variable textures across diverse geographical environments, and pixel-level road labeling is expensive, it is unaffordable to build a large dataset with pixel-level road annotations to train semantic segmentation models. To solve these problems, a stagewise weakly supervised road extraction algorithm based on road centerline scribbles is proposed: the features of road centerline scribbles are learned in a weakly supervised way, and the road segmentation model is trained in stages. In addition, a pseudo-mask update strategy and a hybrid training strategy are proposed, and loss functions for the road foreground and background are designed. The results show that, compared with other weakly supervised methods based on road centerlines, the proposed algorithm achieves superior road segmentation performance; ablation studies further verify the effectiveness of the proposed training strategy. (A partial cross-entropy sketch follows this entry.)
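A minimal partial cross-entropy sketch for scribble supervision: pixels without a scribble label are assigned an ignore index and contribute no gradient. The tiny network and label layout are placeholders; the stagewise training, pseudo-mask update and hybrid strategies are not reproduced.

```python
# Partial cross-entropy for scribble supervision: unlabeled pixels are marked
# with ignore_index and contribute no gradient. Network and labels are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

IGNORE = 255
net = nn.Conv2d(3, 2, kernel_size=3, padding=1)     # stand-in for a segmentation model

images = torch.randn(4, 3, 64, 64)
labels = torch.full((4, 64, 64), IGNORE, dtype=torch.long)   # mostly unlabeled
labels[:, 30:34, :] = 1        # a thin "centerline scribble" marked as road
labels[:, :4, :] = 0           # a few pixels scribbled as background

logits = net(images)           # (N, 2, H, W)
loss = F.cross_entropy(logits, labels, ignore_index=IGNORE)
loss.backward()
print("partial cross-entropy:", loss.item())
```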
    • 3D Segmentation of Brain Tumor MRI Image based on RAPNet
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 91-97.
    • To address the weak multi-scale lesion processing ability of traditional deep convolutional neural networks for fully automatic brain tumor segmentation, improved recurrent residual convolutional units are used to build the feature learning backbone, which improves the spatial relevance of feature learning and alleviates the network degradation and gradient dispersion caused by overly complex models. A hierarchical feature pyramid is constructed with 3D atrous convolutions of different dilation rates and a cross-model attention mechanism, and context features are combined to improve the recognition of tumors of different sizes. Multi-layer feature maps are then combined to predict the tumor and obtain the final segmentation result. Extensive ablation experiments on the BraTS 2019 dataset show that the average DSC values for WT, TC and ET are 0.897, 0.852 and 0.823, respectively. Compared with existing efficient brain tumor segmentation methods, RAPNet learns the multi-scale features of lesions more effectively. (A 3D atrous convolution sketch follows this entry.)
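A minimal 3D atrous (dilated) convolution pyramid, illustrating how parallel dilation rates capture lesions of different sizes. Channel counts and rates are illustrative and are not RAPNet's actual configuration.

```python
# Parallel 3D convolutions with different dilation rates, concatenated and
# fused; an illustrative pyramid, not RAPNet's exact module.
import torch
import torch.nn as nn

class AtrousPyramid3D(nn.Module):
    def __init__(self, in_ch=32, out_ch=16, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv3d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]   # same spatial size per branch
        return self.fuse(torch.cat(feats, dim=1))

x = torch.randn(1, 32, 16, 64, 64)        # (batch, channels, depth, height, width)
print(AtrousPyramid3D()(x).shape)         # torch.Size([1, 16, 16, 64, 64])
```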
    • Ultrasonic Phased Array Focusing Scanning Imaging Method Based on Phase Shift Migration
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 98-103.
    • With the rapid development of manufacturing and the demand for high-precision quality inspection, ultrasonic phased arrays have been widely adopted for their fast, high-resolution real-time imaging. An ultrasonic phased array combined-processing imaging method based on the phase shift migration algorithm is proposed to improve the imaging quality of array scanning. Firstly, a simulation model is established with finite element software, and the image of the imaging region is reconstructed with the phase shift migration algorithm and a two-dimensional inverse Fourier transform, and compared with conventional phased array imaging. In the experiments, the parallel focusing scan signals and the single-transmit single-receive time-domain signals of the phased array are acquired and processed by peak extraction, envelope extraction and differencing to improve the signal-to-noise ratio; the processed signals are then de-delayed and weight-summed to construct the array scan data matrix. The phase shift migration results of phased array focused scanning are compared quantitatively with phased array time-domain synthetic aperture imaging and with phase shift migration of single-transmit single-receive signals. The results show that phase shift migration based on phased array parallel focused scan data improves the imaging quality of array scanning; compared with conventional Hilbert envelope extraction, the extracted envelope is smoother and more stable, which benefits imaging. When inspecting a carbon steel test block with densely arranged holes, compared with single-transmit single-receive phase shift migration using the same number of elements and with phased array time-domain synthetic aperture imaging, the image signal-to-noise ratio of 11 transverse hole defects is increased by 12.98 dB and 18.85 dB on average, and the defect area error rate is reduced by 3.74% and 4.05% on average. (A phase shift migration sketch follows this entry.)
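A phase shift migration sketch on synthetic single-scatterer data, using the exploding-reflector (half sound speed) model for pulse-echo imaging. The array geometry, pulse and material parameters are assumptions chosen only to show the frequency-wavenumber extrapolation loop; it is not the paper's combined-processing pipeline.

```python
# Core loop of phase shift migration on a synthetic B-scan with one point
# reflector. Parameters (steel sound speed, pitch, pulse) are illustrative.
import numpy as np

C, FS, PITCH = 5900.0, 50e6, 0.5e-3          # sound speed (m/s), sampling rate, element pitch
NX, NT, NZ, DZ = 64, 512, 100, 0.2e-3

# synthetic B-scan: one point reflector at (x0, z0) producing hyperbolic echoes
t = np.arange(NT) / FS
x = (np.arange(NX) - NX / 2) * PITCH
x0, z0 = 0.0, 10e-3
data = np.zeros((NT, NX))
for ix, xe in enumerate(x):
    tof = 2.0 * np.hypot(xe - x0, z0) / C
    data[:, ix] = np.exp(-((t - tof) * 5e6) ** 2) * np.cos(2 * np.pi * 5e6 * (t - tof))

# transform to the (frequency, horizontal wavenumber) domain
D = np.fft.fft2(data)                               # axes: (f, kx)
f = np.fft.fftfreq(NT, d=1 / FS)
kx = 2 * np.pi * np.fft.fftfreq(NX, d=PITCH)
c_half = C / 2.0                                    # exploding-reflector (pulse-echo) velocity

kz2 = (2 * np.pi * f[:, None] / c_half) ** 2 - kx[None, :] ** 2
valid = (f[:, None] > 0) & (kz2 > 0)                # keep upgoing, propagating components
kz = np.sqrt(np.where(valid, kz2, 0.0))
prop = np.where(valid, np.exp(1j * kz * DZ), 0.0)
D = np.where(valid, D, 0.0)

image = np.zeros((NZ, NX))
for iz in range(NZ):
    image[iz] = np.abs(np.fft.ifft(D.sum(axis=0)))  # imaging condition: t = 0 at this depth
    D = D * prop                                    # extrapolate one step deeper

zpk, xpk = np.unravel_index(image.argmax(), image.shape)
print(f"image peak at depth {zpk * DZ * 1e3:.2f} mm (true reflector depth 10.00 mm)")
```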
    • Traditional Clothing Image Classification Algorithm Based on Multi-Layer Discriminant Dictionary Learning
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 104-108.
    • Multi-layer discriminant dictionary learning has achieved remarkable results in image classification. However, most existing multi-layer discriminant dictionary learning methods update the dictionary with the alternating direction method of multipliers and perform poorly in multi-label classification when the image content is rich and carries multiple labels. A two-layer discriminant dictionary learning structure composed of a recursive least squares method and a decorrelation-enhanced reconstruction coefficient algorithm is better suited to multi-label image classification. The data are sparsely decomposed several times through the multi-layer discriminant dictionaries, and the feature vectors obtained from the sparse decomposition are classified by a linear classifier in the last layer. Experiments on a dataset of Ming and Qing dynasty dress patterns verify the superiority of the algorithm: compared with the latest existing algorithm, it reaches a classification accuracy of 82.17%, the best result among similar algorithms. (A dictionary-learning sketch follows this entry.)
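A generic single-layer stand-in using scikit-learn: a dictionary is learned, samples are sparse-coded, and the codes are classified with a linear classifier. The paper's recursive least squares update, decorrelation enhancement and two-layer structure are not reproduced; the data are synthetic placeholders.

```python
# Learn a dictionary, sparse-code the samples, classify the codes linearly.
# A generic stand-in, not the paper's RLS-based two-layer scheme.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
# synthetic two-class data standing in for dress-pattern descriptors
X = np.vstack([rng.normal(0.0, 1.0, (60, 40)), rng.normal(1.0, 1.0, (60, 40))])
y = np.array([0] * 60 + [1] * 60)

dico = DictionaryLearning(n_components=24, alpha=1.0, max_iter=200, random_state=0).fit(X)
codes = sparse_encode(X, dico.components_, alpha=1.0)     # sparse decomposition features

clf = LinearSVC(C=1.0).fit(codes, y)
print("training accuracy on sparse codes:", round(clf.score(codes, y), 3))
```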
    • Simulation Reconstruction of Non-Newtonian Fluids for Monocular Video
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 109-115.
    • In fluid simulation, the parameters of the constitutive model are difficult to estimate accurately, which makes the simulation results inconsistent with the visual appearance of real videos. To solve this problem, a non-Newtonian fluid simulation reconstruction method oriented to real videos is proposed. In the training phase, non-Newtonian fluid simulation videos are taken as input and the best low-dimensional latent representation of a single frame of the fluid simulation is learned; inter-frame prediction is then performed in this latent space, with a convolutional long short-term memory network predicting the latent vectors of future frames; finally, the reconstruction parameters of the constitutive model are predicted from the frame-by-frame latent encodings and the inter-frame temporal features. In the verification stage, a real video of a non-Newtonian fluid is used as input to predict the parameters of the fluid constitutive model and to reconstruct the simulation of the non-Newtonian fluid based on the Cross model. Experimental results show that the video-oriented reconstruction method reproduces fluid flow more consistently with the real video than a reconstruction based on rheometer measurements, achieves higher pixel accuracy at different times, and produces visual effects closer to the actual flow. (A latent-space prediction sketch follows this entry.)
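A schematic of the latent prediction stage only: frames are encoded to latent vectors, an ordinary LSTM (standing in for the paper's convolutional LSTM) predicts the next latent, and a decoder maps it back to an image. All sizes and data are toy placeholders.

```python
# Encode frames to latents, predict the next latent with an LSTM, decode back.
# An ordinary LSTM stands in for the paper's convolutional LSTM; sizes are toy.
import torch
import torch.nn as nn

LATENT = 64

encoder = nn.Sequential(nn.Conv2d(1, 8, 4, stride=4), nn.ReLU(), nn.Flatten(),
                        nn.Linear(8 * 16 * 16, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 64 * 64), nn.Unflatten(1, (1, 64, 64)))
predictor = nn.LSTM(input_size=LATENT, hidden_size=LATENT, batch_first=True)

frames = torch.randn(2, 5, 1, 64, 64)                 # (batch, time, channel, H, W)
z = encoder(frames.flatten(0, 1)).view(2, 5, LATENT)  # per-frame latent codes
out, _ = predictor(z)                                 # temporal prediction in latent space
next_frame = decoder(out[:, -1])                      # decode the predicted next latent
print(next_frame.shape)                               # torch.Size([2, 1, 64, 64])
```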
    • Language Identification Method Based on Fusion Feature MGCC
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 116-121. DOI:10.13190/j.jbupt.2021-322
    • Aiming at the difficulty of effectively representing language information with a single acoustic feature in noisy environments, a language identification method is proposed that combines mel-frequency cepstral coefficients (MFCC) and gammatone frequency cepstral coefficients (GFCC). Firstly, the MFCC and GFCC of the speech are extracted. Then, the two features are fused through a matrix space transformation to obtain the mel-scale gammatone cepstral coefficients (MGCC). Finally, the fused feature is fed into a deep bottleneck network, and the language identification performance of the MGCC feature is tested in 25 different noise environments. Experimental results show that the identification accuracy of the proposed method is much higher than that of single acoustic features and other fused features under different noise types and signal-to-noise ratios; the accuracy reaches 99.56% on the clean corpus and remains above 93% at a signal-to-noise ratio of -5 dB, demonstrating the effectiveness and robustness of the method. (A feature-fusion sketch follows this entry.)
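A sketch of the fusion step only: two cepstral feature matrices are stacked frame by frame and projected with PCA, offered here merely as one plausible instantiation of the unspecified "matrix space transformation". The GFCC matrix is a random placeholder; a real gammatone front end would replace it.

```python
# Frame-wise fusion of two cepstral feature matrices; PCA is one plausible
# stand-in for the unspecified "matrix space transformation". The GFCC matrix
# is a random placeholder.
import numpy as np
import librosa
from sklearn.decomposition import PCA

sr = 16000
y = 0.5 * np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)               # 1 s synthetic tone
mfcc = librosa.feature.mfcc(y=y.astype(np.float32), sr=sr, n_mfcc=13)  # (13, n_frames)
gfcc = np.random.default_rng(9).normal(size=mfcc.shape)                # placeholder gammatone cepstra

stacked = np.vstack([mfcc, gfcc]).T                     # (n_frames, 26)
mgcc = PCA(n_components=13).fit_transform(stacked)      # fused 13-dim feature per frame
print("fused feature shape:", mgcc.shape)
```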
    • Language Identification Based on Gammatone-Scale Power-Normalized Coefficients Spectrograms
Yubin Shao, Da-Chun Zhou
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 122-128.
    • Aiming at the low identification rate of language identification in noisy environments, a language identification method based on Gammatone-scale power-normalized coefficient spectrograms is proposed. The coefficients are extracted as features by exploiting the noise suppression of the power normalization and the auditory characteristics of the Gammatone filter bank, and are converted into spectrogram images. The dark channel prior algorithm and an automatic color scale algorithm are then applied to enhance and denoise the images, and a residual neural network is used for training and identification. Experiments show that, at a signal-to-noise ratio of 0 dB, the identification rate of the proposed method is improved by 39.1%, 12.3%, 19.0%, 5.5%, 28.2% and 28.5% relative to linear gray-scale spectrograms for white, Volvo, pink, HF channel, babble and factory floor noise, respectively; the identification rate at other signal-to-noise ratios is also improved. (A dark channel prior sketch follows this entry.)
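A dark channel prior sketch, showing only the enhancement primitive named above: the per-pixel dark channel is the minimum over a local window of the channel-wise minimum, and the brightest dark-channel pixels give an atmospheric light estimate. The input is a random placeholder image, not a real spectrogram.

```python
# Dark channel prior primitives on a placeholder image: local-window minimum
# of the channel-wise minimum, plus an atmospheric light estimate.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img_rgb, patch=15):
    """img_rgb: float array (H, W, 3) with values in [0, 1]."""
    min_channel = img_rgb.min(axis=2)
    return minimum_filter(min_channel, size=patch)

rng = np.random.default_rng(8)
spectrogram_rgb = rng.random((128, 128, 3))       # placeholder for a colored spectrogram image
dc = dark_channel(spectrogram_rgb)
# atmospheric light: mean of the pixels with the largest dark-channel values
atmosphere = spectrogram_rgb.reshape(-1, 3)[dc.ravel().argsort()[-100:]].mean(axis=0)
print("dark channel range:", round(dc.min(), 3), round(dc.max(), 3), "A:", np.round(atmosphere, 3))
```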
    • Image Quality Evaluation Method Based on Human Visual System
    • Journal of Beijing University of Posts and Telecommunications. 2023, 46(2): 129-136.
    • An image quality evaluation method based on improved Weber local features is proposed. Firstly, the mechanism by which the human visual system perceives image contrast is simulated, and an improved gray-scale optimization algorithm preserves the best contrast of the color image. Secondly, the Prewitt operator is used to compute gradient directions in the neighborhood, and the differential excitation values in the vertical and horizontal directions of the neighborhood are computed and summed to obtain the edge information of the image. Finally, a support vector machine is trained on the one-dimensional feature data of images in several databases to construct the image quality evaluation model. Experiments show that the method offers higher accuracy, better applicability and strong prediction performance. (A differential-excitation sketch follows this entry.)
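A sketch of a Weber-style differential excitation feature followed by support vector regression. The neighborhood handling, histogram size and SVR settings are assumptions, and the training images and scores are synthetic placeholders rather than database images.

```python
# Weber-style differential excitation (Prewitt differences over intensity)
# summarized as a 1-D histogram and fed to support vector regression.
# Training images and scores are synthetic placeholders.
import numpy as np
from scipy.ndimage import prewitt
from sklearn.svm import SVR

def differential_excitation(img, eps=1e-6):
    """arctan of (vertical + horizontal local differences) over the pixel intensity."""
    gv = prewitt(img, axis=0)
    gh = prewitt(img, axis=1)
    return np.arctan((gv + gh) / (img + eps))

def feature_vector(img, bins=32):
    """1-D histogram of the excitation map, used as the image-level feature."""
    xi = differential_excitation(img)
    hist, _ = np.histogram(xi, bins=bins, range=(-np.pi / 2, np.pi / 2), density=True)
    return hist

rng = np.random.default_rng(6)
# progressively lower-contrast random images with placeholder "quality" scores
images = [rng.random((64, 64)) * (1.0 - 0.1 * k) + 0.05 * k for k in range(8)]
scores = np.linspace(5.0, 1.0, 8)

X = np.array([feature_vector(im) for im in images])
model = SVR(kernel="rbf", C=10.0).fit(X, scores)
print("predicted quality of first image:", round(float(model.predict(X[:1])[0]), 2))
```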