The conventional quantitative regression analysis methods used to process aerospace big data with millions of grid points are not suited to complex flow field environments. Against this background, machine learning models are a promising tool for solving this problem. However, existing machine learning models may not simultaneously offer sufficient prediction accuracy, model interpretability, and big-data processing capability. To address this challenge, a new deep decision tree model is proposed. Built on the stacked deep forest model, hidden features are extracted and exploited through adaptive multi-granularity scanning and self-growing cascade forests. Experiments on aerospace big data show that the proposed model outperforms the random forest, extreme gradient boosting, and light gradient boosting machine models in prediction accuracy, generalization performance, and core function gain.
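For intuition, the following is a minimal sketch of a self-growing cascade forest in the spirit of the stacked deep forest described above (regression variant, omitting the multi-granularity scanning stage); the forest pair, hyperparameters, and stopping rule are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal self-growing cascade forest sketch: each level stacks the previous
# level's forest outputs onto the original features and the cascade stops
# growing once the validation error stalls.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def cascade_forest_fit_predict(X, y, X_test, max_levels=5, tol=1e-4):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    aug_tr, aug_val, aug_test = X_tr, X_val, X_test
    best_err, preds = np.inf, None
    for level in range(max_levels):
        forests = [RandomForestRegressor(n_estimators=200, random_state=level),
                   ExtraTreesRegressor(n_estimators=200, random_state=level)]
        out_tr, out_val, out_test = [], [], []
        for f in forests:
            f.fit(aug_tr, y_tr)
            out_tr.append(f.predict(aug_tr))
            out_val.append(f.predict(aug_val))
            out_test.append(f.predict(aug_test))
        # Stack forest outputs onto the original features for the next level.
        aug_tr = np.column_stack([X_tr] + out_tr)
        aug_val = np.column_stack([X_val] + out_val)
        aug_test = np.column_stack([X_test] + out_test)
        err = mean_squared_error(y_val, np.mean(out_val, axis=0))
        if err > best_err - tol:      # self-growing: stop when validation error stalls
            break
        best_err, preds = err, np.mean(out_test, axis=0)
    return preds
```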
A convolutional neural network training algorithm based on federated learning is proposed for hybrid beamforming in intelligent reflecting surface assisted millimeter wave massive multiple input multiple output systems. In the multi-user communication system, codebooks are designed, and an exhaustive search algorithm is used to find the analog beamforming matrix and intelligent reflection matrix that maximize the sum rate; these matrices are set as the training data labels. Then, based on the federated learning framework, the convolutional neural network is trained locally to map the channel matrix to the analog beamforming and intelligent reflection matrices. The simulation results verify the feasibility of convolutional neural network training based on federated learning. Meanwhile, comparisons among communication scenes with the proposed, random, and no intelligent reflection matrix demonstrate the performance gain of the proposed algorithm.
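A minimal sketch of the federated-averaging step assumed by this scheme follows: each client trains a local copy of the CNN on its own channel samples and only model weights are exchanged and averaged. The function names, weight dictionary format, and placeholder local trainer are illustrative assumptions.

```python
# Federated averaging (FedAvg-style) sketch over per-client CNN weight dicts.
import numpy as np

def local_train(weights, local_channels, local_labels, lr=1e-3, epochs=1):
    """Placeholder for local CNN training that maps channel matrices to the
    analog beamforming / intelligent reflection matrix labels."""
    # ... run SGD on the local data here and return the updated weights ...
    return {k: w.copy() for k, w in weights.items()}

def federated_round(global_weights, clients):
    """One communication round: broadcast, local training, weighted averaging."""
    updates, sizes = [], []
    for channels, labels in clients:
        updates.append(local_train(global_weights, channels, labels))
        sizes.append(len(labels))
    total = float(sum(sizes))
    # Average each weight tensor, weighting clients by their local sample count.
    return {k: sum(n / total * upd[k] for n, upd in zip(sizes, updates))
            for k in global_weights}
```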
For high-speed mobile multiple-input multiple-output orthogonal frequency division multiplexing systems, a low-complexity time-varying channel prediction method combining the back propagation (BP) neural network with the basis expansion model is proposed. To reduce the computational complexity, the basis expansion model is employed to model the time-varying channel, and the channel information at a future time is obtained through offline training and online prediction of the channel basis coefficients. During offline training, the proposed method first acquires the channel basis coefficients from the received pilots. Then, to obtain the channel prediction network model, training samples are constructed and fed into the BP neural network for training. During online prediction, based on the trained network model and the historical basis coefficient estimates, the proposed method obtains the time-domain channel at the future time. The simulation results show that the proposed method has lower computational complexity and higher prediction accuracy than existing methods, making it suitable for the efficient acquisition of time-varying channel information in future high-speed mobile environments.
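To make the basis expansion step concrete, here is a rough sketch: within one block the time-varying channel tap is modeled as a weighted sum of basis functions, and the basis coefficients estimated from pilots become the quantities the BP network learns to predict. The complex-exponential basis, the block length N, and the order Q are assumptions for illustration.

```python
# Complex-exponential basis expansion model (CE-BEM) sketch:
# h(n) ≈ sum_q c_q * b_q(n), with c_q estimated by least squares.
import numpy as np

def ce_bem_basis(N, Q):
    """Basis matrix B of shape (N, Q) with complex-exponential columns."""
    n = np.arange(N)[:, None]
    q = np.arange(Q)[None, :] - Q // 2
    return np.exp(1j * 2 * np.pi * q * n / N)

def estimate_coeffs(h_samples, B):
    """Least-squares fit of the basis coefficients from pilot-based channel samples."""
    c, *_ = np.linalg.lstsq(B, h_samples, rcond=None)
    return c

def reconstruct_channel(c, B):
    """Recover the time-domain channel over the block from the coefficients."""
    return B @ c

# Historical coefficient vectors (real and imaginary parts stacked) would then
# form the training samples fed to the BP network for one-step-ahead prediction.
```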
In order to improve the computational power and efficiency of graph neural networks, the problems of large memory requirements and random memory access in the training process of graph neural networks are studied, and a high-performance graph neural network accelerator based on a heterogeneous architecture is proposed. The heterogeneous platform combines a central processor with a field programmable gate array and mainly consists of a calculation module and a buffer module. Different hardware architectures are designed to implement the hybrid computing module, while the buffer module provides buffering for the input node features and intermediate variables. Targeting the two mixed execution modes of irregular aggregation and regular update, the calculation module is improved, and the accelerator is optimized for data parallelism and redundancy removal. Experiments on the Ultra96-V2 hardware platform show that the designed graph neural network accelerator not only improves the system performance but also significantly reduces the power consumption.
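As a software reference for the two execution phases the accelerator pipelines, the sketch below shows a GCN-style layer: the irregular, sparse neighbor aggregation followed by the regular, dense feature update. The layer form and matrix names are illustrative, not the accelerator's exact dataflow.

```python
# One GCN-style layer: sparse aggregation (irregular memory access) followed by
# a dense weight multiplication (regular, well suited to FPGA compute arrays).
import numpy as np
from scipy.sparse import csr_matrix

def gcn_layer(adj: csr_matrix, features: np.ndarray, weight: np.ndarray):
    aggregated = adj @ features              # aggregation over graph structure
    return np.maximum(aggregated @ weight, 0.0)   # update + ReLU
```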
A medium access control (MAC) protocol using the dueling deep double Q-network (dueling-DDQN) algorithm is proposed to maximize the system throughput in rapidly changing wireless communication networks. The proposed protocol applies the Q-value calculation method of the dueling deep Q-network to the deep double Q-network, thereby combining the advantages of both. Thus, it can not only increase the accuracy of the Q-value estimation and improve convergence, but also alleviate the overestimation problem, which improves the overall performance and robustness of the system. The simulation results demonstrate that, when coexisting with the time division multiple access protocol and the ALOHA protocol in wireless communication systems, the proposed protocol effectively reduces the convergence time and increases the total system throughput compared with the traditional deep Q-network MAC protocol.
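A condensed sketch of the Q-value computation such a combination implies is given below: a dueling head (value and advantage streams) evaluated with the double-DQN target rule, where the online network selects the action and the target network evaluates it. Layer sizes and the transition format are illustrative assumptions.

```python
# Dueling Q-network head combined with the double-DQN target computation.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, obs):
        h = self.feature(obs)
        adv = self.advantage(h)
        # Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)
        return self.value(h) + adv - adv.mean(dim=1, keepdim=True)

def double_dqn_target(online, target, reward, next_obs, gamma=0.99):
    with torch.no_grad():
        best_action = online(next_obs).argmax(dim=1, keepdim=True)   # selection: online net
        next_q = target(next_obs).gather(1, best_action).squeeze(1)  # evaluation: target net
        return reward + gamma * next_q
```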
When a deep neural network model is deployed on a terminal device, it faces the problem of insufficient computing capability and storage resources. Model pruning provides an effective model compression method, which can reduce the number of parameters and the computational complexity while preserving the accuracy of the model. However, traditional pruning methods mostly rely on prior knowledge to set the pruning rate and pruning criterion; they ignore the pruning sensitivity and parameter distribution differences across the layers of a deep model and lack fine-grained optimization. To solve this problem, a filter pruning scheme based on reinforcement learning is proposed to minimize the accuracy loss of the pruned model while satisfying the target sparsity. In the proposed scheme, the parameterized deep Q-network algorithm is used to solve the constructed nonlinear optimization problem with mixed variables. Experimental results show that the proposed scheme can select a suitable pruning criterion and pruning rate for each layer and reduce the accuracy loss of the pruned model.
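The pruning action itself, once an agent has chosen a rate and a criterion for a given convolutional layer, can be sketched as below; the 0.3 rate and the L1-norm criterion here are illustrative assumptions rather than the scheme's learned choices.

```python
# L1-norm filter pruning of one convolutional layer at a given pruning rate.
import torch
import torch.nn as nn

def l1_filter_mask(conv: nn.Conv2d, prune_rate: float) -> torch.Tensor:
    """Return a 0/1 mask over output filters, pruning the lowest-L1-norm ones."""
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # one score per filter
    n_prune = int(prune_rate * scores.numel())
    mask = torch.ones_like(scores)
    if n_prune > 0:
        mask[scores.argsort()[:n_prune]] = 0.0
    return mask

conv = nn.Conv2d(64, 128, kernel_size=3)
mask = l1_filter_mask(conv, prune_rate=0.3)
conv.weight.data *= mask.view(-1, 1, 1, 1)   # zero out the pruned filters (soft pruning)
```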
A deterministic cross-domain transmission architecture and a deep reinforcement learning (DRL)-based scheduling algorithm are proposed to address the cross-domain transmission problem in time-sensitive networks. The deterministic cross-domain transmission architecture is a wide-area deterministic networking architecture that integrates cyclic queuing and forwarding with deterministic Internet Protocol. By defining a cross-domain cycle mapping function, a time-slot-based deterministic transmission channel is established to ensure bounded transmission delay. The DRL states, actions, and rewards are defined in the DRL-based time-slot and path joint online scheduling algorithm, and the scheduling objective is to maximize the total earning value of all scheduled flows with different earning values. Experimental results demonstrate that the proposed cross-domain transmission architecture and scheduling algorithm can ensure end-to-end deterministic transmission, significantly improve the earning value of traffic scheduling, and guarantee the transmission of important traffic.
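For intuition only, the snippet below is a simplified sketch of the kind of cross-domain cycle mapping such an architecture defines: a frame sent in cycle c_a of domain A is mapped to the earliest cycle of domain B that starts after the frame has fully arrived, which keeps the per-hop delay bounded by a whole number of cycles. The function, its parameters, and the example values are all assumptions, not the paper's mapping.

```python
# Simplified cross-domain cycle mapping sketch for cyclic queuing and forwarding.
def map_cycle(c_a, cycle_a, cycle_b, link_delay, guard=0.0):
    """Map a sending cycle index in domain A to a receiving cycle index in domain B."""
    arrival = (c_a + 1) * cycle_a + link_delay + guard   # frame fully received by this time
    c_b = int(arrival // cycle_b) + 1                    # next full cycle in domain B
    bound = (c_b + 1) * cycle_b - c_a * cycle_a          # worst-case delay contribution
    return c_b, bound

print(map_cycle(c_a=10, cycle_a=50e-6, cycle_b=125e-6, link_delay=20e-6))
```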
For the scenario in which unmanned aerial vehicles assist downlink communication for ground users, an optimization problem involving unmanned aerial vehicle trajectory constraints, power constraints, and user access scheduling is formulated with the objective of maximizing the minimum average achievable rate of the users. Considering the coupling of the constraints and the non-convexity of the optimization problem, the constructed problem is modeled as a Markov decision process, and a trajectory planning and power control algorithm for unmanned aerial vehicles based on the deep deterministic policy gradient (DDPG) is proposed. Simulation results show that the proposed algorithm can effectively improve the minimum average achievable rate of the users.
In federated learning, multiple data owners can jointly train a high-quality model, which effectively solves the problem of data silos and protects the privacy of user data. However, current federated learning suffers from problems such as model leakage, unverifiable training results, and high user computation and communication costs. To solve these problems, a privacy-enhanced and verifiable secure aggregation scheme for federated learning is proposed, which simultaneously realizes the privacy protection of user data and model parameters and the verifiability of the training results. The proposed scheme greatly reduces the computational and communication overhead of users. The scheme uses a homomorphic encryption algorithm to handle floating-point operations, and verifies the correctness of the aggregation results based on a linear homomorphic hash function. Even if some users go offline, the final aggregation results are not affected. The experimental results show that the scheme incurs less computational overhead and effectively improves the test performance of the trained model.
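For intuition about why the server can learn only the aggregate, here is a toy additive-masking aggregation used in place of the paper's homomorphic-encryption construction (a deliberate simplification): pairwise random masks cancel in the sum, so individual updates stay hidden. Verification, dropout handling, and all names and shapes are omitted or assumed.

```python
# Toy pairwise-masking secure aggregation: masks cancel when updates are summed.
import numpy as np

def masked_updates(updates, seed=0):
    rng = np.random.default_rng(seed)
    n = len(updates)
    masks = {(i, j): rng.normal(size=updates[0].shape)
             for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        m = sum(masks[(i, j)] for j in range(i + 1, n)) - \
            sum(masks[(j, i)] for j in range(i))
        masked.append(u + m)                      # what each user actually uploads
    return masked

updates = [np.full(4, v, dtype=float) for v in (1.0, 2.0, 3.0)]
# Recovers the true sum [6, 6, 6, 6] (up to floating-point rounding).
print(np.sum(masked_updates(updates), axis=0))
```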
For the problem of secure storage and trustworthy sharing of massive data, a cloud-chain collaborative data sharing access control method is proposed. Firstly, a system model based on blockchain and cloud storage is constructed. The ciphertext-policy attribute-based encryption algorithm is improved, and a many-to-many user-attribute association policy is established to realize access control over data ciphertext and address the problem of honest-but-curious cloud servers. Then, an efficient public-key scheme that supports both fine-grained access control and multi-keyword search over encrypted data is proposed. The proposed method is proved secure under the decisional Diffie-Hellman assumption in the random oracle model, and the experimental results show that the proposed method is more efficient in the index generation and keyword matching stages.
To address the low fault diagnosis accuracy of wind turbine gearboxes, a gear fault diagnosis method based on generative adversarial networks (GAN) optimized by logistic regression and a genetic algorithm is proposed. First, the input signal is quantization-encoded, and roulette-wheel selection and allelic crossover are applied to the codes. Then, the feature representation vector is reconstructed by replacing the allelic coding string through least-squares mutation and fed into the convolutional network for the second iteration. Finally, an auxiliary classifier is built to characterize the decision boundary with logistic regression, and the discriminator performs classification and diagnosis based on the regression curves. Results show that the fault diagnosis accuracy of this method reaches 99.72%, which proves that the method can augment the sample data and improve the diagnosis accuracy.
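A compact sketch of the genetic-algorithm operators referred to above (roulette-wheel selection and single-point crossover on binary-quantized codes) is shown below; the fitness values, code length, and population size are placeholders, not the paper's exact construction.

```python
# Roulette-wheel selection and single-point crossover on binary codes.
import numpy as np
rng = np.random.default_rng(0)

def roulette_select(population, fitness, k):
    """Sample k parents with probability proportional to fitness."""
    prob = fitness / fitness.sum()
    idx = rng.choice(len(population), size=k, p=prob)
    return population[idx]

def crossover(parent_a, parent_b):
    """Single-point crossover of two equal-length binary codes."""
    point = rng.integers(1, len(parent_a))
    return np.concatenate([parent_a[:point], parent_b[point:]])

population = rng.integers(0, 2, size=(20, 32))   # 20 binary-coded individuals
fitness = rng.random(20) + 1e-6                  # placeholder fitness values
parents = roulette_select(population, fitness, k=2)
child = crossover(parents[0], parents[1])
```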
Existing gait recognition methods fail to make full use of frame-level feature information and spatio-temporal feature information. To solve this issue, a novel frame-temporal gait network based on a frame-spatio-temporal double-branch architecture is proposed, which includes two branches: one for frame-level feature extraction and the other for spatio-temporal feature extraction. A temporal integration module is then proposed to bridge the two branches, which enables the information of the frame-level feature extraction branch to be integrated with the spatio-temporal feature extraction branch multiple times and enhances the capability of feature representation. The proposed gait recognition network is evaluated on the popular gait recognition datasets CASIA-B and OU-MVLP. The experimental results verify the effectiveness of the proposed method under both normal walking and complex conditions.
To address the inability of subspace clustering algorithms to handle nonlinear data and noise in image segmentation tasks, an image segmentation algorithm based on nonconvex low-rank subspace clustering is proposed. First, the adaptive morphological reconstruction seed segmentation method is used to perform a point-by-point maximum operation on the gradient image, and the original image is pre-segmented into superpixel images of different area sizes, which remedies the over-segmentation defect of superpixel segmentation methods. Then, the color features of the superpixel blocks are extracted, stacked into a data matrix, and fed into the multi-kernel subspace clustering algorithm. Next, the coefficient matrix is solved according to the subspace representation, and the affinity matrix is constructed. Finally, the affinity matrix is input to spectral clustering to obtain the final segmentation results. The results of comparison experiments on public datasets show that the proposed method achieves the best clustering performance and segmentation effect.
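The last two steps of this pipeline, building an affinity matrix from a self-representation coefficient matrix and feeding it to spectral clustering, can be sketched as below. The coefficient matrix here is random stand-in data, not the output of the paper's nonconvex low-rank solver, and the cluster count is an assumption.

```python
# Affinity construction from a subspace coefficient matrix + spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

C = np.random.rand(100, 100)                    # stand-in subspace coefficient matrix
affinity = 0.5 * (np.abs(C) + np.abs(C).T)      # symmetrized affinity W = (|C| + |C|^T) / 2
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
```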
Broadband metamaterial absorbers designed by traditional methods require a lot of computational resources and time. To solve this issue, a topology optimization method is proposed to design broadband metamaterial absorbers, in which dynamic adjustment of the bandwidth is achieved by changing the surface topology. A discrete coding method is used to encode the topology structure, and a binary particle swarm optimization algorithm combining dynamic weights and Gaussian perturbations is used to optimize the topology to achieve high absorption in any frequency band of C-Ku. The simulation results show that the absorption of the two metamaterial absorbers designed in this paper is higher than 90% at 8.2~16.6 GHz and 7~13 GHz, respectively. In particular, in the X-band, the average absorption of one of the metamaterial absorbers is 96.44%. The proposed algorithm alleviates premature convergence and weak local search ability in the later stage, and can be applied to design metamaterial absorbers effectively. Compared with traditional methods, the topology optimization method offers on-demand design and a design process free of manual intervention, and it has broad application prospects in related fields.
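A bare-bones sketch of binary PSO with a linearly decreasing inertia weight and a Gaussian perturbation of the velocity, in the spirit of the strategy above, is given below; the absorber fitness is replaced by a dummy objective, and all constants are assumptions.

```python
# Binary PSO sketch with dynamic inertia weight and Gaussian velocity perturbation.
import numpy as np
rng = np.random.default_rng(1)

def fitness(bits):                      # placeholder for the simulated absorption score
    return bits.sum()

def binary_pso(n_particles=30, dim=64, iters=100, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    x = rng.integers(0, 2, size=(n_particles, dim)).astype(float)
    v = rng.normal(0, 1, size=(n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters          # linearly decreasing inertia weight
        v = (w * v + c1 * rng.random((n_particles, dim)) * (pbest - x)
                   + c2 * rng.random((n_particles, dim)) * (gbest - x)
                   + rng.normal(0, 0.1, size=v.shape))   # Gaussian perturbation
        x = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-v))).astype(float)
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest

best_topology = binary_pso()
```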
Based on conformal geometric algebra, a novel geometric modeling and computing method for the displacement analysis of a nine-link Baranov truss is proposed. Within the framework of conformal geometric algebra, using the representations of basic geometric elements such as the point, sphere, plane, and point pair together with the signed area of a triangle, two constraint equations are formulated by intersection, dissection, and dual operations. Then, a 54th-degree univariate equation without extraneous roots or root loss is derived by one-step resultant elimination. The advantage of the proposed method lies in that the derivation of the two constraint equations is coordinate-free and the elimination procedure is simplified owing to the reduced number of constraint equations. Finally, a numerical example is given to validate the correctness of the method. The proposed method provides new insight into the theoretical solution of the displacement analysis of other nine-link Baranov trusses.
Based on the idea of communication and control integration, cooperative flying unmanned aerial vehicle networks are studied, and a design method for leader-follower cooperative control together with an analysis method for communication capacity is proposed. First, a leader is selected and controlled to guide the flight of the whole network, while all followers achieve motion coordination via distributed interaction even when only some of the followers obtain the leader's state information; this process is simple to analyze and apply. Then, the communication capacity of the network links of the cooperative flying unmanned aerial vehicle network is analyzed based on the Rician fading channel, and a lower bound on the link communication capacity during the convergence of the cooperative flying motion is derived. The simulation results show that the formation movements are controlled according to the leader-follower cooperative control law of the network, and that there is a clear non-zero lower bound on the link communication capacity between unmanned aerial vehicles, which is consistent with the theoretical analysis.
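An illustrative discrete-time leader-follower consensus law consistent with this description is sketched below: each follower updates its state toward the neighbors it can hear, and only a subset of followers receives the leader state directly. The interaction topology, gains, and single-integrator dynamics are simplifying assumptions, not the paper's control law.

```python
# Leader-follower consensus sketch: only follower 0 hears the leader directly.
import numpy as np

A = np.array([[0, 1, 0],        # follower-to-follower adjacency (who hears whom)
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
b = np.array([1, 0, 0], dtype=float)     # leader pinning gains
k, dt = 0.5, 0.1

x = np.array([5.0, -2.0, 8.0])           # follower states (e.g., one position axis)
leader = 0.0
for _ in range(500):
    # u_i = k * sum_j a_ij (x_j - x_i) + k * b_i * (leader - x_i)
    u = k * (A @ x - A.sum(axis=1) * x) + k * b * (leader - x)
    x = x + dt * u
print(x)                                 # followers converge toward the leader state
```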
In complex environments, the point cloud obtained directly by an RGB-D camera is often affected by external conditions such as lighting, transparent objects, and shadows, resulting in large-scale loss of point cloud data and even failure to represent the real 3D features of the object. Incomplete point clouds affect many important applications in computer vision, such as object detection and path planning. To solve this issue, a method is proposed to complete the point cloud using multimodal data such as the semantic information of the RGB image. The method first uses a semantic segmentation network based on the encoder-decoder structure to obtain the semantic segmentation results of the RGB image, then takes the RGB image, the semantic segmentation results, and the incomplete sparse depth map as the input of the algorithm, and outputs the completed point cloud. Extensive experiments in complex scenes show that the method performs well in both completion quality and operational efficiency.
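A small sketch of the back-projection step implied by this pipeline follows: turning a (completed) depth map plus per-pixel semantic labels into a labeled 3D point cloud using pinhole camera intrinsics. The intrinsic values are placeholders.

```python
# Back-project a depth map and per-pixel labels into a labeled point cloud.
import numpy as np

def depth_to_pointcloud(depth, labels, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                               # skip missing depth pixels
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)            # N x 3 coordinates
    return points, labels[valid]                    # keep the semantic label per point
```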
To address the trajectory privacy security and individualized needs of mobile terminal users in location-based services, a k-anonymous trajectory privacy protection scheme based on individualized differential privacy is proposed. The proposed scheme first allocates different privacy budgets according to individual differences, then repeatedly adds Laplace noise to the user trajectory using differential privacy technology to generate 2k noise trajectories. After that, it uses a trajectory similarity measure to determine the optimal k-1 noise users, which form a k-anonymous user group together with the real user, and then randomly selects a proxy user to replace the real user in issuing location-based service requests, thereby protecting the privacy of user identities and trajectories. Security analysis shows that the scheme satisfies security features such as anonymity, unforgeability, and resistance to counterfeiting attacks. Simulation results show that the scheme not only has an obvious advantage in privacy protection but also has high execution efficiency.
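A compact sketch of the noise-trajectory generation and selection steps described above is given below: 2k Laplace-perturbed copies of the real trajectory are generated under a privacy budget, and the k-1 most similar ones are kept for the anonymous group. The Euclidean distance used here as the similarity measure, the sensitivity, and all parameter values are assumptions.

```python
# Generate 2k Laplace-noised trajectories and keep the k-1 most similar ones.
import numpy as np
rng = np.random.default_rng(0)

def noisy_trajectories(traj, epsilon, sensitivity, k):
    scale = sensitivity / epsilon                    # Laplace mechanism scale b
    return [traj + rng.laplace(0.0, scale, size=traj.shape) for _ in range(2 * k)]

def pick_anonymous_group(traj, candidates, k):
    dists = [np.linalg.norm(c - traj) for c in candidates]
    order = np.argsort(dists)[:k - 1]                # k-1 closest noise trajectories
    return [candidates[i] for i in order] + [traj]   # k-member anonymous group

traj = np.cumsum(rng.normal(size=(50, 2)), axis=0)   # toy trajectory of 50 locations
group = pick_anonymous_group(traj, noisy_trajectories(traj, epsilon=1.0,
                                                      sensitivity=1.0, k=5), k=5)
```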
To improve the accuracy of recommendation algorithms under privacy protection, a logistic regression matrix factorization recommendation algorithm with differential privacy protection is proposed. The algorithm first converts the matrix factorization of implicit data into a classification problem and models it in a probabilistic way. Then, the sigmoid function is used to nonlinearly transform the prediction score, and the original matrix factorization problem is converted into two successive optimization problems over the user latent factors and the item latent factors. After that, random noise perturbation is added to the objective function so that the algorithm satisfies differential privacy. Experiments are carried out on the MovieLens-100K, MovieLens-1M, and Yahoo Music datasets. Compared with existing related algorithms, the proposed algorithm improves the F1 score by 9.29%, 7.40%, and 3.61%, respectively. Theoretical analysis and experimental results show that the algorithm can effectively guarantee the accuracy of the recommendation results while protecting users' implicit feedback data, and has good application value.
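A schematic single update step for this logistic matrix-factorization view is sketched below: the implicit rating is modeled as sigmoid(u_i · v_j), and Laplace noise added to the gradient stands in for the objective perturbation that yields differential privacy. Dimensions, learning rate, regularization, and noise scale are illustrative assumptions.

```python
# One noisy SGD step of logistic matrix factorization on an implicit-feedback pair.
import numpy as np
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def private_sgd_step(U, V, i, j, r_ij, lr=0.05, lam=0.01, noise_scale=0.1):
    err = sigmoid(U[i] @ V[j]) - r_ij                # r_ij in {0,1}: implicit feedback
    grad_u = err * V[j] + lam * U[i] + rng.laplace(0, noise_scale, U.shape[1])
    grad_v = err * U[i] + lam * V[j] + rng.laplace(0, noise_scale, V.shape[1])
    U[i] -= lr * grad_u
    V[j] -= lr * grad_v

U, V = rng.normal(size=(100, 16)), rng.normal(size=(200, 16))
private_sgd_step(U, V, i=3, j=7, r_ij=1.0)
```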