Ultra-dense networks (UDN) have been regarded as a core technology of fifth-generation communication for their excellent performance in enhancing network throughput and spectrum efficiency. Densely deployed nodes bring new challenges to communication systems in the aspects of interference suppression, energy consumption, mobility management and so on. These challenges have in turn driven considerable progress in resource allocation, network selection strategy and network modeling techniques in UDN research. The basic framework and features of the UDN are first introduced. Then the latest UDN research is classified by technology, and finally the technology trends are discussed to give a more comprehensive and clear view of this field.
A bucket-partition-based privacy-preserving top-k query processing scheme (BPTQ) for two-tiered wireless sensor networks was proposed. BPTQ protects the privacy of sensing data during storage, communication and query processing by introducing a bucket partitioning scheme and encryption techniques. Analysis and experiments show that BPTQ preserves the privacy of both the sensing data and the query result, and is more energy-efficient than existing work.
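The bucket idea behind such schemes can be illustrated with a minimal sketch (the boundaries, the storage-node data layout and the omission of the encryption step are all our assumptions, not the paper's exact protocol): each value is tagged only with the index of the range bucket it falls into, so a storage node can answer a top-k query by returning the highest buckets without seeing plaintext values.

```python
import heapq

def bucket_id(value, boundaries):
    """Index of the range bucket containing value (boundaries are ascending)."""
    for i, b in enumerate(boundaries):
        if value < b:
            return i
    return len(boundaries)

def topk_via_buckets(readings, k):
    """Sink-side top-k: fetch buckets from the highest range downwards.

    readings maps bucket id -> values held by a storage node; only the
    fetched buckets would need to be transferred and decrypted.
    """
    got = []
    for b in sorted(readings, reverse=True):
        got.extend(readings[b])
        if len(got) >= k:
            break
    return heapq.nlargest(k, got)
```

Because only whole buckets are fetched, the sink may retrieve slightly more than k items, but never needs the individual values to decide which buckets to ask for.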
A multi-hop route selection method for device-to-device (D2D) communication in cellular and ad hoc hybrid networks was proposed. The interference model and channel model of multi-hop D2D communication were given, and the interference range from cellular user equipment (UE) to D2D UE was considered as well. Based on the model, the formula for the outage probability of D2D communication was derived. The formula was then applied to multi-hop route selection, and a route selection solution was proposed. Simulation shows that the proposed solution can improve the capacity.
To achieve better results in processing incomplete information systems, the tolerance relation and the valued tolerance relation in the extended rough set model were studied, and a new valuation method was proposed. The new valued tolerance relation is affected by the correlation between the known and unknown attributes. The correlation and repellency between the attribute values of different objects were considered, and a general correlation factor was defined based on the generalized correlativity in universal logic. The new valued tolerance similarity between individual objects is related to this factor, and a concrete computing method is given. The properties of the new valued tolerance relation were proved, and finally an instance was given to illustrate its usage.
English entity linking plays an important role in the construction of semantic networks and large knowledge bases. An entity linking method based on local information and a learning-to-rank algorithm was proposed. Firstly, context information is used to expand mention names and retrieve candidate entities from Wikipedia. Secondly, various features are extracted between mentions and candidates, and the ListNet algorithm is used to rank the candidate entities, choosing the most related entity as the linked object. Finally, the NIL entities are clustered. The method achieves an F value of 0.660 on the KBP 2013 entity linking dataset, 0.092 higher than the median F value of all teams participating in the KBP 2013 entity linking task and 0.162 higher than BUPTTeam 2013, the baseline system in the experiment.
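As a rough illustration of the ranking step, the following is a minimal ListNet-style trainer: a linear scoring function trained with the cross-entropy of top-one probabilities. The feature vectors and the linear model are placeholders; the paper's actual feature set and implementation are not shown in the abstract.

```python
import math

def top1_probs(scores):
    """Top-one probability distribution used by ListNet (a softmax)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def listnet_gradient(w, features, relevance):
    """Gradient of the ListNet cross-entropy loss for one candidate list."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for x in features]
    p_model = top1_probs(scores)
    p_true = top1_probs(relevance)
    grad = [0.0] * len(w)
    for x, pm, pt in zip(features, p_model, p_true):
        for j, xj in enumerate(x):
            grad[j] += (pm - pt) * xj
    return grad

def train(features, relevance, lr=0.1, epochs=200):
    """Gradient descent on the (convex) ListNet loss for a linear scorer."""
    w = [0.0] * len(features[0])
    for _ in range(epochs):
        g = listnet_gradient(w, features, relevance)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w
```

At linking time, the candidate with the highest learned score is selected as the linked entity.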
Considering the mobility, sociality and complexity of persons, the problem of crowdsourcing task allocation based on credible interactions between users was studied. First, a new credible crowdsourcing assignment model was proposed based on social relationship cognition and community detection. Then, to reasonably assess users, the crowdsourcing preference, service quality factor, link reliability factor and region heat factor were defined, and a crowdsourcing allocation algorithm based on analytic hierarchy process (AHP) theory was given. Simulation was conducted to prove the correctness and effectiveness of the method.
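The AHP step can be sketched as follows: priority weights are derived from a pairwise comparison matrix (here via the common geometric-mean approximation), then a candidate worker is scored as the weighted sum of the four factors. The example matrix and factor values are illustrative assumptions, not the paper's data.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priorities with the geometric-mean method.

    pairwise[i][j] states how strongly criterion i dominates criterion j.
    """
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    s = sum(gm)
    return [g / s for g in gm]

def score_worker(factors, weights):
    """Weighted sum of normalized factor values
    (e.g. preference, service quality, link reliability, region heat)."""
    return sum(f * w for f, w in zip(factors, weights))
```

A consistency check of the comparison matrix (the consistency ratio) would normally precede the use of the weights; it is omitted here for brevity.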
To rapidly diagnose and locate link failures in software-defined networks (SDN), an OpenFlow-based link failure diagnosis mechanism was proposed. Making full use of SDN's decoupling of the control plane from the data plane, the mechanism collects information from every node in the network using the OpenFlow protocol, and then analyzes it for topology management, link packet loss, link bandwidth and link congestion to diagnose link failures. Simulation proves that the mechanism can accurately measure link performance metrics; moreover, with topology information, link failures can be precisely located.
To improve cyber security, a honeypot named CHoney for detecting attacks against Cisco routers was designed and implemented. CHoney uses function monitoring and data tracking to collect information about attackers, and sets up alarm rules based on attackers' different sensitive operations. Experiments show that CHoney can promptly capture attacks against Cisco routers and supports analysis of the attack process and extraction of attack code, thus effectively improving cyber security.
To characterize the local features of redundant network traffic on small time scales more accurately, a new Cauchy-Laplace multifractal wavelet model was proposed, together with an algorithm for estimating the wavelet parameters. A joint distribution function was adopted to describe the local features: Cauchy and Laplace distributions were used to obtain the parameter multiplier factors for the heavy-tailed and spike features, respectively. A threshold on the ratio of wavelet to scaling parameters, which decides how these two distributions affect redundant traffic modeling, was obtained by probability comparison. Experiments show that the proposed model characterizes the small-time-scale multifractal features of redundant network traffic well.
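The multiplicative structure that multifractal wavelet models rely on can be illustrated with a toy binary cascade. The Laplace-distributed split factor below, its parameters, and the clipping are our illustrative assumptions; they do not reproduce the paper's estimated Cauchy-Laplace mixture or its threshold rule.

```python
import math
import random

def laplace_multiplier(rng, mu=0.5, b=0.1):
    """Laplace-distributed split factor via the inverse CDF, clipped to (0, 1)."""
    u = max(min(rng.random() - 0.5, 0.499999), -0.499999)
    x = mu - b * (1.0 if u >= 0 else -1.0) * math.log(1.0 - 2.0 * abs(u))
    return min(max(x, 1e-3), 1.0 - 1e-3)

def cascade(depth, rng, draw_multiplier):
    """Binary multiplicative cascade: each interval splits its mass randomly.

    Mass is conserved at every level, so the result is a spiky but
    normalized traffic profile over 2**depth time slots.
    """
    mass = [1.0]
    for _ in range(depth):
        nxt = []
        for m in mass:
            r = draw_multiplier(rng)  # fraction of mass sent to the left child
            nxt.extend([m * r, m * (1.0 - r)])
        mass = nxt
    return mass
```

Heavier-tailed multiplier distributions produce sharper bursts at fine scales, which is the effect the Cauchy component of the model is meant to capture.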
Lossy compression of hyperspectral images based on vector quantization can achieve a high compression ratio, but suffers from high time complexity and large distortion. A new fuzzy C-means clustering (FCM) algorithm based on simulated annealing was proposed. Firstly, the dimensionality was reduced using adaptive band combination (ABC) dimensional reduction, and the number of clusters was determined with the elbow method. FCM was then combined with simulated annealing to find the optimal result quickly, after which the dimensionality was recovered. The optimized coding was obtained by defuzzifying the membership matrix U. Through this approach, the efficiency is improved and the distortion is greatly reduced.
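The FCM core used here can be sketched in a few lines. This is a simplified stand-in (1-D data, deterministic spread initialization, no simulated-annealing step, c must be at least 2; all simplifications are ours):

```python
def fcm(points, c, m=2.0, iters=50):
    """Plain fuzzy C-means on 1-D data: alternate membership and centre updates."""
    pts = sorted(points)
    # spread the initial centres evenly across the data range (c >= 2)
    centres = [pts[i * (len(pts) - 1) // (c - 1)] for i in range(c)]
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in points:
            d = [abs(x - v) + 1e-12 for v in centres]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # centre update: mean of the points weighted by u^m
        centres = [sum((u[k][i] ** m) * points[k] for k in range(len(points))) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return centres, u
```

After convergence, defuzzifying U (assigning each sample to its highest-membership cluster) yields the hard codebook indices used for coding.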
A new coverage measure model for dense wireless sensor networks in Rayleigh fading channels was proposed, which is more suitable for the actual transmission environment than existing sensing models. The spatial distribution of sensors is modeled as a Poisson point process, and the Rayleigh fading channel is adopted as the propagation channel. Through the set intersection algorithm from integral geometry, the coverage measure for k-coverage was obtained, and the probability of k-coverage and the node density guaranteeing k-coverage were derived. Simulations demonstrate the influence of the channel parameters and validate the correctness of the coverage measure model.
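A Monte-Carlo check of this kind of model is easy to sketch: sensors are dropped as a Poisson point process in a disc, each contributes a Rayleigh-faded received power, and a trial counts as k-covered when at least k sensors exceed the detection threshold at the origin. The intensity, path-loss exponent and threshold below are illustrative assumptions; the paper's derivation is analytical, via integral geometry.

```python
import math
import random

def poisson(rng, mean):
    """Knuth's Poisson sampler."""
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def coverage_prob(lam, k, alpha=4.0, p_tx=1.0, thresh=0.01,
                  radius=10.0, trials=2000, seed=1):
    """Monte-Carlo estimate of the k-coverage probability at the origin.

    Sensors form a PPP of intensity lam in a disc; Rayleigh fading makes
    the received power p_tx * h * r^(-alpha) with h ~ Exp(1).
    """
    rng = random.Random(seed)
    covered = 0
    area = math.pi * radius * radius
    for _ in range(trials):
        hits = 0
        for _ in range(poisson(rng, lam * area)):
            r = radius * math.sqrt(rng.random())  # uniform point in the disc
            h = rng.expovariate(1.0)              # Rayleigh fading power gain
            if p_tx * h * max(r, 1e-9) ** (-alpha) > thresh:
                hits += 1
        if hits >= k:
            covered += 1
    return covered / trials
```

Increasing the node density raises the coverage probability, and k-coverage is by construction no more likely than 1-coverage, which gives simple sanity checks on the estimate.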
An aggregated multicast scheme for reconfigurable networks was presented. Aggregated multicast can reduce multicast states effectively and improve scalability, but it wastes bandwidth; network coding can improve the transmission performance of multicast. To combine the advantages of both technologies, a multicast aggregation algorithm based on network coding in reconfigurable networks was proposed. Benefiting from the centralized control logic and global network view of reconfigurable networks, it can reduce both multicast states and bandwidth waste. Simulation on a random network topology model shows that, compared with the traditional multicast aggregation algorithm, it achieves a better balance between multicast states and bandwidth waste.
To cope with the deviation caused by captured nodes, a trust-based situation data fusion mechanism was presented. It establishes trust awareness rules based on historical trust or data correlation, and guarantees consistency in three stages. Firstly, in event detection, after collecting data, it detects events through a trusted-majority rule to enhance accuracy. Secondly, during data fusion, it uses a data filter rule to improve reliability. Finally, in consistency detection, it utilizes a consistency detection rule to reduce communication traffic while judging the consistency of the fusion centers. The new mechanism can suppress misrepresentation and maintain performance even when abnormal nodes outnumber normal ones, and it reduces the deviation between the real value and the fused data. Simulation verifies its reliability and validity.
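The trusted-majority and data-filter ideas can be sketched as follows. The trust values, the tolerance, and the use of a weighted median as the reference point are illustrative choices; the paper's actual rules combine historical trust with data correlation.

```python
def fuse_event(reports, trust):
    """Trusted-majority event decision: each node's vote is weighted by its trust."""
    yes = sum(t for r, t in zip(reports, trust) if r)
    no = sum(t for r, t in zip(reports, trust) if not r)
    return yes > no

def filtered_fusion(values, trust, tol=1.0):
    """Trust-weighted average after discarding values far from the weighted median.

    The weighted median is a robust reference, so a few captured nodes
    reporting wild values cannot drag the fused result away.
    """
    ranked = sorted(zip(values, trust))
    half, acc, ref = sum(trust) / 2.0, 0.0, ranked[-1][0]
    for v, t in ranked:
        acc += t
        if acc >= half:
            ref = v
            break
    kept = [(v, t) for v, t in zip(values, trust) if abs(v - ref) <= tol]
    return sum(v * t for v, t in kept) / sum(t for v, t in kept)
```

Because the filter references the weighted median rather than the mean, the mechanism tolerates misreports even when low-trust abnormal nodes are numerous.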
A decentralized multi-authority attribute-based encryption scheme, DMA-ABE, was proposed. In this scheme, multiple attribute authorities issue users' keys without any coordination by a central authority. Any linear secret sharing scheme (LSSS) access structure is supported, which is efficient, flexible and fine-grained. Proxy re-encryption is used to realize on-demand attribute revocation and authorization. A security analysis verifies that the scheme is secure against chosen-plaintext attack.
Considering the problem of event detection in censoring sensor networks, an optimal fusion rule based on three-level quantization was derived for Rician fading channels. However, the optimal fusion rule requires instantaneous channel state information, which may be too costly for resource-constrained sensor networks, so a sub-optimal alternative using only channel statistics was proposed. To further simplify the fusion rules, three simple approximations were also presented. Simulations show that the proposed censoring strategy achieves both energy saving and performance improvement. In addition, the performance analysis and comparison of the different fusion rules show that, in resource-constrained sensor networks, a tradeoff between detection performance and system resource requirements should be considered when choosing a fusion rule.
To address the efficiency and security problems of large secret sharing, a verifiable multi-use dynamic threshold large secret sharing scheme was put forward. To improve efficiency, the large secret is divided and represented as a matrix over a smaller finite field, and a two-variable one-way function is utilized; to enhance security, the threshold modification method is slightly extended and the elliptic curve discrete logarithm problem is employed. Analysis shows that the new scheme is not only highly efficient but also prevents dishonest participants from cheating. Meanwhile, the secret shadows are always kept secret and need not be renewed during reconstruction. In particular, when mutual trust varies or the number of participants belonging to an organization fluctuates, the threshold value can be adjusted in time by at least t credible participants.
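The threshold machinery underneath such schemes can be illustrated with a plain Shamir (t, n) sketch over a small prime field: each matrix element of the large secret would be shared this way. The verifiability, two-variable one-way function and dynamic-threshold features of the paper's scheme are not reproduced here.

```python
import random

P = 2_147_483_647  # a Mersenne prime serving as the small finite field

def make_shares(secret, t, n, seed=None):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t of the n shares reconstruct the secret, while t - 1 shares reveal nothing, which is the property the dynamic-threshold adjustment must preserve.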
A new access point placement optimization approach based on the lowest positioning error bound for Wi-Fi fingerprint based localization was proposed. Firstly, the theoretical relationship between positioning error and various signal distributions was analyzed using the Fisher information matrix. Secondly, the target environment was divided into several subareas based on the signal distributions, and the average error bound for the target environment was derived. Thirdly, a heuristic simulated annealing algorithm was adopted to optimize the access point locations towards the lowest average error bound. Simulation demonstrates that the proposed approach effectively improves the precision of Wi-Fi fingerprint based localization.
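The annealing step can be sketched with a stand-in objective: here, the mean distance from each grid point to its nearest AP substitutes for the derived error bound, and the cooling schedule and move sizes are arbitrary illustrative choices.

```python
import math
import random

def avg_cost(aps, grid):
    """Stand-in for the average error bound: mean distance to the nearest AP."""
    return sum(min(math.dist(g, a) for a in aps) for g in grid) / len(grid)

def anneal_ap_placement(grid, n_aps, iters=3000, t0=1.0, seed=0):
    """Simulated annealing: perturb one AP at a time, accept by Metropolis rule."""
    rng = random.Random(seed)
    xs = [g[0] for g in grid]
    ys = [g[1] for g in grid]
    aps = [(rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
           for _ in range(n_aps)]
    cost = avg_cost(aps, grid)
    for step in range(iters):
        temp = t0 * (1.0 - step / iters) + 1e-6  # linear cooling
        cand = list(aps)
        i = rng.randrange(n_aps)                 # move one AP slightly
        cand[i] = (cand[i][0] + rng.gauss(0, 0.5), cand[i][1] + rng.gauss(0, 0.5))
        c = avg_cost(cand, grid)
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            aps, cost = cand, c
    return aps, cost
```

In the paper's setting the objective evaluated at each candidate placement would be the Fisher-information-based average error bound rather than this geometric proxy.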
Utilizing the cyclic prefix characteristic of orthogonal frequency division multiplexing (OFDM) signals, the system weight vector was optimized with an improved maximum deflection coefficient, a multiuser cooperative detection algorithm was proposed, and the optimized weight vector expression was obtained. Theoretical analysis and simulation results demonstrate the remarkable improvement in detection performance of the proposed algorithm compared with the classical cooperative detection algorithm.
The coupled congestion control mechanism of the multipath transmission control protocol (MPTCP) does not consider the impact of loss rate on throughput, and traditional static path selection methods decrease the robustness of parallel data transmission. To solve these problems, a dynamic MPTCP path selection method based on Markov decision processes, with throughput improvement as the optimization target, was proposed. The method takes into account the impact of both round-trip time and packet loss rate on throughput, and chooses paths according to the optimal policy without reducing the number of paths, so that it improves throughput without reducing the robustness of parallel data transmission. Simulation shows that the proposed method balances the data streams more effectively and improves the throughput of MPTCP.
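The reward such a policy optimizes can be sketched with the well-known Mathis throughput approximation, which captures exactly the joint RTT/loss dependence the method exploits. The proportional splitting rule below is a simple one-step heuristic of our own, not the paper's MDP policy.

```python
import math

def mathis_throughput(mss, rtt, loss):
    """Mathis TCP throughput estimate: MSS / (RTT * sqrt(2p/3)) bytes per second."""
    return mss / (rtt * math.sqrt(2.0 * loss / 3.0))

def pick_path(paths, mss=1460):
    """Weight each subflow by its estimated throughput instead of dropping paths.

    paths is a list of (rtt_seconds, loss_rate) pairs; keeping every path
    active preserves the robustness of parallel transmission.
    """
    rates = [mathis_throughput(mss, rtt, max(p, 1e-6)) for rtt, p in paths]
    total = sum(rates)
    return [r / total for r in rates]
```

A full MDP treatment would iterate this reward over path-state transitions to obtain the optimal policy; the formula alone already shows why a low-RTT, high-loss path can be worse than a slower but cleaner one.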
An algebraic solution for five-precision-point path synthesis of the Stephenson-Ⅲ planar six-bar linkage was presented. The linkage is decomposed into two parts, a dyad and a four-bar linkage, which are synthesized in turn: first the dyad, then the four-bar linkage. The kinematic constraint equations are formulated based on the displacement matrix and solved with the Groebner-Sylvester (GS) hybrid approach, which finally yields a high-degree univariate equation together with all its closed-form solutions. A numerical example was provided to validate the algorithm, and the solutions were verified with SolidWorks and SAM. The method can also be used to solve other types of synthesis problems concerning planar six-bar linkages.
The target of network control in a datacenter is to satisfy tenants' network resource requirements and meet the demands of network management. An edge software-defined control framework supporting efficient management in the data center network (DCN), named FRINGE, was proposed. FRINGE builds a software-defined networking (SDN) management domain at the edge of the datacenter network, exploiting configurable network devices to improve the efficiency of network management. The feasibility of implementing FRINGE by updating the software of top-of-rack switches to support OpenFlow was analyzed, and troubleshooting in network management was illustrated to present the enhanced effectiveness of the framework. The probing overhead of FRINGE is two orders of magnitude less than that of the previous method.
Research on heterogeneous cellular networks with renewable energy supply has drawn much attention in green wireless networking. For such networks, a topology potential based user association algorithm was proposed, which trades off energy balancing and load balancing among base stations (BS). Firstly, the topology potential of users and BS is defined, taking the user traffic load, the channel capacity between users and BS, and the available renewable energy of the BS into consideration to represent the attraction of BS to users. Then, the BS utility is defined as the sum of the topology potential values of all associated users, and the user association optimization problem for BS utility fairness is modeled and solved by a proposed iterative method. Accordingly, user association is performed by information exchange and iteration between users and BS, which balances the energy and traffic among BS. Simulation results show that the proposed algorithm achieves load balancing while making the best use of the renewable energy harvested by the BS, i.e., the use of renewable energy among BS is balanced.
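The attraction function can be sketched in the style of field-based topology potential: a BS "mass" decayed by a Gaussian of distance. The way mass combines load headroom, capacity and renewable energy, the weights, and the greedy association rule are all our illustrative assumptions, not the paper's iterative fairness scheme.

```python
import math

def bs_mass(load_headroom, capacity, green_energy, w=(0.3, 0.3, 0.4)):
    """Hypothetical BS 'mass' combining normalized factors in [0, 1]."""
    return w[0] * load_headroom + w[1] * capacity + w[2] * green_energy

def topology_potential(mass, dist, sigma=2.0):
    """Attraction of one BS on a user: mass decayed by a Gaussian of distance."""
    return mass * math.exp(-(dist / sigma) ** 2)

def associate(user_dists, masses, sigma=2.0):
    """Greedy association: each user joins the BS with the largest potential."""
    choice = []
    for dists in user_dists:
        pots = [topology_potential(m, d, sigma) for m, d in zip(masses, dists)]
        choice.append(pots.index(max(pots)))
    return choice
```

Because the mass includes the available renewable energy, a green-rich BS attracts users from farther away, which is the mechanism that balances energy use alongside load.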
To effectively improve the user experience of location-based social networks, a personalized location recommendation service model was proposed. Considering users' check-in behavior features, user characteristics and the semantic features of points of interest, the model combines the ant colony algorithm with an improved hybrid collaborative filtering algorithm to improve the quality and efficiency of personalized location recommendation. Experiments show that the recall, accuracy and mean absolute error of the proposed location recommendation model are superior to those of existing methods.