We study the fundamental problem of distributed energy-aware network formation with mobile agents of limited computational power that can wirelessly transmit and receive energy in a peer-to-peer manner. Specifically, we design simple distributed protocols, consisting of a small number of states and interaction rules, for the construction of both arbitrary and binary trees. Further, we theoretically and experimentally evaluate a plethora of energy redistribution protocols that exploit different levels of knowledge in order to achieve desired energy distributions, which require, for instance, that every agent has twice the energy of the agents at greater depth in the tree network. Our study shows that without any knowledge about the network structure, such energy distributions cannot be achieved in a timely manner, which means that energy loss during the redistribution process can be high. On the other hand, only a few extra bits of information seem to be enough to guarantee quick convergence to energy distributions that satisfy particular properties, yielding low energy loss.
The health and security of wireless devices are fast gaining importance and are vital for the effective deployment of sensor networks and the Internet of Things (IoT). Any device, wired or wireless, needs a power source, and the power it consumes reflects its usage and functionality. In this context, this paper proposes a methodology to detect anomalous behavior of wireless devices by monitoring their power consumption patterns. The methodology uses Independent Component Analysis (ICA) to extract information from the device's current power consumption and generates features describing the device's state by computing the degree of similarity between the extracted information and the device's known normal behavior. Recursive Feature Elimination (RFE) is then used to select features from the generated feature vector, and classification algorithms are finally applied to detect the anomalous behavior. We validated the methodology by emulating anomalous behavior on smartphones through a custom-designed app that runs in the background while the main app is in use. Validation results indicate that the proposed methodology can identify even sparsely active malware with very high accuracy: 88% for malware active for 1% of the total time, and almost 100% for malware active for 12% of the time.
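As a rough illustration of the similarity-feature step only (not the authors' implementation, which uses ICA and RFE), the sketch below computes cosine-similarity features between an observed power trace and a set of hypothetical known-normal component signatures, and flags a trace that resembles none of them; the signatures and the 0.9 threshold are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    # degree of similarity between an observed trace and a known-normal signature
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity_features(trace, normal_signatures):
    # one feature per known-normal component signature
    return [cosine_similarity(trace, s) for s in normal_signatures]

def is_anomalous(features, threshold=0.9):
    # flag the device when the trace no longer resembles any normal signature
    return max(features) < threshold
```

In a full pipeline, these similarity features would feed the RFE selection and classification stages described in the abstract.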
The position of an access point (AP) in a WiFi network has considerable influence on the performance of the network. In this work, we consider the problem of a WiFi AP repositioning itself adaptively, based on network conditions, to deliver improved network performance. Through extensive experimental evaluation, we show that significant performance benefits can indeed be attained by allowing the AP to move intelligently. Using theoretical analysis, simulations, and experimental studies, we show that the search for the AP's optimal location can be split into two parts: a macro-search problem that minimizes the average path loss between the AP and its clients, and a micro-search problem that tackles real-time multipath fading effects. We then present Hermes, a self-positioning WiFi AP system that relies on a suite of algorithms to compute, and then move to, an optimal location within the network. Using a prototype implementation, we show that Hermes performs up to 117% better than WiFi with no AP mobility, and up to 73% better than related work that allows AP mobility.
This paper presents an application of Deep Neural Networks to vehicular sensor data. The first goal of this work is to produce artificial readings for two sensors that have missing values: CO2 and fuel consumption. A neural network was trained to produce these values, and it is able to capture the rough behavior of these two sensors, although it misses minor variations. To train a Multilayer Perceptron (MLP), we used data from the enviroCar project, which collects sensor readings from volunteers around the world. With the dataset completed by the MLP-generated values, we investigated the effect of these observations on tracing fuel-efficient routes over a graph based on the traffic network of Monchengladbach. Results show that the imputed values increase the estimated fuel consumption by an average of 15% and CO2 emissions by 17% across all routes.
In this paper, we propose QoS-based radio resource allocation in which multi-modal users can connect to multiple radio access technologies (multi-RATs) in a heterogeneous wireless access network (HWAN). Our optimization problem maximizes the system sum-rate under QoS constraints. We propose a joint radio resource allocation scheme that uses Lagrange duality and a dual-update method to find the optimal allocation of subcarriers and power in the OFDMA system and the optimal time share in the WLAN system. Numerical results show that the overall sum-throughput of our proposed optimal resource allocation algorithm for the multi-RAT HWAN outperforms the single-RAT approach that uses either a WLAN- or an OFDMA-based system. Furthermore, we analyze the impact of the minimum data rate requirement of mobile users on the convergence rate of the proposed algorithm. Matlab-based simulation results show that the convergence rate of the proposed algorithm is independent of the users' minimum data rate requirement.
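The dual-update idea can be illustrated on a toy single-system case (the paper's joint OFDMA/WLAN formulation is more involved): a subgradient update on the multiplier of a total-power budget, with the primal solved in closed form. The channel gains, step size, and log(1 + g·p) rate model below are illustrative assumptions, not the paper's setup:

```python
def waterfilling_power(gains, total_power, iters=200, step=0.05):
    # dual (subgradient) update on the multiplier lam of the power budget
    lam = 1.0
    for _ in range(iters):
        # primal step: p_k = max(0, 1/lam - 1/g_k) maximizes
        # sum_k log(1 + g_k * p_k) - lam * sum_k p_k
        p = [max(0.0, 1.0 / lam - 1.0 / g) for g in gains]
        # dual step: move lam along the constraint violation sum(p) - budget
        lam = max(1e-6, lam + step * (sum(p) - total_power))
    return p
```

The stronger channel ends up with more power, and the budget is met at convergence, which mirrors the role of the dual variables in the proposed joint allocation.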
As the demand for mobile data traffic increases and cellular network capacity reaches its theoretical limit, using the unlicensed spectrum is deemed unavoidable. Deploying the unlicensed spectrum can be a beneficial way to increase network capacity and reduce the cost of the required licensed bands. However, extending a cellular network such as Long-Term Evolution (LTE) over the unlicensed spectrum also requires reliable coexistence with the other technologies that freely access the unlicensed bands, primarily Wi-Fi. To enable equal sharing between the so-called License Assisted Access (LAA)-LTE and Wi-Fi, a number of mechanisms have been introduced through regulations imposed by standardization organizations such as 3GPP. These mechanisms mainly focus on the Medium Access Control (MAC) layer. However, link adaptation is another key factor that can enable high spectral efficiency, reduce the detrimental effect of non-synchronous communications, and increase overall performance. To this end, in this paper we investigate the contribution of link adaptation to LAA-LTE and Wi-Fi coexistence. In particular, we propose a number of enhancements to Wi-Fi link adaptation that increase its adaptability both to changing channel conditions and to LAA-LTE coexistence. Our results show that LAA-LTE can equally and fairly share the channel with Wi-Fi and that the proposed improvements can significantly increase Wi-Fi performance, whether operating alone or coexisting with LAA-LTE. Finally, we give a detailed description of the fundamental differences in link adaptation between the two technologies, shedding light on the performance differences observed.
In this paper, we propose a new continuous verification platform for smart mobile devices. To this end, we integrate gesture-based features derived from interactions with social networking apps to verify user identities without requiring a password, PIN code, or biometric input. The continuous verification subsystem of this work introduces a novel two-step approach: two accurate models work as primary and backup, and when the primary fails, the backup takes over to confirm or deny the primary model's conclusion. The false acceptance rate (FAR) and false rejection rate (FRR) achieved by the proposed two-step system are 2.54% and 1.98% respectively, compared to 3.15% and 9.13% for single-step verification. Furthermore, the proposed system improves the stability of continuous verification: we show that single-step systems are inconsistent when analyzing small feature sets or slightly varied datasets, whereas the proposed system remains consistent, maintaining a high verification rate in both cases.
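A minimal sketch of the two-step control flow (the models here are stand-in callables and the threshold is illustrative; the paper's models are trained on gesture features): the primary model decides when it is confident, and the backup takes over otherwise:

```python
def two_step_verify(primary, backup, sample, threshold=0.5):
    # primary returns (score, confident); when the primary is not
    # confident, the backup model confirms or denies its conclusion
    score, confident = primary(sample)
    if confident:
        return score >= threshold
    return backup(sample) >= threshold
```

The fallback only fires on low-confidence decisions, which is how the two-step scheme can reduce both FAR and FRR relative to a single model.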
Mobile data traffic will exceed PC Internet traffic by 2020. As the number of smartphone users and the amount of data transferred per smartphone grow exponentially, limited battery power is becoming an increasingly critical problem for mobile devices that depend on network I/O. Despite the growing body of research on power management techniques for mobile devices at the hardware layer and the lower layers of the networking stack, little work has focused on saving energy at the application layer for mobile systems during network I/O. In this paper, we propose a novel technique, called FastHLA, that achieves significant energy savings at the application layer during mobile network I/O without sacrificing performance. FastHLA is based on historical log analysis and real-time dynamic tuning of mobile data transfers. FastHLA can increase data transfer throughput by up to 10X and decrease energy consumption by up to 5X compared to state-of-the-art HTTP/2.0 transfers.
Low-power and Lossy Networks (LLNs) are commonly deployed in Internet of Things applications. An LLN consists of a considerable number of devices, also known as motes, with sensing capabilities and wireless connectivity, geographically spread over a wide area. Such devices face limitations in terms of energy, memory, and processing, and a common LLN topology is mesh-like. Communication between these devices often happens in a multi-hop fashion due to their limited transmission range; therefore, LLNs need routing protocols that are efficient in terms of energy consumption and delivery rate. This paper presents a mesh routing protocol based on the Ant Colony Optimization meta-heuristic. Our proposed scheme has been integrated into Contiki's Rime stack implementation. Simulation results demonstrate that the proposed approach achieves better performance in terms of packet delivery rate, energy consumption, and network lifetime.
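The core ACO mechanics behind such a protocol can be sketched generically (this is not Contiki/Rime code, and the energy-aware heuristic and parameter values are illustrative assumptions): next hops are chosen with probability proportional to pheromone times a heuristic, pheromone evaporates everywhere, and delivered paths are reinforced:

```python
import random

def choose_next_hop(pheromone, energy, neighbors, alpha=1.0, beta=1.0):
    # weight each neighbor by pheromone^alpha * (residual-energy heuristic)^beta
    weights = [(pheromone[n] ** alpha) * (energy[n] ** beta) for n in neighbors]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for n, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return n
    return neighbors[-1]

def reinforce(pheromone, path, evaporation=0.1, deposit=1.0):
    # evaporate everywhere, then deposit along a successfully delivered path
    for n in pheromone:
        pheromone[n] *= (1.0 - evaporation)
    for n in path:
        pheromone[n] += deposit / len(path)
```

Weighting by residual energy is one way an ACO routing scheme can trade delivery rate against network lifetime, the two metrics evaluated in the abstract.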
User mobility impacts the performance of cellular networks, yet there is a lack of results on this problem because of the difficulty of the analysis. Existing results are limited to the handover rate and the mean length of the path along which a user remains associated with the same base station. In this paper, we derive the probability distribution function of the length of the path along which a user remains associated with the same base station. We assume that the user travels along a straight path and is associated with the nearest base station, and we make the stochastic-geometry assumption that base stations are distributed over the area according to a Poisson point process. We provide simulation results as further evidence that the analysis is correct. The results of this paper may be useful in the design of cellular networks.
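A quick plausibility check of this model (a Monte Carlo sketch, not the paper's derivation or simulator) is to simulate how far a user moving on a straight line keeps its nearest base station; the finite window, fixed step size, and binomial approximation of the Poisson point process are simplifications:

```python
import math, random

def mean_association_distance(density, trials=300, seed=7):
    # Average distance a user moving along the x-axis keeps its nearest
    # base station, with stations drawn approximately as a PPP of the
    # given density in a window around the user.
    rng = random.Random(seed)
    half = 4.0 / math.sqrt(density)        # window half-size, scaled to density
    n = int(density * (2 * half) ** 2)     # expected number of stations
    step = 0.02 / math.sqrt(density)       # walk resolution, scaled to density
    total = 0.0
    for _ in range(trials):
        pts = [(rng.uniform(-half, half), rng.uniform(-half, half))
               for _ in range(n)]
        nearest = min(pts, key=lambda p: p[0] ** 2 + p[1] ** 2)
        x = 0.0
        while True:
            x += step
            cur = min(pts, key=lambda p: (p[0] - x) ** 2 + p[1] ** 2)
            if cur != nearest or x > half:
                break
        total += x
    return total / trials
```

Consistent with the analysis setting, the mean association distance shrinks as the base station density grows.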
Vehicular Ad hoc NETworks (VANETs) will play a leading role in the next generation of wireless networks, as recent advances in vehicular networking have overcome the drawbacks of traditional wired networks, providing safety and infotainment services to drivers and passengers. The simultaneous integration of heterogeneous wireless technologies will offer better services to individuals and allow service providers to deliver low-cost applications to more users; the question of how to provide seamless mobility in heterogeneous mobile networks is the driving force behind this research. In this paper, we present the current solutions for mobility and handoff management in vehicular networks, reviewed across the different network layers. We highlight the advantages and disadvantages of the discussed solutions, explore related open research issues, and point out future research directions.
Mobility plays a dual role in underwater wireless sensor networks. On one hand, it might diminish the performance of networking protocols and impair network connectivity; on the other hand, it might improve data collection and boost networking-based services. In this paper, we discuss the potential of exploiting the controlled mobility of underwater sensor nodes to improve the performance of underwater sensor network applications. We highlight the advantages of using mobility to boost several networking aspects of underwater sensor networks. Moreover, we discuss some of the challenges and provide guidelines for designing mobility-assisted networking protocols for underwater sensor networks. Finally, we point out some future research directions.
Utilizing the unlicensed spectrum for Long Term Evolution (LTE) enables service providers to significantly increase network capacity. However, LTE has to coexist with other network technologies, especially Wi-Fi. LTE employs the Licensed Assisted Access (LAA) protocol and applies the Listen Before Talk (LBT) technique to efficiently share the communication channel with Wi-Fi. Nonetheless, there is still a large throughput gap between the two technologies, with LAA-LTE considerably outperforming Wi-Fi. This unfair behavior becomes more severe in a saturation scenario, where both networks are fully loaded. In this paper, we aim to ensure fair coexistence between Wi-Fi and LTE by proportionally changing the maximum packet length for each network, applying both an optimization and a heuristic approach. The ultimate goal is to equalize either the overall throughput of Wi-Fi and LAA or the individual throughput attained by each communicating device. An admission control scheme is also studied to allocate incoming users between the two networks while trying to balance the throughput distribution. Our results show that by changing the ratio of packet lengths between Wi-Fi and LAA-LTE, a fair share of the medium can be achieved.
Wireless sensor networks play an essential role in today's Internet of Things (IoT) systems. One of the most common applications is smart indoor spaces and the detection of human activities in such areas. It is crucial for these systems to collect data, analyze it, and make decisions based on the analysis. Although this is a fairly well-defined processing pipeline, the energy a wireless sensor system consumes to ensure data fidelity and system longevity can be significant. In this paper, we first discuss the energy requirements of common IoT applications that use sensor networks, focusing on IoT systems for human activity detection in indoor spaces. We then propose a method to maximize energy efficiency in these smart spaces. Lastly, we experimentally demonstrate the effectiveness of the proposed method, both in simulation and in our real smart space deployment, which consists of a variety of sensors including ultrasonic, microwave, and vibration sensors. We demonstrate that our energy efficiency method does not affect data quality, thereby maintaining the accuracy of human activity detection, and yields up to a 30% improvement in energy efficiency.
Microgrids enable a network of distributed energy generators to sustain energy needs off the grid. A microgrid can enter islanded operational mode, a time-sensitive event that affects the costs of power generation and distribution. Detecting time-sensitive events is important because the control unit needs to be aware of changes in the grid to avoid losses in power quality and increased costs. This requires a quality-of-service (QoS)-aware data aggregation and queuing mechanism in the core of the network infrastructure to convey microgrid data to a central server (considered here as a macro base station). This paper investigates the impact of time-sensitivity-based microgrid data aggregation on message delivery under different priority and time-sensitivity levels. We propose a framework that clusters the electrical data according to time-sensitivity criteria using unsupervised machine learning, and we introduce a multi-class queuing system in the pico-cells to ensure that clustering reduces the processing time for high-priority data. The results show that the proposed approach significantly reduces the delivery delay of messages carrying time-sensitive events from the microgrid.
We consider a special type of Delay Tolerant Network (DTN), called a "Local-Ferry-based Network" (LFN), which enables communication among multiple nodes distributed over a geographic terrain. An LFN utilizes controllable special-purpose vehicles called "pigeons" to transfer messages among neighboring nodes: some (or all) nodes own these message ferries, which establish communication links among local nodes and thereby set up the whole network. One research challenge is to schedule the pigeon movement between local nodes (i.e., to decide the sequence in which a pigeon visits the local nodes) so as to achieve good networking performance (e.g., low message delay). Solving this challenge holds promise for many exciting applications, such as using drones to enable communication among segregated regions in disaster recovery, to augment or connect in-situ IoT deployments, and more. In this paper, we address the above challenge and make the following contributions. First, we analyze what it takes to optimize the scheduling algorithm for a pigeon. Second, using these results, we design multiple variants of scheduling algorithms and compare their performance with the theoretical optimal delay and with state-of-the-art algorithms through simulation experiments. Both theoretical analysis and simulation results show the efficacy of our solution: for instance, our best scheduling algorithm achieves the theoretical optimal (per-hop) message delay within a 5% margin.
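A simple baseline for the visiting-sequence problem (a nearest-neighbor heuristic, offered only as an illustration, not one of the paper's scheduling algorithms) is for a pigeon to repeatedly fly to the closest unvisited node:

```python
def greedy_schedule(home, nodes):
    # nearest-neighbor baseline: visit the closest unvisited node next
    order = []
    pos = home
    remaining = list(nodes)
    while remaining:
        nxt = min(remaining,
                  key=lambda p: (p[0] - pos[0]) ** 2 + (p[1] - pos[1]) ** 2)
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order
```

Such a distance-greedy sequence minimizes short-term travel but ignores message waiting times, which is precisely the kind of gap that delay-aware scheduling algorithms aim to close.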
We assess the security mechanisms defined in the LoRaWAN specification and present our own research to determine whether those mechanisms are sufficient. To this end, we explain typical attacks on radio-based networks. We further show which precautions are necessary so as not to undermine these mechanisms, and whether additional security measures may be needed.
The aggressive demands of future access network services translate into stringent requirements on the future backhaul infrastructure. Backhaul resources can no longer be taken for granted; rather, more focused research is required to tackle the challenge of limited resources. It is also anticipated that, to meet the expectations of 5G, access and backhaul networks will work closely together; total separation of their resources may therefore no longer be possible, and joint operation will be required. In this paper, we argue that joint access-backhaul mechanisms are becoming necessary to make the best use of these scarce resources. We introduce the problem of statically assigning resources to capacity-limited backhaul links and provide preliminary results showing the potential benefits of an intelligent, access-aware backhaul capacity optimization scheme in which a central controller optimizes backhaul capacity according to the corresponding access network requirements. Simulation results show that, with this approach, we are able to carry more traffic in a network limited by its backhaul capacity.
Cloud Radio Access Networks (C-RAN) are an evolution of the base station architecture composed mainly of two elements: the Base Band Unit (BBU) and the Remote Radio Head (RRH). The BBU is a centralized pool of computational resources that provides the signal processing and coordination functionality required by all cells, while the RRHs are lightweight radio units to which User Equipment (UE) connects via the RAN. Many advantages derive from this architecture, such as dynamic BBU-RRH associations and statistical multiplexing gains. In particular, the BBU-RRH association problem is crucial for reducing power consumption. In this paper, we focus on decentralized BBU-RRH association, which has not received attention in the literature. The aim of this work is to propose a hybrid two-stage approach that combines a game-theoretic framework for the BBU-RRH association with a centralized scheme that sets the adequate number of available BBUs. The game among RRHs is solved by two different algorithms: the first relies on the best-response algorithm (H-BR-IACA), and the second is based on a reinforcement learning method, the replicator dynamics (H-DR-IACA). We compare our solution to a centralized approach proposed in previous work; the results show that our proposition performs close to the centralized method.
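The replicator-dynamics update underlying an approach like H-DR-IACA can be shown in generic form (the fixed payoffs and learning rate below are illustrative; in the paper, payoffs come from the BBU-RRH association game): each pure strategy's probability grows in proportion to how far its payoff exceeds the population average:

```python
def replicator_step(strategy, payoffs, lr=0.1):
    # x_i <- x_i + lr * x_i * (f_i - average payoff), then renormalize
    avg = sum(x * f for x, f in zip(strategy, payoffs))
    new = [x + lr * x * (f - avg) for x, f in zip(strategy, payoffs)]
    s = sum(new)
    return [x / s for x in new]
```

Iterating this update drives the mixed strategy toward the best-paying choice, which is how a learning RRH can settle on a BBU without central coordination.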