Mobile data traffic has been increasing explosively in recent years due to tremendous growth in mobile users' demand for multimedia content. However, current mobile networking technologies, including network architectures and data transmission techniques, cannot support the anticipated traffic load without degrading mobile users' quality of service (QoS) or quality of experience (QoE). Many ongoing research efforts target technologies for fifth-generation (5G) mobile networks and beyond to overcome these limitations. Content-centric edge caching has recently emerged as a promising technique to satisfy the demand for popular multimedia content that is requested repeatedly by multiple mobile users over a period of time. This talk will motivate and explore the design principles and goals of content-centric edge caching. We shall present a generalized architectural framework as a basis for differentiating edge caching designs. We shall present three trace-driven case studies, involving a single-tier cellular network, a multi-tier heterogeneous cellular network, and a two-tier caching scheme combining a cellular network and device-to-device communications, to illustrate the optimization of some design alternatives. We shall conclude the talk with a discussion of research opportunities and challenges in content-centric edge caching.
Since the development of the 4G LTE standards around 2010, the research communities in both academia and industry have been brainstorming to predict the use cases and scenarios of the 2020s, to determine the corresponding technical requirements, and to develop the enabling technologies, protocols, and network architectures for the next-generation (5G) wireless standardization. This exploratory phase is winding down as the 5G standards are currently being developed with a scheduled completion date of late 2019; 5G wireless networks are expected to be deployed globally throughout the 2020s. As such, it is time to initiate a similar brainstorming endeavour, followed by the technical groundwork, towards the subsequent generation (6G) of wireless networks of the 2030s. One reasonable starting point in this new 6G discussion is to reflect on the possible shortcomings of the 5G networks to be deployed. 5G promises to provide connectivity for a broad range of use cases in a variety of vertical industries; after all, this rich set of scenarios is indeed what distinguishes 5G from the previous four generations. Many of the envisioned 5G use cases require challenging target values for one or more of the key QoS elements, such as high rate, high reliability, low latency, and high energy efficiency; we refer to the presence of such demanding links as super-connectivity.
Wireless communication systems are irreversibly changing our lives. Today's wireless networks are extremely complex systems, and they are evolving towards even more complex ones because of the increasing diversity and heterogeneity of applications, devices, quality requirements, and standards. At the same time, the resources used by wireless communications are either naturally limited (e.g., time, spectrum) or need to be optimally exploited (e.g., energy, computation, infrastructure). Hence, traditional resource allocation approaches based on optimization and heuristic techniques are starting to show their limitations. Those approaches are often centrally managed, reactive, and non-adaptive, and they require a huge amount of control data exchange. There is therefore a need for new approaches that provide adaptive, proactive, and self-organized networking solutions. Thanks to the availability of increasingly powerful computing systems and of huge amounts of data that can be efficiently exploited in wireless networks, we envision the employment of machine learning techniques to achieve intelligent, adaptive, resource-efficient, and data-driven future wireless networks. This talk discusses how wireless network designers and operators can adopt advanced machine learning techniques to add predictive and adaptive intelligence to the system. The state of the art of machine learning in wireless networks will be discussed in depth, and some interesting issues for new research avenues will be identified.
Multimedia data traffic will increase explosively in vehicular networking scenarios as current advances in vehicular communication technologies and connected cars penetrate the market. However, current data dissemination protocols, and even the host-centric content delivery paradigm, will not support the anticipated traffic load without degrading vehicular applications' QoS/QoE. Recent research efforts propose the information-centric networking paradigm as a viable solution for handling multimedia content distribution in vehicular networks and in connected and autonomous vehicles [1-10].
Many applications in Wireless Sensor Networks (WSNs) require collecting massive amounts of data in a coordinated manner. To that end, a many-to-one (convergecast) communication pattern is used in tree-based WSNs. However, traffic near the sink node usually becomes the network bottleneck. In this work, we propose an extension to the 802.15.4 standard that enables wider-bandwidth channels. Then, we measure the speed of data collection in a tree-based WSN with radios operating in these wider-bandwidth channels. Finally, we propose and implement Funneling Wider Bandwidth (FWB), an algorithm that minimizes the schedule length of the network. We prove that the algorithm is optimal with regard to the number of time slots. In our simulations and experiments, we show that FWB achieves a higher average throughput with a smaller number of time slots. This approach could also be adapted to other relevant emerging standards, such as WirelessHART, ISA 100.11a, and IEEE 802.15.4e TSCH.
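The funneling effect that motivates FWB can be illustrated with a toy convergecast simulator. This is not the FWB algorithm itself (the abstract does not give its details); it is a minimal greedy single-channel schedule, assuming one packet per node and half-duplex radios, that shows how slots accumulate near the sink:

```python
def depth(n, parent):
    """Hop distance from node n to the sink (the node whose parent is None)."""
    d = 0
    while parent[n] is not None:
        n, d = parent[n], d + 1
    return d

def convergecast_schedule(parent):
    """Greedy single-channel TDMA convergecast on a tree.
    parent: dict mapping node -> parent (the sink has parent None).
    Each non-sink node starts with one packet; per slot a node may send
    one packet to its parent if neither endpoint is already busy.
    Returns the number of slots until every packet reaches the sink."""
    sink = next(n for n in parent if parent[n] is None)
    buf = {n: (0 if n == sink else 1) for n in parent}
    total = len(parent) - 1
    delivered = slots = 0
    while delivered < total:
        busy = set()
        # drain nodes closest to the sink first: the sink's single
        # radio is the bottleneck (the "funnel")
        for n in sorted(buf, key=lambda x: depth(x, parent)):
            p = parent[n]
            if p is None or buf[n] == 0 or n in busy or p in busy:
                continue
            buf[n] -= 1
            if p == sink:
                delivered += 1
            else:
                buf[p] += 1
            busy.update((n, p))
        slots += 1
    return slots
```

On a two-hop line the greedy schedule needs 3 slots; the sink's radio can accept only one packet per slot, which is exactly the funnel that wider-bandwidth channels near the sink shorten.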
We study tradeoffs between aggregation convergecast and energy harvesting in wireless sensor networks. Existing aggregation convergecast algorithms do not capture the volatile nature of energy reserves of energy harvesting nodes. We therefore propose, and evaluate, new scheduling schemes to address this gap. We also introduce metrics to capture the impact of the inevitable energy depletion on the quantity of aggregated data received at a sink node. Specifically, we consider node behaviors where the inability to perform prompt communication due to energy depletion results in a reduction of the sampling rate (including for aggregated data) and, if it persists, loss of data. The performance evaluation is based on heat flow data collected from an apartment building in the Canadian North. The collected heat flow data are used to approximate the energy harvesting output of thermoelectric harvesters.
In Wireless Sensor Networks (WSNs), power consumption is an important aspect of routing protocol design. Compared to the other components of a sensor node, the radio transmitter is responsible for most of the consumption. One way to optimize energy consumption is to use energy-aware protocols, which take the residual energy information (i.e., remaining battery power) into consideration when making decisions, providing energy efficiency through careful management of energy consumption. In this work, we go further and propose a new routing protocol that uses not only the residual energy information but also the available energy from renewable sources such as solar cells. We present REBORN (Renewable Energy-Based Routing), an energy-aware geographic routing algorithm capable of managing both the residual and the available renewable energy. Our results clearly show the advantages and the efficiency of REBORN when compared to other energy-aware approaches.
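As an illustration of the general idea, the next-hop choice of an energy-aware geographic protocol can be sketched as below. The weights `w_res` and `w_ren` and the score itself are hypothetical choices for illustration; the abstract does not specify REBORN's actual metric:

```python
import math

def advance(node, neighbor, dest):
    """Geographic progress toward the destination made by choosing neighbor."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(node, dest) - dist(neighbor, dest)

def choose_next_hop(node_pos, dest_pos, neighbors, w_res=0.5, w_ren=0.5):
    """neighbors: list of (position, residual_energy, renewable_income).
    Score = geographic advance x weighted energy availability (assumed
    form). Returns the index of the best next hop, or None if no
    neighbor makes progress toward the destination."""
    best, best_score = None, 0.0
    for i, (pos, residual, renewable) in enumerate(neighbors):
        adv = advance(node_pos, pos, dest_pos)
        if adv <= 0:
            continue  # only forward toward the destination
        score = adv * (w_res * residual + w_ren * renewable)
        if score > best_score:
            best, best_score = i, score
    return best
```

With equal weights, a solar-powered neighbor at the same position as a battery-only one wins the tie, which is the qualitative behavior the abstract describes.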
Unmanned Aerial Vehicle (UAV) systems are increasingly used in a broad range of applications requiring extensive communications, either to interconnect the UAVs with each other or with ground resources. Focusing either on the modeling of UAV operations or on communication and network dynamics, available simulation tools fail to capture the complex interdependencies between these two aspects of the problem. The main contribution of this paper is a flexible and scalable open-source simulator, FlyNetSim, bridging the two domains. The overall objective is to enable the simulation and evaluation of UAV swarms operating within articulated multi-layered technological ecosystems, such as the Urban Internet of Things (IoT). To this aim, FlyNetSim interfaces two open-source tools, ArduPilot and ns-3, creating individual data paths between the devices operating in the system using publish/subscribe-based middleware. The capabilities of FlyNetSim are illustrated through several case-study scenarios, including UAVs interconnecting with a multi-technology communication infrastructure and intra-swarm ad hoc communications.
The widespread exploitation of cloud technologies forces cloud providers to forecast peaks of requests so that they can always guarantee adequate quality of service to their currently served customers. Static resource provisioning is rarely affordable for large Data Center Networks (DCNs), and dynamic resource management can be rather complex, particularly for networking. Hence, we argue for the relevance of simulator-based approaches, which are helpful in planning DCN deployment and in analyzing performance in response to expected traffic patterns. However, existing cloud simulators exhibit non-negligible limitations in modeling the networking aspects of cloud Infrastructure as a Service (IaaS) deployments. Therefore, we propose DCNs-2, a novel extension package for the ns-2 simulator, as a valid solution to efficiently simulate DCNs with all their primary entities, such as switches, physical machines, racks, and virtual machines.
To achieve device-free person detection, various types of signal features, such as moving statistics and wavelet representations, have been extracted from the Wi-Fi Received Signal Strength Indicator (RSSI), whose value fluctuates when human subjects move near the Wi-Fi transceivers. However, these features do not work effectively under different deployments of Wi-Fi transceivers, because each transceiver has a unique RSSI fluctuation pattern that depends on its specific wireless channel and hardware characteristics. To address this problem, we present WiDet, a system that uses a deep Convolutional Neural Network (CNN) for person detection. The CNN achieves effective and robust detection feature extraction by exploring distinguishable patterns in Wi-Fi RSSI data. With a large number of internal parameters, the CNN can record and recognize the different RSSI fluctuation patterns of different transceivers. We further apply data augmentation to improve the algorithm's robustness to wireless interference and pedestrian speed changes. To take advantage of the wide availability of existing Wi-Fi devices, we design a collaborative sensing technique that can recognize the subjects' moving directions. To validate the proposed design, we implement a prototype system that consists of three Wi-Fi packet transmitters and one receiver on low-cost, off-the-shelf embedded development boards. In a multi-day experiment with a total of 163 walking events, WiDet achieves 94.5% detection accuracy, outperforming the moving-statistics and wavelet-representation-based approaches by 22% and 8%, respectively.
Mobile Crowdsensing (MCS) applications take advantage of the ubiquity and sensing power of smartphones for data gathering. Designing an incentive mechanism that motivates individuals to participate in such systems is vital. Reverse Auction (RA) is a popular framework in which the participants bid their expected returns for their contributions, and a task creator selects a subset of them with a view to maximising the cumulative contribution within a prescribed budget. In RA, the participants are not aware of their winning probability before the auction is closed. If the participants are given some statistical information about the returns associated with their bid, they may reduce their bid in order to increase their returns. In this paper, we propose Bid-Revisable Reverse Auction (BRRA), as well as an enhancement called BRRA with Virtual Contribution (BRRA-VC), wherein the participants are allowed to revise their bids during the auction, based on the feedback they receive about the winning probability of their submitted bids. Through extensive experiments, we show that, in comparison to RA, the BRRA schemes not only benefit the task creator, by increasing the return on investment (i.e., the total contribution for the same budget) and decreasing the participant dropout ratio, but also profit the participants who are open to revising their bids, by increasing their received rewards as well as their winning chances.
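A standard budget-constrained winner selection of the kind a reverse auction builds on can be sketched as follows. The greedy contribution-per-cost rule is a common choice assumed here for illustration; the paper's exact selection rule is not given in the abstract:

```python
def select_winners(bids, budget):
    """bids: list of (contribution, price) pairs submitted by participants.
    Greedily pick participants in decreasing contribution/price ratio
    until the budget is exhausted; returns sorted winner indices.
    (Assumed rule; BRRA additionally feeds winning-probability
    information back so losers can revise their bids.)"""
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][0] / bids[i][1], reverse=True)
    winners, spent = [], 0.0
    for i in order:
        contribution, price = bids[i]
        if spent + price <= budget:
            winners.append(i)
            spent += price
    return sorted(winners)
```

In a bid-revisable variant, a participant left out of `winners` could lower its price and be reconsidered in the next round, which is the feedback loop the abstract describes.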
During network operation, a large number of alerts are generated every day, reflecting the occurrence of abnormal conditions. Traditional analysis methods depend too heavily on the knowledge of equipment manufacturers and industry experts, so novel ways to overcome this problem in network management are needed. The application of data mining to alarm pattern analysis has therefore become a focus of current research, and researchers have developed many kinds of algorithms fitting different application characteristics. This paper proposes the concept of association-matrix pattern mining: before mining the data, we use the multi-dimensional information in the data to construct association matrices between the items. We then develop a conditional pattern mining algorithm based on the association matrix, which aims to find fewer but more meaningful results. Our experiments validate that, with the multi-dimensional information stored in the association matrix, the algorithm performs better than traditional pattern mining methods at extracting detailed alarm patterns from network alarm floods.
The main objective of this paper is to present a new, accurate power profiler for embedded systems and smartphones. The second objective is for it to serve as a tutorial explaining the main steps of building power profilers for embedded and mobile systems in general. We start by describing the general methodology of building a power profiler. Then, we show how each step is undertaken to build a profiler with two power models. The first was an artificial neural network (called N2) that presented a lot of noise in its estimates. After debugging and improvement, a second model, a NARX neural network (which we call N3), was built. It eliminated all the drawbacks of the first model and achieved a mean absolute percentage error of 2.8%.
Industrial networks differ from other kinds of networks because they require real-time performance in order to meet strict requirements. With the rise of low-power wireless standards, industrial applications have started to use wireless communications to reduce deployment and management costs. IEEE 802.15.4-TSCH is currently a promising standard, relying on a strict transmission schedule to provide strong guarantees. However, the radio environment still exhibits time-variable characteristics. Thus, the network has to provision sufficient resources (bandwidth) to cope with the worst case while still achieving high energy efficiency. The 6TiSCH IETF working group defines a stack to tune the TSCH schedule dynamically. In this paper, we analyze in depth the stability and convergence of a 6TiSCH network in an indoor testbed. We identify the main causes of instability, and we propose solutions to address each of them. We show that our solutions significantly improve stability.
One of the most recent and reliable MAC protocols for low-rate wireless personal area networks is IEEE 802.15.4-TSCH. The formation of an IEEE 802.15.4-TSCH network depends on the periodic transmission of Enhanced Beacons (EBs) and, by extension, on the scheduling of EB transmissions. In this paper, we present and analyze a negative phenomenon that can occur in most of the autonomous EB scheduling methods proposed in the literature. This phenomenon, which we call full collision, takes place when all the neighboring EB transmissions of a joining node collide. As a consequence, a node may not be able to join the network quickly, while also consuming a considerable amount of energy. In order to eliminate collisions during EB transmissions, and thus avoid this phenomenon, we propose a novel autonomous collision-free EB scheduling policy. The results of our simulations demonstrate the superiority of our policy compared to two other recently proposed policies.
The Industrial Internet of Things (IIoT) faces multiple challenges in achieving high reliability, low latency, and low power consumption. The IEEE 802.15.4 Time-Slotted Channel Hopping (TSCH) protocol aims to address these issues by using frequency hopping to improve transmission quality when coping with low-quality channels. However, an optimized transmission system should also try to favor the use of high-quality channels, which are unknown a priori. Hence, reinforcement learning algorithms could be useful.
In this work, we evaluate nine Multi-Armed Bandit (MAB) algorithms, including learning algorithms specifically adapted to this case, in an IEEE 802.15.4-TSCH context, in order to select the ones that choose high-performance channels, using data collected through the FIT IoT-LAB platform. Then, we propose a combined mechanism that integrates the selected algorithms with TSCH. The performance evaluation suggests that our proposal can significantly improve the packet delivery ratio compared to the default TSCH operation, thereby increasing the reliability and energy efficiency of transmissions.
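As a concrete example of a MAB policy applied to channel selection, the classic UCB1 algorithm (a usual candidate in such evaluations, though the abstract does not list the nine algorithms compared) can be sketched as follows, taking an ACKed frame as the reward:

```python
import math
import random

class UCB1:
    """UCB1 bandit over TSCH channels; reward = 1 for an ACKed frame."""
    def __init__(self, n_channels):
        self.n = [0] * n_channels        # transmissions per channel
        self.mean = [0.0] * n_channels   # empirical PDR per channel
        self.t = 0
    def select(self):
        """Pick the channel maximizing mean PDR + exploration bonus."""
        self.t += 1
        for c, count in enumerate(self.n):
            if count == 0:
                return c  # try each channel once first
        ucb = [self.mean[c] + math.sqrt(2 * math.log(self.t) / self.n[c])
               for c in range(len(self.n))]
        return max(range(len(self.n)), key=lambda c: ucb[c])
    def update(self, c, acked):
        """Incremental mean update after observing an ACK (or its absence)."""
        self.n[c] += 1
        self.mean[c] += (float(acked) - self.mean[c]) / self.n[c]
```

Simulating channels with per-channel delivery probabilities, the policy concentrates its transmissions on the best channel after a short exploration phase, which is the mechanism by which such algorithms raise the packet delivery ratio.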
To overcome the high propagation loss and satisfy a given link budget, millimeter-wave (mmW) communication systems rely on highly directional antennas, both at the base station (BS) and the user equipment (UE). Due to this directionality, initial access (IA) and association can be particularly challenging. Existing approaches for IA in directional networks suffer from long discovery times and/or a high misdetection probability of the UE. In this paper, we propose FastLink, an efficient IA protocol for mmW systems with electronically steerable antennas. FastLink always transmits/receives using the narrowest possible beam, allowing high beamforming gains and a low misdetection rate. It uses a unique binary-search-based algorithm, called 3DPF, to scan only a small subset of the angular space and find the best transmit-receive beam pair in logarithmic time. We formulate the beam-finding process as a sparse problem, exploiting the poor scattering nature of mmW channels. Compressive sensing is then used to determine the minimum number of measurements needed to reconstruct the sparse channel. 3DPF is incorporated into FastLink to establish the directional link, and the required messaging between the BS and the UE is explained in detail. For performance evaluation, we first conduct simulations based on the NYU mmW channel model and then experiment with a custom mmW testbed utilizing uniform planar arrays and operating at 29 GHz. Our extensive simulations and hardware experiments verify the efficiency of FastLink, and show that 3DPF can reduce the search time by 65-99% compared to an 802.11ad-like beam-finding scheme.
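The logarithmic-time sector halving that underlies a binary beam search can be sketched in one dimension as below. This is a simplified illustration with an idealized `measure` callback, not 3DPF itself, which operates on transmit-receive beam pairs in 3D with compressive measurements:

```python
def binary_beam_search(measure, lo=0.0, hi=360.0, width=5.0):
    """measure(center_deg, span_deg) -> received power when probing the
    sector [center - span/2, center + span/2]. Repeatedly probe the two
    halves of the current sector and keep the stronger one, until the
    sector is as narrow as the narrowest beam. Needs ~2*log2(360/width)
    probes instead of a 360/width exhaustive sweep."""
    while hi - lo > width:
        mid = (lo + hi) / 2
        left = measure((lo + mid) / 2, mid - lo)
        right = measure((mid + hi) / 2, hi - mid)
        if left >= right:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2  # estimated angle of the best beam
```

With a 5-degree final beam width, the loop runs about seven rounds (fourteen probes) rather than the seventy-two probes of an exhaustive sweep, which is the source of the reported search-time reduction in binary schemes of this kind.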
The advent of the next iteration of mobile and wireless communication standards, the so-called 5G, is already a reality. In December 2017, 3GPP released the first set of specifications of the 5G New Radio (NR), which introduced important innovations with respect to legacy networks. One of the main novelties is the use of very high frequencies in the radio access, which requires highly directional transmissions, or beams, to overcome the severe propagation losses. It is therefore paramount to manage these beams efficiently in order to always choose the optimal set of beams. In this work, we describe the first NR-compliant beam management framework for the ns-3 network simulator. We aim at providing an open-source and fully customizable solution that lets the scientific community implement their solutions and assess their impact on end-to-end network performance. Additionally, we describe the modifications to ns-3 necessary to align the radio frame structure with what the 3GPP standards mandate. Finally, we validate our results by running a simple mobility scenario.
Fog computing is a promising solution to provide low-latency and ubiquitously available computation offloading services to widely distributed Internet of Things (IoT) devices with limited computing capabilities. One obstacle, however, is how to seamlessly hand over mobile IoT devices among different fog nodes to avoid service interruption. In this paper, we propose seamless fog (sFog), a new framework supporting efficient congestion control and seamless handover schemes. Intrinsically, sFog improves system performance during handovers (achieved by the handover scheme), and guarantees the performance does not degrade when handovers do not occur (achieved by the congestion control scheme). Through the congestion control scheme, jobs are efficiently offloaded without causing unnecessary system idling; through the handover scheme, jobs are pre-migrated to the target fog node when a handover is about to occur, in order to reduce migration delay. In order to evaluate the performance of sFog, we propose a theoretical framework and establish a real-world prototype. Both the theoretical and experimental results show that sFog achieves substantial delay reductions compared with traditional benchmark handover schemes.
In this paper, we focus on task offloading in a two-tiered mobile cloud computing environment. We consider several users with energy-constrained tasks that can be offloaded to cloudlets or to a remote cloud with differentiated system and network resource capacities. We investigate an offloading policy that decides which tasks should be offloaded and determines the offloading location, on the cloudlets or on the cloud, with the objective of minimizing the total energy consumed by the users. We formulate this problem as a Non-Linear Binary Integer Program. Since the centralized optimal solution is NP-hard, we propose a distributed linear-relaxation heuristic based on a Lagrangian decomposition approach. To solve the subproblems, we also propose a greedy heuristic that computes the best cloudlet selection and bandwidth allocation according to the tasks' energy consumption. We compare our proposal against existing approaches under different system parameters (e.g., CPU resources), a variable number of users, and six applications, each having a specific traffic pattern, resource demands, and time constraints. Numerical results show that our proposal outperforms existing approaches. We also analyze the performance of our proposal for each application.
Mobile Edge Computing (MEC) is essential for enabling new innovative technologies that depend on low-latency computation environments, such as Augmented Reality (AR). As AR applications continue to deliver better graphics with richer interactive features, AR devices will increasingly rely on nearby cloudlets to assist with the demanding computation requirements of AR applications. Supporting multiplayer interactions in an MEC environment brings many challenges. Processing user interactions can be computation-intensive, especially when multiple users in close proximity to each other are acting simultaneously; the limited resources of a cloudlet could be overwhelmed if too many players are involved. In this paper, we envision a near-future scenario where players wearing AR heads-up display devices engage with other players over a large area with densely deployed cloudlets. We first propose a novel system model, then formulate the Decentralized Multiplayer Coordination (DMC) Problem with the aim of minimizing the game frame duration among players, and devise an efficient algorithm for the problem. We finally evaluate the performance of the proposed algorithm through experimental simulations. Experimental results demonstrate that the proposed algorithm is promising.
We consider multi-hop wireless mesh networks and examine whether capacity may be improved by distributing the data flows for each origin-destination (OD) pair across multiple routes. The network geometry and the application of technologies such as beamforming or full-duplex are captured by a conflict matrix that specifies which pairs of transmissions (or 'links') are compatible, in the sense that they can occur simultaneously without collisions or other conflicts. Under a binary interference framework, the conflict matrix is used to derive the maximal sets of compatible links, which in turn yield a system of linear inequalities bounding the links' data flows. These steps are computationally expensive, but we show how the system usually collapses when re-expressed in terms of flows on routes. The theory is illustrated with a simple 'Braess' network example with a single OD pair. We then consider networks with two OD pairs whose data flows have some nodes in common and thus contend with each other. We show how to design networks that exploit multiple relay nodes and routes so as to increase capacity. We then examine the same problem on ensembles of random networks. We find that in many cases, capacity can be improved if the OD pairs distribute their traffic over several routes. We pose and solve a set of linear programs that model (i) cooperative behavior; (ii) the optimization of one OD pair when presented with a fixed route assignment by the other; and (iii) variants of these games when both OD pairs are in contention with background (single-hop) traffic.
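The first derivation step, enumerating the maximal sets of mutually compatible links from the compatibility matrix, can be sketched by brute force for small networks; at the scale of the paper's random ensembles one would use a proper maximal-clique algorithm instead:

```python
from itertools import combinations

def maximal_compatible_sets(compat):
    """compat: symmetric boolean matrix where compat[i][j] is True when
    links i and j may transmit in the same slot. Exhaustively enumerate
    all compatible sets and keep the maximal ones (exponential, so only
    suitable for small illustrative networks)."""
    n = len(compat)

    def ok(s):
        # a set is compatible iff every pair in it is compatible
        return all(compat[i][j] for i, j in combinations(s, 2))

    sets_ = [s for r in range(1, n + 1)
             for s in combinations(range(n), r) if ok(s)]
    # maximal = not a strict subset of any other compatible set
    return [set(s) for s in sets_
            if not any(set(s) < set(t) for t in sets_)]
```

Each maximal compatible set then contributes one scheduling variable, and the requirement that the time shares of these sets sum to at most one gives the linear inequalities bounding the links' data flows.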
There is rising interest in applying SDN principles to wireless multi-hop networks, as this paves the way towards the programmability and flexibility that today's distributed wireless networks (ad hoc, mesh, or sensor networks) lack, with the promising prospect of better mitigating issues such as scalability, mobility, and interference management, and of supporting improved, controlled QoS services.
This paper investigates this latter aspect and proposes an Integer Linear Programming (ILP) based wireless resource allocation scheme for provisioning point-to-point and point-to-multipoint end-to-end virtual links with bandwidth requirements in software-defined multi-radio multi-channel wireless multi-hop networks. The proposed scheme considers the specificities of wireless communications: the broadcast nature of wireless links, which can be leveraged for point-to-multipoint resource allocation, and the interference between surrounding wireless links. It also considers the switching resource consumption of wireless nodes since, for the time being, the size of SDN forwarding tables remains quite limited. A Genetic Algorithm derived from the ILP formulation is also proposed to address the case of large wireless networks. Our simulation results show that both methods work effectively.
The oceans remain largely unknown. To change this worrying reality, underwater wireless sensor networks (UWSNs) have been proposed for automated, real-time data collection from the oceans, including the life and events beneath them. Currently, the underwater acoustic channel is the most viable technology for long-range underwater wireless communication, but its use impairs data collection in UWSNs: it presents strong signal absorption and is severely affected by human-made and natural noise in the aquatic environment. As a result, data collection in UWSNs is unreliable. In recent years, opportunistic routing has been proposed to improve the reliability of UWSN communication and, consequently, data delivery. However, the proposed opportunistic routing protocols will not always perform well, as the neighborhood of a node might not be dense enough, or neighbors might not be at distances that favor data communication. In this paper, we propose PCR, a power-control-based opportunistic routing protocol for reliable and energy-efficient data delivery in UWSNs. PCR selects the most suitable transmission power level at each underwater sensor node, aiming to improve the packet delivery probability at each hop. To avoid selecting high transmission power and uncontrollably enlarging the next-hop candidate set, which would drastically increase energy consumption, PCR considers the energy waste that would occur at each neighboring underwater sensor node. Numerical results show that PCR improves the packet delivery probability and reduces the energy wasted in data delivery by adjusting the transmission power and selecting a suitable candidate set, leading to energy conservation when compared with related proposals in the literature.
Cellular networks are susceptible to being severely capacity-constrained during peak traffic hours or at special events such as sports games and concerts. Many other applications are emerging for LTE and 5G networks that inject machine-to-machine (M2M) communications from Internet of Things (IoT) devices that sense the environment and react to the diurnal patterns observed. For both users and devices, the high congestion levels frequently lead to numerous retransmissions and severe battery depletion. However, there are frequently social cues that could be gleaned from interactions on websites and social networks of shared interest to a particular region at a particular time. Cellular network operators have sought to address these high levels of fluctuation and traffic burstiness by offloading to unlicensed bands, which could be guided by these social cues. In this paper, we leverage shared-interest information in a given area to conserve power via offloading to the emerging Citizens Broadband Radio Service (CBRS). Our GreenLoading framework enables hierarchical data delivery to significantly reduce power consumption and includes a Broker Priority Assignment (BPA) algorithm to select data brokers for users. Using in-field measurements and web-based Google data across four diverse U.S. cities, we show an order-of-magnitude power saving, on average, via GreenLoading over a 24-hour period, and up to a 35-fold saving at peak traffic times. Finally, we consider the role that relaxed wait times can play in the power efficiency of a GreenLoaded network.
Device-to-device (D2D) communications is a promising technique for improving the efficiency of 5G networks. Employing channel-adaptive resource allocation can yield a large enhancement in almost any performance metric of D2D communications (e.g., energy efficiency). Centralized approaches require knowledge of the D2D links' Channel State Information (CSI) at the BS, but CSI reporting suffers from the limited number of resources available for feedback transmission. Alternatively, we propose a distributed resource allocation algorithm that benefits from the users' knowledge of their local CSI in order to minimize the users' transmission power while maintaining a predefined throughput constraint. The key idea is that users compute their local performance metrics (e.g., energy efficiency) and then use a new signaling mechanism to share these values with each other. Under certain conditions, the performance of this distributed algorithm matches that of the ideal scheduling (i.e., with global CSI knowledge of all the D2D links). We describe how this technique can be implemented simply by adapting existing CSI reporting (e.g., in Long-Term Evolution (LTE) systems). Furthermore, numerical results are presented to corroborate our claims and demonstrate the gain that the proposed distributed scheduling brings to cellular networks compared to the best centralized limited-feedback scheduling.
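The local power-minimization step each D2D user can perform from its own CSI is the textbook inversion of the Shannon-rate constraint, sketched below. This is a generic illustration, not the paper's full algorithm; the bandwidth, noise power, and channel gain arguments are assumed inputs:

```python
def min_power(rate_bps, bandwidth_hz, noise_w, channel_gain):
    """Minimum transmit power (W) meeting a Shannon-rate constraint.
    From R <= B * log2(1 + g * p / N), invert for the power:
        p = (2**(R/B) - 1) * N / g
    so each user needs only its own (local) channel gain g to compute
    the least power that satisfies its throughput target."""
    return (2 ** (rate_bps / bandwidth_hz) - 1) * noise_w / channel_gain
```

For example, a 1 Mbit/s target over 1 MHz (spectral efficiency 1 bit/s/Hz) with -90 dBm noise and -60 dB channel gain needs 1 mW; a user with a stronger channel computes a proportionally lower power, which is the quantity the proposed signaling mechanism lets users share.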
Heterogeneous Cloud Radio Access Networks (H-CRANs) are a promising cost-effective architecture for 5G systems which incorporates cloud computing into Heterogeneous Networks (HetNets). In this work we consider the joint beamforming and clustering (user-to-Remote Radio Head (RRH) association) problem for the downlink of an H-CRAN, solving the sum-rate maximization problem under fronthaul link capacity and per-RRH power constraints. The main objective is to manage the beamforming and user association process over time by taking user mobility into account as a key factor for tuning the solution's parameters. More precisely, based on the mobility profile of users (mainly velocity), we propose an advanced Mobility-Aware Beamforming and User Clustering (MABUC) algorithm which selects the best Channel State Information (CSI) feedback strategy and periodicity to achieve the targeted sum-rate performance while ensuring the minimum possible cost (complexity and CSI signaling). MABUC inherits the behavior of our previously proposed Hybrid algorithm, which periodically activates dynamic and static clustering strategies to manage the allocation process over time. The MABUC algorithm, however, accounts for user mobility through a CSI estimation model, which improves its performance compared to reference schemes. Our proposed algorithm has the benefit of meeting the targeted sum-rate performance while being aware of and adaptive to practical system constraints such as mobility, complexity, and signaling costs.
In the last decade, underwater wireless sensor networks (UWSNs) have attracted a lot of attention from the research community thanks to their wide range of applications, which include seabed mining, military operations, and environmental monitoring. Compared to terrestrial networks, UWSNs pose new research challenges such as three-dimensional node deployment and the use of acoustic signals. Despite the large number of routing protocols that have been developed for UWSNs, there are very few analytical results that study their optimal configuration given the system parameters (node density, transmission frequency, etc.). In this paper, we make one of the first steps to cover this gap. We study an abstraction of an opportunistic routing protocol and derive its optimal working conditions based on the network characteristics. Specifically, we prove that using a depth threshold, i.e., the minimum length of one transmission hop towards the surface, is crucial for the optimality of opportunistic protocols, and we give a numerical method to compute it. Moreover, we show that there is a critical depth threshold above which no packet can be transmitted successfully to the surface sinks in large networks, which further highlights the importance of properly configuring the routing protocol. We discuss the implications of our results and validate them by means of stochastic simulations in NS3.
Video streaming is a dominant contributor to global Internet traffic. Consequently, gauging network performance w.r.t. video Quality of Experience (QoE) is of paramount importance to both telecom operators and regulators. Modern video streaming systems, e.g., YouTube, have huge catalogs of billions of different videos that vary significantly in content type. Owing to this difference, the QoE of different videos as perceived by end users can vary for the same network Quality of Service (QoS). In this paper, we present a methodology for benchmarking the performance of mobile operators w.r.t. Internet video that considers this variation in QoE. We take a data-driven approach to build a predictive model using supervised machine learning (ML) that takes into account a wide range of videos and network conditions. To that end, we first build and analyze a large catalog of YouTube videos. We then propose and demonstrate a framework of controlled experimentation based on active learning to build the training data for the targeted ML model. Using this model, we then devise YouScore, an estimate of the percentage of YouTube videos that may play out smoothly under a given network condition. Finally, to demonstrate the benchmarking utility of YouScore, we apply it to an open dataset of real user mobile network measurements to compare the performance of mobile operators for video streaming.
QUIC is a new transport-layer protocol proposed by Google that is rapidly increasing its share of Internet traffic. It is designed to improve performance for HTTPS connections and partly replace TCP, the dominant Internet transport standard for decades, in application scenarios where new requirements such as packet encryption, stream multiplexing, and connection migration are emerging and have proven challenging for the TCP service model. QUIC has been massively deployed to serve some of the most popular Internet services, including YouTube. To enable easy deployment and rapid evolution of the protocol, the current deployment of QUIC runs in user space, usually as part of the Chrome/Chromium browser. This potentially reduces the achievable performance of the protocol, as each message, including control messages, triggers a context switch between kernel and user space. To investigate the potential performance of QUIC in kernel mode and to achieve a fair comparison between QUIC and TCP, we implement QUIC in the Linux kernel, where TCP and other transport-layer protocols run. We have conducted extensive measurements in both virtual machines and a custom-built Wi-Fi testbed to compare the two protocols. The empirical results indicate that QUIC outperforms TCP in major application scenarios such as networks with low latency and high packet loss rates, while QUIC also shows TCP-friendly rate control when the two protocols run concurrently.
Today Wireless Sensor Networks (WSNs) can be found in many application areas, e.g. they are used in smart home systems or in industrial settings in order to monitor and control machinery. The latter usually requires a guaranteed performance of the network. This mainly addresses the real-time capability of the network, i.e. knowledge of possible delays which occur on the multi-hop route between sender and receiver is needed. In this paper, a novel approach to model multi-hop networks with a fixed number of nodes uniformly distributed on a square area is proposed. Accordingly, no explicit knowledge of the routing and the topology is needed. That is, no specific network is reflected, but a class of random networks sharing common properties is modelled. In our model, all nodes generate frames according to a Poisson process with the same rate and send them to a common gateway. Based on this scenario, we deduce a mathematical model for the probability of simultaneous transmissions assuming that a generic CSMA MAC protocol is implemented. We use this result in order to derive an expression for the mean delays arising on the links, i.e. the medium access and queueing delay, of the multi-hop route in the special case of the IEEE 802.15.4 MAC and PHY protocols being used. The model results correspond well to an OMNeT++ simulation if the offered traffic does not exceed a certain limit. Beyond this limit, up to the capacity of the network, the model still captures the behaviour of the simulation but the deviation grows.
In Vehicular Ad Hoc Networks (VANETs), nodes periodically share beacons in order to convey information about identity, velocity, acceleration, and position. Truthful positioning of nodes is essential for the proper behavior of applications, including the formation of vehicular platoons. Incorrect position information can cause problems such as increased fuel consumption, reduced passenger comfort, and in some cases even accidents. In this paper, we design and evaluate Vouch: a secure proof-of-location scheme tailored for VANETs. The scheme leverages the node positioning capability of fifth generation (5G) wireless network roadside units. The key idea of Vouch is to disseminate periodic proofs of location, combined with plausibility checking of movement between proofs. We show that Vouch can detect position falsification attacks in high-speed scenarios without incurring a large overhead.
Slicing of a 5G network by creating virtualized instances of network functions facilitates the support of different service types with varying requirements. The management and orchestration layer identifies the components in the virtualization infrastructure to form an end-to-end slice for an intended service type. The key security challenges for softwarized 5G networks are: (i) ensuring availability of a centralized controller/orchestrator, (ii) association between legitimate network slice components, and (iii) network slice isolation. To address these challenges, in this paper we propose a novel implicit mutual authentication and key establishment protocol with group anonymity, using proxy re-encryption on elliptic curves. The protocol provides (i) controller-independent distributed association between components of a network slice, (ii) implicit authentication between network slice components to allow secure association, (iii) secure key establishment between component pairs for secure slice isolation, and (iv) service group anonymity. The proposed protocol's robustness is validated with the necessary security analysis. The computation and bandwidth overheads of the proposed protocol are compared with those of a certificate-based protocol; our protocol has 9.52% less computation overhead and 13.64% less bandwidth overhead for Type A1 pairing.
Recent work shows that an adversary can exploit a coupling effect induced by hidden nodes to launch a cascading attack causing global congestion in a Wi-Fi network. The underlying assumption is that the power of interference caused by a hidden node is an order of magnitude stronger than the signal sent to the receiver. In this paper, we investigate the feasibility of cascading attacks with weakly interfering hidden nodes, that is, when the signal-to-interference ratio is high. Through extensive ns-3 simulations, including for an indoor building model, we show that cascading attacks are still feasible. The attacks leverage two PHY-layer phenomena: receiver capture and bit rate adaptation. We show that the attack relies on a coupling effect, whereby the average bit rate of a transmission pair drops sharply as the channel utilization of a neighboring pair gets higher. This coupling effect facilitates the propagation of the attack throughout the network.
In this paper we evaluate the feasibility of running a lightweight Intrusion Detection System within a constrained sensor or IoT node. We propose mIDS, which monitors and detects attacks using a statistical analysis tool based on Binary Logistic Regression (BLR). mIDS takes as input only local node parameters, for both benign and malicious behavior, and derives a normal-behavior model that detects abnormalities within the constrained node. We offer a proof of correct operation by testing mIDS in a setting where network-layer attacks are present. In such a system, critical data from the routing layer is obtained and used as a basis for profiling sensor behavior. Our results show that, despite the lightweight implementation, the proposed solution achieves attack detection accuracy in the range of 96%-100%.
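The BLR classification step behind a detector like mIDS can be sketched in a few lines of pure Python. This is an illustrative sketch only, not the paper's implementation: the training routine (plain stochastic gradient descent), feature choice, and function names are all assumptions.

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train_blr(samples, labels, lr=0.1, epochs=500):
    """Fit binary logistic regression by stochastic gradient descent.
    samples: list of feature vectors (e.g., local routing-layer counters);
    labels: 1 = malicious behavior, 0 = benign behavior."""
    n_features = len(samples[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def is_attack(w, b, x, threshold=0.5):
    """Flag a new observation as abnormal if P(malicious) >= threshold."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= threshold
```

On a constrained node, the trained weights `w, b` are the only state that must be stored, which is what makes a BLR-style detector attractive at this scale.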
Data aggregation is widely used in wireless sensor networks (WSNs) due to resource constraints on computational capability, energy, and bandwidth. Because WSNs are often deployed in unattended hostile environments, they are prone to various attacks. Traditional security technologies such as privacy protection and encryption cannot address attacks from internal nodes of the network. Therefore, trust management mechanisms for data aggregation have become a hot research topic, and an efficient trust management mechanism plays an important role in data aggregation.
In this paper, we propose an efficient trust model based on context and the data density correlation degree. Our proposed trust model consists of three major components: sensing trust, link trust, and node trust. We take full account of the characteristics of data aggregation and of the different impacts of node trust, link trust, and sensing trust on the security of data aggregation. We also take the data correlation degree into account when computing sensing trust, which leads to a more accurate trust result.
Experimental results show that, compared to existing trust models, our proposed trust model provides more accurate sensing trust and improves throughput and robustness against malicious attacks. Our proposed trust model is therefore more suitable for data aggregation than conventional trust models.
The Internet of Things (IoT) consists of network connections among devices for the execution of predefined activities. One difficulty in IoT is the integration of services, which hinders interoperability and involves creating a procedure that collects, stores, and processes data from different sources and formats. This paper introduces an approach for the storage and retrieval of data from multiple sensor sources that provides a RESTful API for the management of multiple database types and data formats. The evaluation scenario consists of integrating the procedure with a data source containing climate information for cities worldwide. Data were imported through a process that stores them in PostgreSQL and MongoDB, exposing an API that supports the JSON and XML data formats. The performance evaluation methodology includes a workload test and an influence-factor analysis. The results compare different strategies for data conversion and storage and show better performance for PostgreSQL and JSON compared to MongoDB and XML.
There are many mobility models in the literature, with diverse formats and origins. Although there are studies that analyze and characterize these models, there is a need for a framework that can compare them easily. MOCHA (Mobility framework for CHaracteristics Analysis) is a tool that characterizes mobility models and makes their comparison straightforward. We implemented 9 social, spatial, and temporal characteristics, which were extracted from various distinct (real and synthetic) mobility traces. MOCHA has a classifying module that assigns to each characteristic the statistical distribution that best describes it. As a validation, all the traces were compared using the t-SNE method for data visualization, which placed similar traces close together. One of the advantages of MOCHA is its ease of use: it can read diverse trace formats and convert them to its standard format, allowing different types of traces, such as check-in, GPS, and contact traces, to be compared. The metrics used in the tool can become a standard for trace analysis and comparison in the literature, giving a better view of where one trace stands relative to others. MOCHA is available for download at https://github.com/wisemap-ufmg/MOCHA.
Mobility and network traffic have traditionally been studied separately. Their interaction is vital for future generations of mobile services and for effective caching, but has not been studied in depth with real-world big data. In this paper, we characterize mobility encounters and study the correlation between encounters and web traffic profiles using large-scale datasets (30 TB in size) of WiFi and NetFlow traces. The analysis quantifies these correlations for the first time, across spatio-temporal dimensions, for device types grouped into on-the-go Flutes and sit-to-use Cellos. The results consistently show a clear relation between mobility encounters and traffic across different buildings over multiple days, with encountered pairs showing higher traffic similarity than non-encountered pairs, and long encounters being associated with the highest similarity. We also investigate the feasibility of learning encounters through web traffic profiles, with implications for dissemination protocols and contact tracing. This provides a compelling case to integrate both mobility and web traffic dimensions in future models, not only at the individual level, but also at pairwise and collective levels.
Modeling mobility is a key aspect of simulating different types of networks. To cater to this requirement, a large number of models have emerged in recent years. They are typically (a) trace-based, where GPS recordings are re-run in simulation, (b) synthetic, describing mobility with formal methods, or (c) hybrid, i.e., synthetic models based on statistically evaluated traces. All these families of models have advantages and disadvantages. For example, trace-based models are very inflexible in terms of simulation scenarios but behave realistically, while synthetic models are very flexible but lack realism. In this paper, we propose a new mobility model, called TRAILS (TRAce-based ProbabILiStic Mobility Model), which bridges the gap between these families and combines their advantages in a single model. The main idea is to extract a mobility graph from real traces and use it in simulation to create scalable, flexible simulation scenarios. We show that TRAILS is more realistic than synthetic models while achieving full scalability and flexibility. We have implemented and evaluated TRAILS in the OMNeT++ simulation framework.
We consider a VoD (Video on-Demand) platform designed for vehicles traveling on a highway or other major roadway. Typically, cars or buses would subscribe to this delivery service so that their passengers get access to a catalog of movies and series stored on a back-end server. Videos are delivered through IEEE 802.11p Road Side Units deployed along the highway. In this paper, we propose a simple analytical and yet accurate solution to estimate (at the speed of a click) two key performance parameters for a VoD platform: (i) the total amount of data downloaded by a vehicle over its journey and (ii) the total "interruption time", which corresponds to the time a vehicle spends with the playback of its video interrupted because of an empty buffer. After validating its accuracy against a set of simulations run with ns-3, we show an example of application of our analytical solution for the sizing of an IEEE 802.11p-based VoD platform.
Efficient data delivery in vehicular named data networks (VNDNs) can greatly enhance safety and entertainment for drivers. For this purpose, in-network caching is used to expedite data delivery. In this work, a distributed probability-based caching scheme with content-location awareness (DPC-CLA) is proposed for efficient data delivery in VNDNs, in which roadside units (RSUs) with caching capabilities can accurately assess the relative popularity of the contents of received packets by normalizing the reciprocal sum of the request hop counts over a given period. In addition, the RSUs can also perceive the surrounding cache locations using a weighted recursive sum of the neighbouring cache intervals. Simulation results show that the proposed DPC-CLA outperforms four existing caching mechanisms in terms of the average number of hops and the cache hit ratio.
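The hop-based popularity measure can be sketched as follows. This is our reading of "normalizing the reciprocal sum of the request hops", not the authors' code; the function name and data layout are illustrative assumptions.

```python
def popularity_scores(request_hops):
    """Estimate relative content popularity at an RSU.
    request_hops: dict mapping a content name to the list of hop
    counts observed for requests of that content during the
    observation period. Each request contributes 1/hops, so contents
    requested often and from nearby weigh more; scores are then
    normalized over all observed contents."""
    raw = {c: sum(1.0 / h for h in hops) for c, hops in request_hops.items()}
    total = sum(raw.values()) or 1.0
    return {c: v / total for c, v in raw.items()}
```

A caching decision could then, for example, cache a content with probability equal to its normalized score, which is one plausible way to make the scheme "probability-based".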
Dynamic Modulation Scaling (DMS) is a well-known mechanism that can effectively exploit the tradeoff between communication time and energy consumption. In recent years a number of studies have suggested that DMS techniques can reduce energy consumption while maintaining performance objectives in low-power wireless transmission technologies such as those defined in IEEE 802.15.4. These studies tend to rely on theoretical or simulation-based DMS models to predict network performance metrics. However, there is little, if any, work based on empirically verified network performance outcomes using DMS. This paper fills that gap. Our contribution is fourfold: first, using GNU Radio and SDR hardware, we show how to emulate DMS in low-power wireless systems. Second, we measure the impact of varying signal-to-noise levels on throughput and delivery rates for different DMS control strategies. Third, using DMS, we quantify the impact of distance, and finally, we measure the impact of different elevations between sender and receiver on network performance. Our results provide an empirical basis for future work in this area.
Rendezvous is a fundamental building block in distributed cognitive radio networks (CRNs), where users must find a jointly available channel. Research on the rendezvous problem has so far focused on minimizing the time to rendezvous (to find a suitable channel) or on maximizing the degree (the number of channels on which rendezvous can take place). In this paper, we model the rendezvous problem in a more realistic way that acknowledges that available channels may suffer from interference, and that interference may vary among users in different locations over time. In this setting, CRNs benefit from rendezvous methods that find a quiet channel, one that supports high symbol rates and does not suffer much from dropped packets. We propose algorithms that achieve this goal for both the initial rendezvous problem (users share no prior information) and the continuous rendezvous problem (users who have already established a link must vacate the channel and seek another). We propose both deterministic and randomized methods based on mapping the channel set to a larger set in a way that gives preference to quiet channels. This technique allows us to add interference awareness to existing rendezvous algorithms. We analyze the new algorithms and substantiate our analyses through extensive simulations.
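The channel-set expansion idea can be sketched as follows. The proportional-replication rule used here is an illustrative assumption, not the paper's exact construction: each physical channel is replicated in a larger virtual set in proportion to a quality weight, so that any existing rendezvous sequence run over the virtual set visits quiet channels more often.

```python
def expand_channels(channel_quality, total_slots):
    """Map a channel set onto a larger virtual channel set.
    channel_quality: dict mapping channel id -> quality weight
    (higher weight = quieter channel); total_slots: approximate
    size of the virtual set. Every channel keeps at least one slot
    so rendezvous on it remains possible."""
    total_q = sum(channel_quality.values())
    expanded = []
    for ch, q in sorted(channel_quality.items()):
        copies = max(1, round(total_slots * q / total_q))
        expanded.extend([ch] * copies)
    return expanded
```

An unmodified rendezvous algorithm hopping over the indices of `expanded` then inherits interference awareness for free, which is the appeal of the mapping approach.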
Cognitive radio (CR) technology aims to provide real-time sensing and efficient dynamic spectrum access to improve the efficiency of spectrum resource usage. However, none of the existing SDR platforms supports CR applications while maintaining both high performance and programmability. In this paper, we propose CR-GRT, an SDR platform designed for cognitive radio applications. CR-GRT supports real-time sensing, analysis, decision-making, and dynamic adjustment, and it provides interfaces for extensibility. Based on CR-GRT, we implement a comprehensive sensing strategy using both PHY- and MAC-layer information. The evaluation results show that CR-GRT offers advantages in both performance and programmability.
Nowadays, mobile recommender systems running on users' smart devices have become popular. However, most implemented mechanisms require continuous user interaction to provide personalized recommendations, which weakens usability. This paper provides an innovative approach that exploits users' movement data as implicit feedback for deriving recommendations. By means of a real-world museum scenario, a beacon infrastructure for tracking sojourn times is presented. We then show how sojourn times can be integrated into both collaborative filtering and content-based approaches. An exhaustive experimental evaluation shows the suitability of our approach.
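One simple way to turn sojourn times into implicit feedback, sketched here under our own normalization assumption (not necessarily the paper's), is to scale dwell times onto a rating scale before feeding them to a standard collaborative filter:

```python
def sojourn_to_ratings(sojourn_seconds, max_rating=5):
    """Convert per-exhibit sojourn times into implicit ratings.
    sojourn_seconds: dict mapping exhibit id -> total dwell time for
    one visitor. Dwell times are scaled relative to the visitor's
    longest stay, so longer dwell implies higher interest; every
    visited exhibit gets at least rating 1."""
    longest = max(sojourn_seconds.values())
    return {item: max(1, round(max_rating * t / longest))
            for item, t in sojourn_seconds.items()}
```

The resulting user-item rating matrix can then be used unchanged by any collaborative filtering or content-based recommender, which is what makes movement data a drop-in substitute for explicit user feedback.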
A building's density, defined as its number of nodes in a given period, is a significant parameter that affects the performance and evaluation of mobile and smart applications. Consequently, predictions of buildings' temporal density and models of their nodes' spatial distribution have to follow real-world scenarios to provide a realistic evaluation. However, there is a lack of real-world building-level density studies that examine these aspects thoroughly. This work is therefore a data-driven study that investigates the temporal density predictability and spatial density distributions of more than 100 real buildings across ten different categories, over 150 days spanning three semesters. The study covers the temporal modeling and prediction of the buildings' node counts, as well as their spatial distributions within each building. Seasonal predictive models are used to predict hour-by-hour density over a variable number of subsequent periods using training data of different lengths. The models include Seasonal Naive, Holt-Winters' seasonal additive, TBATS, and seasonal ARIMA. The results show that the Seasonal Naive model is often selected as the best predictive model when the training phase covers a shorter period. For example, Seasonal Naive predicted with the least error in 73%, 63%, and 57% of cases in summer, spring, and fall, respectively, when using only one week of data to predict the following five weeks, with a mean normalized error of 25% on average. However, when using five weeks of data to predict the sixth week, the TBATS model predicted with the least error in 60%, 54%, and 43% of cases in fall, spring, and summer, respectively, with a mean absolute error of 19% on average. When investigating the spatial density distributions, the power law, log-logistic, and lognormal distributions are usually selected as the best-fit distributions for 82%, 65%, and 62% of buildings in the summer, spring, and fall, respectively.
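The Seasonal Naive baseline among the models above is simple enough to sketch completely: each future hour is predicted as the value observed at the same hour of the last complete season. The function name and the choice of a weekly season (168 hours) are illustrative.

```python
def seasonal_naive(history, season_len, horizon):
    """Seasonal Naive forecast of hourly density.
    history: list of hourly node counts; season_len: hours per season
    (e.g., 168 for weekly seasonality); horizon: number of future
    hours to predict. Each forecast repeats the corresponding hour
    of the most recent complete season."""
    if len(history) < season_len:
        raise ValueError("need at least one full season of history")
    last_season = history[-season_len:]
    return [last_season[i % season_len] for i in range(horizon)]
```

Its strength with short training windows, as reported above, follows directly from its design: it needs only one complete season of data to produce forecasts of arbitrary horizon.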
In the future, libraries and warehouses will benefit from knowing the spatial location of books and merchandise attached with RFID tags. Existing localization algorithms, however, usually focus on improving the positioning accuracy or the ordering of RFID tags on the same layer. Yet books and merchandise are in reality placed on multiple layers, and the layer of an RFID-tagged object is also an important position indication. To this end, we design PRMS, an RFID-based localization system which uses both the phase and RSSI values of the backscattered signal received by a single antenna to estimate the spatial position of RFID tags. Our basic idea is to obtain initial estimated locations of RFID tags through a basic model which extracts the phase differences between received signals to locate tags. An advanced model based on RF holograms is then proposed to improve the positioning accuracy over the basic model. We further change the traditional deployment of a single antenna to distinguish the features of RFID tags on multiple layers and adopt a machine learning algorithm to determine the layer of tagged objects. The experimental results show that the average accuracies of layer detection and sorting at low tag spacing (2-8 cm) are about 93% and 84%, respectively.
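The phase-based part of such a basic model rests on a standard backscatter relation, shown here as a textbook illustration rather than PRMS's actual estimator: the backscattered signal's phase rotates by 4*pi*d/lambda over the round trip, so a measured phase difference maps to a path-length difference (ambiguous modulo half a wavelength).

```python
import math

def phase_to_distance_diff(phase_diff, wavelength):
    """Convert an RFID backscatter phase difference (radians) to a
    path-length difference (same unit as wavelength). Round-trip
    phase is 4*pi*d / wavelength, hence d = phase * wavelength / (4*pi).
    The result is only determined modulo wavelength/2, which is why
    multiple measurements or holographic methods are needed in practice."""
    return phase_diff * wavelength / (4 * math.pi)
```

At UHF RFID frequencies (wavelength roughly 0.33 m around 915 MHz), the half-wavelength ambiguity is about 16 cm, which motivates combining phase with RSSI and multiple antenna positions as the abstract describes.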
The Message Queueing Telemetry Transport (MQTT) publish/subscribe protocol is the de facto application-layer standard for IoT, M2M, and wireless sensor network applications. This demonstration showcases MQTT+, an advanced version of MQTT which provides an enhanced protocol syntax and enriches the broker with data filtering, processing, and aggregation functionalities. Such features are ideal in applications in which edge devices need to perform processing operations over the data published by multiple clients, where using the original MQTT protocol would result in unacceptably high network bandwidth usage and energy consumption for the edge devices. MQTT+ is implemented starting from an open-source MQTT broker and evaluated in different application scenarios, which are demonstrated live using the Node-RED IoT prototyping framework.