Internet of Things (IoT) devices in clustered wireless networks can be compromised by an adversary who compromises the gateway with which they are associated. Such an adversary can degrade the network's performance by deliberately dropping the packets transmitted by the IoT devices, effectively mimicking a bad radio channel. The affected IoT device then has to retransmit the packet, which drains its battery at a faster rate. To detect such an attack, we propose a centralized detection system in this paper. It uses the uplink packet drop probability of the IoT devices to monitor the behavior of the gateway with which they are associated. The proposed detection rule is given by the generalized likelihood ratio test, where the attack probabilities are estimated using maximum likelihood estimation. The results presented show the effectiveness of the proposed detection mechanism and demonstrate the impact of the choice of system parameters on the detection algorithm.
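As an illustration, the detection rule this abstract describes can be sketched as follows, assuming a simple Bernoulli model of uplink packet drops; the nominal drop probability `p0`, the detection threshold, and the one-sided form of the test are assumptions of this sketch, not the paper's exact formulation.

```python
import math

def glrt_detect(drops, n, p0, threshold):
    """GLRT for a gateway packet-dropping attack (illustrative sketch).

    drops: observed uplink packet drops out of n transmissions
    p0:    nominal drop probability of a benign channel (hypothesis H0)
    Under H1 the combined attack/channel drop probability is unknown,
    so it is replaced by its maximum-likelihood estimate drops/n.
    Returns (attack_detected, log_likelihood_ratio).
    """
    p_hat = drops / n                  # MLE of the observed drop probability
    if p_hat <= p0:                    # no more drops than a benign channel
        return False, 0.0
    # log generalized likelihood ratio for a Bernoulli drop process
    llr = (drops * math.log(p_hat / p0)
           + (n - drops) * math.log((1 - p_hat) / (1 - p0)))
    return llr > threshold, llr
```

The threshold trades false alarms against missed detections, which mirrors the abstract's point that the choice of system parameters shapes the detector's behavior.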
People spend much time on free-weight exercises, which strengthen muscles, connective tissues and tendons. To decrease the risk of injury and reap the benefits of free-weight exercises, a monitoring system that helps users exercise scientifically is necessary. Prior work has exploited wearable sensors or changes in Radio Frequency (RF) signals for activity sensing. However, wearable-sensor methods are intrusive and may bother users, while RF-based methods require training data and fail to extract fine-grained features of each action. Our goal is therefore to design a system that is non-intrusive, privacy-insensitive and training-free. Simply extending existing RF-based 2D tracking methods to 3D space to track free-weight equipment incurs a large accuracy loss. Instead, treating the 3D motion as 1D motion simplifies the problem. We find that most actions in free-weight exercises can be divided into two kinds: circular and vertical motions. This paper therefore proposes an RFID-based system, TTBA, to track free-weight equipment instrumented with passive RFID tags. We implement a low-cost prototype of TTBA to recognize and track the two basic actions. Extensive experiments show that TTBA achieves high tracking accuracy for both motions, even mm-level accuracy for the vertical motion. The practical evaluation also demonstrates TTBA's potential to serve as an assessment system.
Platooning is an emerging Intelligent Transportation Systems (ITS) application that can have a huge impact on road traffic. It allows vehicles to travel in a convoy, very close to each other, with constant speeds and gaps.
It uses Vehicular Ad hoc Networks (VANETs) as the underlying communication framework. These networks have a set of very specific characteristics that make them an easy target for attackers: frequent disconnections, topology changes and over-the-air communication, among others, create easily exploitable vulnerabilities.
The goal of this paper is to present how to secure the communication between the several nodes of a platoon by using an existing security model. After careful research and evaluation, the Vehicular Ad hoc Network Public Key Infrastructure and Attribute Based Encryption with Identity Manager (VPKIbrID) model was chosen. It provides a complete solution with tools for secure message exchange and entity authentication.
The impact of the introduced security model was evaluated using simulations with different platoon sizes and VPKIbrID modes. The platoon operations are carried out using the Platoon Management Protocol (PMP).
The result analysis verified that, when using a VPKIbrID mode that allows multicast/broadcast communications, the vehicles sent far fewer messages than with the mode that only supports unicast. The performance results indicate that the vehicles should use VPKIbrID Public Key Infrastructure (VPKIbrID-PKI) for maneuvers and VPKIbrID Attribute Based Encryption (VPKIbrID-ABE) to exchange platoon beacons or any message sent to all platoon members. Moreover, the messages broadcast in a platoon group will always have the same targets, with the same attributes, making them a perfect use case for cached keys.
Recently, the Vehicular Network (VN) has gained a lot of attention from researchers around the world. By allowing wireless communication, VNs enable information exchange among vehicles, which in turn makes drivers more aware of their surrounding road conditions. Accordingly, road safety is improved. However, due to the high speeds and frequently changing directions of vehicles, the network topology of VNs is transient in nature. Hence, achieving efficient data dissemination/content delivery is a critical issue in the VN environment. In this article, we introduce a novel passive roadside unit (RSU) detection-based proactive (PRDP) handover scenario. Consequently, the overhead of the handover process can be reduced, and the probability of successful connection establishment can be improved. More precisely, by incorporating the extended Kalman filter (EKF), the PRDP handover protocol is designed to improve the energy efficiency of the handover procedure in the VN environment. We carry out intensive simulations to evaluate the performance of the proposed energy-efficient proactive handover protocol.
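The Kalman-filter-based prediction underlying such a proactive handover can be illustrated with a minimal sketch. Because a straight road segment reduces to one dimension, a plain linear filter with a constant-velocity model is used here in place of the paper's EKF; the noise constants `q` and `r`, and the `time_to_exit` trigger, are assumptions of this sketch.

```python
def kf_step(x, v, P, z, dt=1.0, q=0.1, r=4.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    State: position x and speed v; P is a 2x2 covariance (list of lists);
    z is a noisy position measurement reported along the road."""
    # predict: advance the state and covariance one time step
    x, v = x + v * dt, v
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # update: fuse the position measurement z
    s = P[0][0] + r                      # innovation variance
    k0, k1 = P[0][0] / s, P[1][0] / s    # Kalman gains
    y = z - x                            # innovation
    x, v = x + k0 * y, v + k1 * y
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v, P

def time_to_exit(x, v, cell_edge):
    """Predicted time until the vehicle crosses the current RSU's coverage
    edge; a proactive handover would be scheduled before this deadline."""
    return float('inf') if v <= 0 else (cell_edge - x) / v
```

A vehicle's predicted exit time lets the next RSU be contacted before the link actually breaks, which is the source of the handover-overhead reduction claimed in the abstract.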
The work at hand proposes a set of modeling recipes to address the aggressive development time window of current and future wireless communications in cyber-physical systems. These systems pose a significant challenge in meeting time-to-market due to ever more challenging requirements such as ultra-low latency for mission-critical applications, extremely high throughput demanding massive communication and computation resources working concurrently, and, at the same time, low power consumption and small chip area for field deployment.
This paper presents three actor-oriented design patterns for a systematic creation of implementation-aware performance models of complex real-time wireless communication systems that can be applied in existing system level frameworks.
The main benefits of this modeling approach are: 1) time-semantic model correctness, where the effects of a chosen hardware platform can be taken into account; 2) behavioral modeling completeness by construction; and 3) reduced time-to-market through reduced modeling effort, improved maintainability and testability. Furthermore, by adhering to this modeling paradigm, it is possible to easily integrate the following features for the improvement of functional safety: a) system timing diagnostics, b) appropriate handling of timing violations, and c) a simulation-based schedulability analysis.
To demonstrate the aforementioned benefits, a model of a real-world pre-5G baseband processor for V2X communications is created in Intel CoFluent, where our claims are confirmed when assessing real-time deadline compliance of possible HW implementations.
Many applications running over low-power and lossy wireless networks and wireless sensor networks (WSNs) rely heavily on a number of all-to-all communication primitives for services such as data aggregation, voting and consensus. Starting with the Chaos system, synchronous transmission-based broadcasting gossip protocols are now recognized as a technique for enabling efficient all-to-all communications in WSNs. However, despite their effectiveness, there has been relatively little analysis of this class of synchronous broadcasting gossip protocols (SBGPs). In this paper, we address this void by providing a basic theoretical framework for analyzing SBGPs. Based on our derived theoretical results and previous experimental measurements, we show that the key to better performance is to increase the network connectivity as much as possible while limiting the number of concurrent transmitters. As a proof of concept, we propose a multi-radio SBGP approach to achieve this purpose. We compare four multi-radio SBGP schemes with a single-radio SBGP through simulation, and the results show that convergence latency can be reduced by up to 42% by utilizing multiple radios.
Orthogonal Frequency Division Multiplexing (OFDM) is a common modulation technique used in many modern wireless communication systems and standards due to its excellent spectral efficiency and immunity to multipath interference in fading channels. In this paper, a Frequency Hopping Orthogonal Frequency Division Multiplexing (FH-OFDM) system is proposed to enhance the performance of conventional OFDM systems under multiuser interference. Simulation results show that the Bit Error Rate (BER) performance of the proposed FH-OFDM system is superior to that of the conventional OFDM system under conditions of multiuser interference and an Additive White Gaussian Noise (AWGN) channel.
The Network Simulator version 2, also known as ns-2, is a widely used platform for network and protocol performance evaluation. Over the years it has benefited from numerous studies improving its simulation fidelity. Nevertheless, this study discovered that ns-2's TCP simulation accuracy can be impaired substantially in cases where the first-hop link is the bottleneck. This is common in many applications where the client host uploads data to Internet servers, as the uplink (e.g., in wireless and mobile networks) may have far lower bandwidth than the Internet core. This work investigated this performance anomaly by dissecting and comparing ns-2's implementation against the Linux implementation, and by developing extensions to ns-2 that resolve the anomaly, as well as five additional updates that bring its implementation in line with recent Linux implementations. Extensive verification against experiments conducted in a physical testbed confirmed the accuracy of the extended ns-2, offering a renewed and accurate simulator for mobile and wireless networking.
This paper introduces a service slicing strategy for managing Quality of Service (QoS) in LTE-based cellular networks by managing resource blocks in the uplink direction based on resource pooling. An algorithm is devised to optimize and allocate uplink resource blocks based on long-term transmission history, channel conditions, and the long-term fair share of resources among different service slices (classes). The proposed service slicing mechanism can flexibly allocate network resources between different service slices. It offers an ultra-reliable low-latency service suitable for uRLLC applications, and low- to medium-latency services suitable for extreme mobile broadband (xMBB) and massive Internet of Things (mIoT) use cases in future 5G networks. One important merit of the proposed algorithm is that its performance does not vary with the Transmission Time Interval (TTI). This enables network designers to choose different TTI values to achieve other design goals (such as improved power efficiency or capacity gain) without affecting QoS.
Gossip-based packet forwarding is used in unstructured networks to reduce traffic overhead in dense networks and to minimize early gossip termination in sparse networks. Unlike in flooding, where packets are forwarded to all neighbors, in gossip-based protocols packets are forwarded with some probability p < 1 to reduce redundancy. However, this value has to be carefully tuned: if too small, early gossip termination is likely to occur; if too large, flooding storms can take place, as with the flooding protocol. In this work, we propose to use a forwarding probability based on local topology indicators, such as the effective node degree of the forwarding node, so that the choice of probability takes the local topology into account. In a context where each node can have a different forwarding probability, another way of setting its value efficiently is to further tune it for each message, based on the estimated level of completion of the corresponding communication task; to this purpose we propose a simple formula based on the message's hop count. We validate these approaches by simulation using ns-2 in sparse and dense networks and show that they improve performance in terms of traffic overhead and average end-to-end delay. In terms of packet delivery ratio, the proposed approach yields results comparable to those of the standard protocol AODV.
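One way the two local indicators could be combined is sketched below. The abstract only states that the probability depends on the effective node degree and the message hop count; the specific formulas, constants `k` and `h_ref`, and the clamping bounds are assumptions of this sketch, not the paper's formula.

```python
import random

def forward_prob(degree, hop_count, p_min=0.3, p_max=1.0, k=4.0, h_ref=8):
    """Illustrative per-message forwarding probability for gossip routing.

    Dense neighborhoods (high degree) get a lower probability to avoid
    flooding storms; young messages (low hop count) get a boost to avoid
    early gossip termination.
    """
    # degree-based term, clamped to [p_min, p_max]
    p_topo = min(p_max, max(p_min, k / max(degree, 1)))
    # hop-count-based boost that decays to zero after h_ref hops
    boost = max(0.0, 1.0 - hop_count / h_ref)
    return min(p_max, p_topo + (p_max - p_topo) * boost)

def should_forward(degree, hop_count, rng=random):
    """Bernoulli forwarding decision for one received packet."""
    return rng.random() < forward_prob(degree, hop_count)
```

A node with few neighbors forwards almost surely, while a well-connected node relies on its neighbors' redundancy; the hop-count boost keeps the gossip alive near the source where failure to forward would kill the whole dissemination.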
In the last few years, the Message Queueing Telemetry Transport (MQTT) publish/subscribe protocol has emerged as the de facto standard communication protocol for IoT, M2M and wireless sensor network applications. Such popularity is mainly due to the extreme simplicity of the protocol at the client side, appropriate for low-cost and resource-constrained edge devices. Other attractive features include a very low protocol overhead, ideal for limited-bandwidth scenarios, and the support of different Quality of Service (QoS) levels, among many others. However, when an edge device is interested in performing processing operations over the data published by multiple clients, the use of MQTT may result in high network bandwidth usage and high energy consumption for the end devices, which is unacceptable in resource-constrained scenarios. To overcome these issues, we propose MQTT+, which provides an enhanced protocol syntax and enriches the pub/sub broker with data filtering, processing and aggregation functionalities. MQTT+ is implemented starting from an open-source MQTT broker and evaluated in different application scenarios.
Data collection in the Internet of Things is greatly impaired by the redundant or unwanted readings reported by sensors deployed in urban areas. The service provider/application is typically unaware of an object's context while collecting its readings. This paper proposes CEEPS4IoT, a publish-subscribe context-aware system for data collection that takes into account the context of neighboring sensors while collecting data. However, rational sensors do not cooperate, since sharing their readings is an energy-costly operation and incurs additional communication overhead. To address this, we present a dynamic coalition game in which sensors collaborate and share their readings in an energy-efficient way and in return receive a reward for cooperating. We derive a stable utility for a sensor proportional to the amount of data it shares while compensating for its energy costs. Results from evaluating CEEPS4IoT in networks of up to 300 nodes suggest that it is a scalable and energy-efficient context-aware pub/sub system, as it conserves around 50% energy compared to an existing pub/sub system.
The Internet of Things (IoT) is envisioned as the interconnection of the Internet with sensing and actuating devices. IoT systems are usually designed to collect massive amounts of data from multiple and possibly conflicting sources. Nevertheless, data must be refined before being stored in a repository, so that information can be correctly extracted for further use. Knowledge fusion is an important technique for identifying and eliminating erroneous data from compromised sources, as well as any mistakes that might have occurred during the extraction process. We propose a new multisensor data fusion algorithm for IoT that supports the knowledge extraction needed to adapt knowledge graphs. This algorithm, named Athena, enhances accuracy when compared to traditional multisensor data fusion techniques. We also discuss the role of reinforcement learning in integration on a multi-application WSAN.
Wireless networks are present everywhere, but their management can be tricky since their coverage may contain holes even if the network is fully connected. In this paper we propose an algorithm that builds a communication tree between nodes of a wireless network with the guarantee that there is no coverage hole in the tree. We use simplicial homology to mathematically compute the coverage, and the principle of Prim's algorithm to build the communication tree. Simulation results are given to study the performance of the algorithm and compare different metrics. In the end, we show that our algorithm can be used to create coverage-hole-free communication groups with a limited number of hops.
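The Prim-style construction can be sketched as follows. The simplicial-homology coverage computation itself is beyond the scope of a short example, so it is represented here by a caller-supplied predicate `keeps_coverage`; that interface, like the rest of the sketch, is an assumption and not the paper's implementation.

```python
import heapq

def coverage_tree(adj, keeps_coverage, root):
    """Grow a communication tree in Prim fashion, always taking the
    cheapest link that leaves the tree, but only if adding its endpoint
    keeps the tree's coverage hole-free.

    adj maps each node to a list of (weight, neighbor) pairs.
    keeps_coverage(tree_nodes, candidate) stands in for the paper's
    simplicial-homology coverage test.
    """
    in_tree = {root}
    tree = []                                  # chosen (u, v) links
    heap = [(w, root, v) for w, v in adj[root]]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)          # cheapest link leaving the tree
        if v in in_tree or not keeps_coverage(in_tree, v):
            continue                           # drop this edge, keep growing
        in_tree.add(v)
        tree.append((u, v))
        for w2, n in adj[v]:                   # expose v's outgoing links
            if n not in in_tree:
                heapq.heappush(heap, (w2, v, n))
    return tree
```

A node rejected through one edge may still join later through another, since its other incident edges are pushed (and the coverage test re-run) as its neighbors enter the tree.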
In Wireless Sensor Networks (WSNs), each node typically transmits several control and data packets to the sink in a contention fashion. In this work, we mathematically analyze and study three unscheduled transmission schemes for control packets in a cluster-based architecture, named the Fixed Scheme (FS), the Adaptive by Estimation Scheme (AES) and the Adaptive by Gamma Scheme (AGS), in order to offer QoS guarantees in terms of system lifetime (related to energy consumption) and reporting delay (related to cluster formation delay). In the literature, different adaptive schemes have been proposed, and there is also research on selecting an appropriate value of the transmission probability for cluster formation. However, this research has largely overlooked the minimum and maximum values of the transmission probability that yield the best performance. Based on the numerical results, we show that these threshold values are just as important in the system design as the actual value of the transmission probability in the adaptive schemes (AES and AGS) for achieving QoS guarantees.
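To make the role of the thresholds concrete, here is a minimal sketch of an adaptive transmission probability with min/max clamping. The 1/N contention rule is a standard choice assumed for illustration only; it is not necessarily the AES or AGS formula.

```python
def clamped_tx_prob(estimated_contenders, p_min, p_max):
    """Adaptive transmission probability with threshold clamping.

    estimated_contenders: estimated number of nodes contending in the
    current cluster-formation round (the estimate an adaptive scheme
    would maintain). The classical 1/N rule is clamped to [p_min, p_max],
    which is exactly the pair of values whose importance the abstract
    highlights.
    """
    p = 1.0 / max(estimated_contenders, 1)   # assumed 1/N contention rule
    return min(p_max, max(p_min, p))
```

With many contenders the clamp at `p_min` prevents the probability from collapsing (which would stretch cluster formation delay), while with few contenders the clamp at `p_max` bounds collision-driven energy waste; both effects shift the lifetime/delay trade-off regardless of how the estimate itself is produced.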
The Time-Slotted Channel Hopping (TSCH) mode, defined by the IEEE 802.15.4e protocol, aims to reduce the effects of narrowband interference and multipath fading on some channels through frequency hopping. To work satisfactorily, this method must be based on an evaluation of the quality of the channels over which packets will be transmitted, to avoid packet losses. In addition to this estimation, it is necessary to manage channel blacklists, which prevent the sensors from hopping to bad-quality channels. Blacklists can be applied locally or globally, and this paper evaluates the use of a local blacklist through simulation of a TSCH network in a simulated harsh industrial environment. This work evaluates two approaches, both of which use a protocol we developed based on TSCH, called Adaptive Blacklist TSCH (AB-TSCH), that considers beacon packets and includes link quality estimation with blacklists. The first approach uses the protocol to compare a simple version of TSCH to configurations with different blacklist sizes in a star topology. In this approach, it is possible to analyze the channel adaptation method that occurs when the blacklist has 15 channels. The second approach uses the protocol to evaluate blacklists in a tree topology and discusses the inherent problems of this topology. The results show that, when the estimation is performed continuously, a larger blacklist leads to increased performance in the star topology. In the tree topology, due to simultaneous transmissions among some nodes, smaller blacklists showed better performance.
Due to limited battery capacity, energy is the most crucial constraint on improving the performance of widely adopted Wireless Sensor Networks (WSNs). Hence, conserving energy and improving energy efficiency are important in designing a sustainable WSN. In this article, by taking advantage of emerging energy harvesting techniques, we introduce a novel energy-efficient hierarchical two-tier (HTT) energy harvesting-aided WSN deployment scenario. In our design, two types of nodes are adopted: one is the regular battery-powered sensor node (RSN), and the other is the energy harvesting-aided data relaying node (EHN). The objective is to use only RSNs to monitor the field of interest (FoI), while EHNs focus on collecting the sensed data from RSNs and forwarding the gathered data to the data sink. The minimum number of EHNs is deployed based on a newly designed probability density function to minimize the energy consumption of RSNs. This, in turn, extends the lifetime of the deployed WSN. The simulation results indicate that the proposed scheme outperforms some well-known techniques in terms of network lifetime, while enhancing the total throughput.
The paper describes the management and control (M&C) functions of various network nodes in an end-to-end rate-adaptive video transport system. Mobile user devices download video clips by sharing the underlying network path from an ingress node. At the core software level, M&C functions realize the well-known AIMD (additive increase, multiplicative decrease) video rate control algorithm to handle congestion along the path. AIMD is exercised on the aggregated data flows at a source ingress node based on the 'loss reports' signaled from the receiver egress node. Our aggregated AIMD-based control reduces the signaling overhead relative to existing approaches that anchor an AIMD instance on each user device itself. This offers scalability while improving user-experienced QoS, such as low jitter in transfer rates and isolation against device faults. Offloading the aggregated AIMD-based control to in-network overlay nodes also allows a reduction in overall bandwidth usage. The software handling of 'last-mile' issues in the path between user devices and egress nodes (such as greedy users and access-network channel sharing) is discussed in the context of fine-granular video encoders in the devices. The paper also shows a virtualization of our M&C functions (as VNF modules) for deployment in large-scale video distribution networks --- such as YouTube.
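The aggregated AIMD control loop at the ingress node can be sketched as follows; the class name, the rate units, and all constants (`alpha`, `beta`, `max_rate`) are illustrative assumptions, not the paper's parameters.

```python
class AggregatedAIMD:
    """AIMD rate controller anchored at the ingress node for an
    aggregate of video flows (a sketch of the scheme in the abstract).

    Each 'loss report' signaled back by the egress node triggers a
    multiplicative cut of the aggregate rate; in its absence the rate
    grows additively, probing for spare path capacity.
    """

    def __init__(self, rate=1.0, alpha=0.25, beta=0.5, max_rate=10.0):
        self.rate = rate          # aggregate sending rate (e.g. Mb/s)
        self.alpha = alpha        # additive increase step per report interval
        self.beta = beta          # multiplicative decrease factor on loss
        self.max_rate = max_rate  # cap from the encoder/path capacity

    def on_report(self, loss):
        """Apply one egress loss report; returns the new aggregate rate."""
        if loss:
            self.rate *= self.beta                              # back off
        else:
            self.rate = min(self.max_rate, self.rate + self.alpha)
        return self.rate
```

Because a single instance governs the whole aggregate, the egress node sends one report stream per path rather than one per device, which is where the signaling-overhead reduction claimed in the abstract comes from.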
The ability to transmit high volumes of data over long distances makes WiFi mesh networks an ideal transmission solution for remote video surveillance. Instead of independently manipulating node deployment, channel and interface assignment, and routing to improve network performance, we propose a joint network design using a multi-objective genetic algorithm that takes their interplay into account. Moreover, we propose, for the first time, a performance evaluation method based on the transmission capability of WiFi mesh networks. The good agreement of our multiple optimized solutions with extensive NS-3 simulation results demonstrates the effectiveness of our design.