Saturday July 22, 2017

Hazardous plumes, including radiation, biological, and chemical plumes arising from natural disasters, industrial accidents, and terrorist attacks, pose great threats to people and to the environment. The capability to detect, identify, and track these plumes, or to detect pollution by identifying leaks in petroleum pipelines, is important for developing systems and protocols that protect people and the environment. Sensor-network-based systems provide a promising solution to this problem. In this project, we have designed, implemented, and tested algorithms and software that provide core technology for detection and tracking applications using advanced sensor networks.

Contribution 0:

Oblivious network design is an important framework for solving many sensor network problems in real-time applications. This line of work, which grew out of this grant, focuses on problems arising in modern sensor networks for efficient computation as well as communication. The oblivious network routing problem has many important applications in the context of the detection and tracking algorithms targeted by this proposal. The smart grid is one application that we used to test this core technology, which sits at the intersection of combinatorial optimization and theoretical computer science. Several papers were published in this area during 2013.

Contribution 0.1:

We also explored using oblivious network design for other interdisciplinary problems, for example in the design of easily accessible prototype desktop cyber platforms that serve as multi-disciplinary decision-support tools for achieving sustainability under extreme climatic conditions in complex coastal-urban environments. This may help us detect hazardous conditions in such environments.

Contribution 0.2:

The sensor network design in the context of oblivious networks for detecting hazardous conditions has also been applied to the design of a secure real-time load management system (RTLM) that maximizes penetration of distributed renewable resources (DRR), balances supply and demand, and prevents malicious cyber-attacks on the smart grid infrastructure. This study employs a combination of computational approaches, mathematical methods, security protocols, load-balancing algorithms, and GPU and multicore processing on large data sets, all in the context of efficient algorithms for detecting the hazardous plumes described above.

Contribution 1:

Unsupervised data collected from sensor applications are transmitted in short streams, and with the advent of data-mining and machine-learning algorithms, stream learning has proved its value in improved preprocessing, providing measurements with a context-based stream Quality of Data (QoD) metric. Stream learning is a fast-growing area, primarily due to the increasing availability of location-aware functionality in mobile applications, which generate large-scale machine-generated data every day from wireless, mobile GPS, and sensor sources. Unlike databases, these categories of information in streams are real-time, constantly changing, and probabilistic in nature. In a 2009 survey, Gartner projected that data volume would grow by over 650%, which may lead to an information overload for existing computation and bandwidth standards. Our key finding is that parameter estimates from our Similarity Patterns Of Trajectories in Label-less Sensor Streams (SPOTLESS) framework have better precision because they use lower bounds on the QoD and Quality of Information (QoI) thresholds when correcting spatial measurement errors. These findings are used to properly design context-dependent multi-source broadcast in wireless sensor networks. This work was done jointly with LSU under the title "Multi-Source Broadcast in Wireless Networks."

Contribution 2:

Recent technological advances, in particular mobile devices and online social networks, have paved the way toward a smarter management of resources in today's cities. As population density grows and natural disasters and man-made incidents (e.g., hurricanes, earthquakes, riots) impact an increasing number of people, maintaining the safety of citizens, an essential smart city component, becomes a problem of paramount significance and difficulty. In this work we aim to enable the vision of smart and safe cities by exploiting mobile and social networking technologies to securely and privately extract, model, and embed real-time public safety information into quotidian user experiences. We first proposed novel approaches to defining location- and user-based safety metrics.

Contribution 3:

We explored the dynamic aspects of cooperative routing in Ad-Hoc Wireless Sensor Networks.

  • ACTM: Anonymity Cluster based Trust Management in Wireless Sensor Networks
  • Dynamic Cooperative Routing (DCR) in Wireless Sensor Networks
  • Secure Reputation Update for Target Localization in Wireless Sensor Networks

Contribution 4:

When the cyber and sensor networks are integrated, greater computational power is available (through the cyber network) for solving problems that are difficult for the sensor network alone. We designed a new framework that employs a time-dynamic Markov random field (TD-MRF) for sensor network detection and tracking problems under uncertainty. The framework enables modeling of spatial and temporal correlations in the sensor output. We also developed a new inference algorithm for the TD-MRF model. It uses an iterative EM (expectation-maximization) approach to infer the states of the whole environment over time in a joint fashion. Our experimental results show that detection and tracking improve greatly when a Markov random field is employed and when spatial and temporal correlations are incorporated in the model. Details are in the attached PDF.
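The spatial-correlation idea can be illustrated with a much simpler relative of the TD-MRF model. The sketch below is not the actual TD-MRF or its EM inference: it runs iterated-conditional-modes updates on a binary grid MRF to clean up noisy plume-detection readings, and the grid size, flip rate, and coupling weight are invented for illustration.

```python
import random

random.seed(0)
N = 8                      # grid is N x N
truth = [[1 if 2 <= r <= 5 and 2 <= c <= 5 else 0
          for c in range(N)] for r in range(N)]   # square "plume"
# Noisy sensor readings: flip each true state with 15% probability.
obs = [[t if random.random() > 0.15 else 1 - t
        for t in row] for row in truth]

def neighbors(r, c):
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < N and 0 <= c + dc < N:
            yield r + dr, c + dc

def icm(obs, beta=0.9, data_w=1.0, iters=5):
    """Minimize a simple MRF energy: data term plus spatial smoothness."""
    state = [row[:] for row in obs]
    for _ in range(iters):
        for r in range(N):
            for c in range(N):
                best, best_e = state[r][c], None
                for s in (0, 1):
                    e = data_w * (s != obs[r][c])          # data fidelity
                    e += beta * sum(s != state[nr][nc]     # smoothness
                                    for nr, nc in neighbors(r, c))
                    if best_e is None or e < best_e:
                        best, best_e = s, e
                state[r][c] = best
    return state

denoised = icm(obs)
errs_before = sum(obs[r][c] != truth[r][c] for r in range(N) for c in range(N))
errs_after = sum(denoised[r][c] != truth[r][c] for r in range(N) for c in range(N))
```

Isolated sensor errors are overruled by their neighbors, the same spatial-correlation effect the TD-MRF framework exploits; the temporal dimension and EM parameter estimation are omitted here.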

Contribution 5:

We formulated the complicated maximum-likelihood estimation problem for energy-based multiple-source localization and proposed two new solutions: the AP (alternating projection) and EM (expectation-maximization) algorithms. The EM algorithm can achieve better localization performance than the AP algorithm given an appropriate initial condition. A computational complexity analysis shows that the proposed EM algorithm is much more computationally efficient than the proposed AP method, especially when the number of sources and/or the number of sensors grows large. To further investigate the robustness of these two methods, we also derived the CRLB (Cramer-Rao lower bound) for this energy-based multiple-source localization problem. The CRLB analysis demonstrates that the average root-mean-square (RMS) error of the proposed EM algorithm is much closer to the achievable minimum variance (CRLB) than that of the proposed AP method under various conditions.
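To give a concrete sense of the estimation problem, the sketch below sets up a single-source version with an inverse-square energy decay model and solves it by coarse grid search rather than by AP or EM; the sensor layout, decay model, noise level, and grid resolution are all invented for illustration.

```python
import random

random.seed(1)
# Nine sensors on a 3x3 lattice; a single source of unknown position/power.
sensors = [(x, y) for x in (0.0, 5.0, 10.0) for y in (0.0, 5.0, 10.0)]
true_pos, true_power = (6.0, 4.0), 50.0

def gain(p, s):
    d2 = (p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2
    return 1.0 / max(d2, 1e-6)          # inverse-square energy decay

readings = [true_power * gain(true_pos, s) + random.gauss(0, 0.05)
            for s in sensors]

def ml_grid_search(readings, step=0.5):
    best = None
    for gx in range(21):
        for gy in range(21):
            p = (gx * step, gy * step)
            g = [gain(p, s) for s in sensors]
            # For a fixed position, the ML power estimate is closed-form
            # (least squares): S = sum(y_i g_i) / sum(g_i^2).
            S = sum(y * gi for y, gi in zip(readings, g)) / sum(gi * gi for gi in g)
            resid = sum((y - S * gi) ** 2 for y, gi in zip(readings, g))
            if best is None or resid < best[0]:
                best = (resid, p, S)
    return best[1], best[2]

est_pos, est_power = ml_grid_search(readings)
```

Grid search becomes intractable as the number of sources grows, which is exactly the regime where the AP and EM algorithms pay off.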

Contribution 6:

We made an attempt to find a new Gaussianity measure that depends on very few parameter estimates and that can be very robust for sparse data. Our new measure was derived from the Kullback-Leibler divergence between the Gaussian density and the generalized Gaussian density, together with the skewness. We also presented an application of the Gaussianity measures to weak signal detection. Our computer simulation results showed that our proposed new Gaussianity test, the KGGS (Kullback-Leibler-Divergence Gaussian Generalized-Gaussian Skewness) test, leads to the best performance among all existing tests. The weak signal detection study was a general case, implying that our proposed KGGS test can be used for general signal detection in wireless or sensor communications. More specifically, our KGGS test could be adopted in spectrum sensing for cognitive radio, because most spectrum sensing studies are based on Gaussian noise assumptions. One may thus use our KGGS test to check whether or not the received signal satisfies the Gaussian distribution for spectrum sensing, enabling primary-user detection.
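The sketch below computes only the classical moment ingredients of such a test, sample skewness and excess kurtosis, and is not the KGGS statistic itself; the sample sizes and the comparison distributions are invented for illustration.

```python
import math, random

def moments_test(x):
    """Return (sample skewness, sample excess kurtosis) of a data set."""
    n = len(x)
    m = sum(x) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in x) / n)
    skew = sum(((v - m) / sd) ** 3 for v in x) / n
    kurt = sum(((v - m) / sd) ** 4 for v in x) / n - 3.0  # 0 for Gaussian
    return skew, kurt

random.seed(2)
gauss = [random.gauss(0, 1) for _ in range(5000)]
# Laplace samples: a symmetric, heavy-tailed (non-Gaussian) alternative.
laplace = [random.choice((-1, 1)) * random.expovariate(1.0)
           for _ in range(5000)]

g_skew, g_kurt = moments_test(gauss)
l_skew, l_kurt = moments_test(laplace)
```

Gaussian data yields near-zero skewness and excess kurtosis, while the Laplace samples show pronounced excess kurtosis; the KGGS test sharpens this kind of discrimination via the Gaussian/generalized-Gaussian KL divergence.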

Contribution 7:

Impact of Integrity Attacks on Real-Time Pricing in Smart Grids: The efficiency of smart grids can be improved by passing real-time wholesale electricity prices on to end consumers. However, recent studies show that systems using real-time pricing (RTP) may suffer from instability, in which price oscillations lead to excessive reserve capacity requirements or even physical damage to the power system. We focused on examining the source of this instability. We analyzed basic conditions under which attackers can destabilize the RTP system as a whole when the price signals to a fraction of the consumers are subject to integrity attacks (during network transmission or at vulnerable smart meters). We showed that the system is at risk of being destabilized only if the adversary can advertise price signals with reduced values to smart meters, or provide old prices to more than half of the consumers. We further analyzed the case where the price signals to suppliers are compromised. The new results show that the system can be destabilized only if the adversary can advertise price signals with increased values to suppliers, or provide old prices to more than half of the suppliers. Moreover, we conducted new simulations based on a 4-bus transmission system with locational marginal prices. The simulation results show that integrity attacks can result in significantly increased power flows and power losses on the transmission lines.
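The destabilization mechanism can be illustrated with a toy feedback loop. In the sketch below, the linear demand model, the operator's price-adjustment gain, and the attack (inverting the price signal seen by consumers) are all invented and far simpler than the market model analyzed in the study.

```python
def simulate(attack_scale, steps=60):
    """Run a toy RTP loop; return the largest |price| seen."""
    p = 10.0                       # initial price
    supply = 50.0                  # fixed supply for simplicity
    k = 0.08                       # operator's price-adjustment gain
    peak = 0.0
    for _ in range(steps):
        seen = attack_scale * p    # integrity attack scales the signal
        demand = 100.0 - 5.0 * seen
        p = p + k * (demand - supply)   # raise price when demand > supply
        peak = max(peak, abs(p))
    return peak

honest_peak = simulate(1.0)     # untampered signal: stays at equilibrium
attacked_peak = simulate(-1.0)  # inverted price signal: feedback diverges
```

Inverting the signal turns the stabilizing negative feedback into positive feedback, so the price diverges; the actual analysis derives precise conditions on how much tampering the closed loop can tolerate.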

Contribution 8:

Privacy-aware Supply of Information for Smart Grid: One of the important functions of a smart grid is to allow the system operator real-time monitoring of the whole grid, including the total demand of the customers. This is achieved by installing smart meters at every household, which report readings back to the operator at a much higher frequency than current practice. However, such frequent reporting of energy usage could reveal private information about the occupants. Much work has been done in the literature to provide differential privacy (DP) to smart meter users, but the proposed approaches are not fault-tolerant, achieve only partial fault tolerance, achieve fault tolerance only at high communication overhead, and/or introduce large errors in the reported values. Fault tolerance is important because smart meters, as cheap consumer products, are prone to errors, and their communication channels can also be interfered with. We developed a proactive fault-tolerant privacy-preserving approach to address these issues. Our protocol achieves fault tolerance against general device and communication failures, and does so with much lower bandwidth overhead than competing approaches.
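As background for the DP portion, the sketch below shows the classical Laplace mechanism applied to an aggregate of meter readings. The epsilon, sensitivity bound, and household readings are invented, and the fault-tolerance machinery of our protocol is not shown.

```python
import math, random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via inverse-CDF from one uniform draw.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_sum(readings, epsilon, sensitivity):
    # Adding Laplace noise with scale sensitivity/epsilon to the aggregate
    # gives epsilon-differential privacy for any one reading's contribution
    # (assuming each reading is bounded by the sensitivity).
    return sum(readings) + laplace_noise(sensitivity / epsilon)

random.seed(3)
readings = [0.8, 1.2, 0.5, 2.1, 0.9]      # kWh in one reporting interval
noisy = private_sum(readings, epsilon=1.0, sensitivity=3.0)
```

The operator sees a perturbed aggregate whose expected error is small relative to total demand, while any individual household's contribution is masked.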

Contribution 9:

Optimal Activation and Routing for Trustworthy Sensor Networks: Sensor nodes are inherently unreliable and often deployed in insecure environments. Thus, they may report untrustworthy or inconsistent data. Assessing the trustworthiness of sensor data items allows reliable sensing or monitoring of physical phenomena. A provenance-based trust framework can evaluate the trustworthiness of data items and sensor nodes based on the intuition that two data items with similar values but different provenance (i.e., forwarding paths) can be considered more trustworthy. Forwarding paths of data items generated from redundantly deployed sensors should consist of trustworthy nodes and remain dissimilar. Unfortunately, operating many sensors with dissimilar paths consumes significant energy. We formulated an optimization problem to identify a set of sensor nodes and their corresponding paths toward the base station that achieve a certain trustworthiness threshold while keeping the energy consumption of the network minimal. We proved the NP-hardness of this problem and proposed ERUPT, a simulated annealing solution. Simulation results show that ERUPT achieves high trustworthiness while reducing the expected number of transmissions by at least 38% relative to current approaches.
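The flavor of the ERUPT search can be conveyed with a generic simulated-annealing loop on a toy version of the problem: select sensors to meet a trust threshold at minimal energy. The trust and energy values, penalty weight, move set, and cooling schedule below are all invented and do not reproduce ERUPT itself.

```python
import math, random

random.seed(4)
trust  = [random.uniform(0.1, 1.0) for _ in range(20)]
energy = [random.uniform(1.0, 5.0) for _ in range(20)]
THRESHOLD = 4.0     # required total trust
PENALTY = 100.0     # cost per unit of trust shortfall

def cost(sel):
    t = sum(trust[i] for i in range(20) if sel[i])
    e = sum(energy[i] for i in range(20) if sel[i])
    return e + PENALTY * max(0.0, THRESHOLD - t)

def anneal(iters=5000, temp0=5.0):
    sel = [True] * 20                 # start with every sensor active
    cur = cost(sel)
    best, best_cost = sel[:], cur
    for k in range(iters):
        temp = temp0 * (1 - k / iters) + 1e-3    # linear cooling
        cand = sel[:]
        cand[random.randrange(20)] = not cand[random.randrange(20) * 0 + random.randrange(20)] if False else cand[0]
        i = random.randrange(20)
        cand = sel[:]
        cand[i] = not cand[i]          # flip one sensor in/out
        c = cost(cand)
        # Accept downhill moves always, uphill with Boltzmann probability.
        if c < cur or random.random() < math.exp((cur - c) / temp):
            sel, cur = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
    return best, best_cost

sel, sel_cost = anneal()
```

The accepted-uphill moves let the search escape local minima, which is why annealing suits this NP-hard activation/routing problem.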

Contribution 10:

Unsupervised Residential Power Usage Monitoring: As a key technology of home area networks (HANs) in smart grids, fine-grained power usage monitoring can improve the efficiency of electricity use in several ways. The fine-grained usage information helps customers foster conservation and helps utility providers diagnose issues with homes' energy efficiency. We designed, implemented, and evaluated Supero, a residential power usage monitoring system that (i) employs inexpensive wireless sensors in homes in an ad hoc manner and (ii) can achieve fine-grained power usage monitoring without resorting to supervised in situ training. Supero uses smart meters to measure real-time total household power consumption and inexpensive light and acoustic sensors to detect events of interest involving appliances. It uses multi-sensor fusion to correlate the data collected by the sensors and reduce possible sensing errors. Using advanced unsupervised clustering algorithms, Supero analyzes the signal signatures of different appliances and identifies the events generated by the same appliance. Moreover, Supero autonomously associates the classified events with the appliances through an optimization algorithm. These unsupervised algorithms work together to disaggregate the total household energy consumption into usage by individual appliances. We prototyped Supero using a network of TelosB/Iris motes and a smart meter, and evaluated the prototype in five real homes. Supero estimates energy consumption with errors always below 7.5% and can be quickly deployed by non-professionals with considerable flexibility. This work was published at the IEEE International Conference on Pervasive Computing and Communications (PerCom) 2013, one of the top conferences on pervasive computing, where it was selected as one of three best paper candidates out of 170 submissions.
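As a toy stand-in for the unsupervised clustering step, the sketch below runs a one-dimensional k-means on invented power-step signatures from three hypothetical appliances; real Supero signatures combine light and acoustic features, and the wattages and cluster count here are made up.

```python
import random

random.seed(5)
# Power steps (watts) from three hypothetical appliances, with jitter.
events = [60 + random.gauss(0, 2) for _ in range(15)] + \
         [300 + random.gauss(0, 5) for _ in range(15)] + \
         [1500 + random.gauss(0, 20) for _ in range(15)]
random.shuffle(events)

def kmeans_1d(xs, iters=20):
    """Cluster scalar signatures; returns the sorted cluster centers."""
    srt = sorted(xs)
    # Deterministic spread initialization: min, median, max.
    centers = [srt[0], srt[len(srt) // 2], srt[-1]]
    for _ in range(iters):
        buckets = [[] for _ in centers]
        for x in xs:
            j = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            buckets[j].append(x)
        centers = [sum(b) / len(b) if b else centers[j]
                   for j, b in enumerate(buckets)]
    return sorted(centers)

centers = kmeans_1d(events)
```

Events that land in the same cluster are attributed to the same appliance; Supero then associates clusters with actual appliances via a separate optimization step.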

Contribution 11:

Scalable Load Disaggregation Systems Using Distributed Electrical Signature Detection: We created and demonstrated a new approach that uses distributed sensing of harmonic electrical signatures for load disaggregation, based on the observation that a particular combination of end loads' on/off states generates a unique fingerprint in the current waveform harmonics. Given that observation, we proposed a mathematical framework and related algorithms to estimate the states of individual loads using a hidden Markov model. We then inferred the energy consumption of the loads by combining the on/off state estimates with low-frequency measurements from a power meter installed at the main feeder. We measured the harmonic signature at each sub-branch of the electricity network using a battery-powered mote sensor installed at that sub-branch. The signature is extracted locally by the mote and sent to a base station over a wireless connection. This local processing allows our solution to potentially scale to many sub-branches because it compresses large per-branch high-frequency current waveform data into lightweight harmonic signatures, so that network communication remains economical even for large deployments.
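A minimal version of the per-load state estimation is Viterbi decoding of a two-state (off/on) HMM; the transition probabilities, Gaussian emission means, and observation sequence below are invented for illustration and much simpler than the model in the actual framework.

```python
import math

STATES = (0, 1)                                # 0 = off, 1 = on
log_trans = [[math.log(0.9), math.log(0.1)],   # off -> off/on
             [math.log(0.2), math.log(0.8)]]   # on  -> off/on

def log_emit(state, obs):
    # Gaussian emissions: signature strength ~ N(0,1) off, N(5,1) on.
    mean = 0.0 if state == 0 else 5.0
    return -0.5 * (obs - mean) ** 2            # up to a constant

def viterbi(observations):
    """Most likely off/on state sequence for the observed signatures."""
    v = [log_emit(s, observations[0]) + math.log(0.5) for s in STATES]
    back = []
    for obs in observations[1:]:
        nv, bp = [], []
        for s in STATES:
            prev = max(STATES, key=lambda p: v[p] + log_trans[p][s])
            bp.append(prev)
            nv.append(v[prev] + log_trans[prev][s] + log_emit(s, obs))
        v, back = nv, back + [bp]
    path = [max(STATES, key=lambda s: v[s])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

obs = [0.1, -0.3, 4.8, 5.2, 4.9, 0.2, 0.0]     # invented signature strengths
states = viterbi(obs)
```

The decoded on/off intervals, combined with the feeder meter's low-frequency power readings, give per-load energy estimates.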

Contribution 12:

We evaluated the performance of a proof-of-concept testbed implementation of the scheme with five common household appliances. The performance of our system is comparable to that of other well-known peer systems, with less than 8% error in estimating per-load energy consumption.

Contribution 13:

Scalable Solutions of Markov Games for the Defense of Network Critical Infrastructures: We developed scalable computational techniques to speed up the solution of Markov games for the defense of mission-critical networks. The results allow defenders to best deploy limited defense resources against the dynamic postures of an intelligent attacker.

Contribution 14:

The performance of big-data computing using a GPU may be enhanced by overlapping the time spent transferring data from the host computer to the GPU with the time spent computing on the GPU and the time spent returning results from the GPU to the host. The processing of sensor network data is an example: large amounts of data are gathered, transmitted to the GPU, incrementally processed, and the results sent back to the host CPU. We have developed optimal strategies to process such big-data problems on a GPU. These strategies were tested using the multiple string matching problem, which is also relevant to our sensor network project, as one aspect of processing sensor data is detecting known patterns (strings) within the data.
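The overlap idea can be sketched in ordinary Python using threads and a bounded queue as stand-ins for CUDA streams and asynchronous copies; the chunking and the toy per-chunk computation are invented, and on a real GPU the same producer/consumer structure maps to stream-based double buffering.

```python
import queue, threading

def pipeline(chunks):
    """Overlap 'transfer' and 'compute' stages via a bounded queue."""
    to_compute, results = queue.Queue(maxsize=2), queue.Queue()

    def transfer():                       # stands in for host->GPU copies
        for chunk in chunks:
            to_compute.put(chunk)         # blocks when the buffer is full
        to_compute.put(None)              # sentinel: no more data

    def compute():                        # stands in for the GPU kernel
        while True:
            chunk = to_compute.get()
            if chunk is None:
                results.put(None)
                break
            results.put(sum(chunk))       # toy per-chunk computation

    threads = [threading.Thread(target=transfer),
               threading.Thread(target=compute)]
    for t in threads:
        t.start()
    out = []
    while True:                           # stands in for GPU->host copies
        r = results.get()
        if r is None:
            break
        out.append(r)
    for t in threads:
        t.join()
    return out

data = [list(range(i * 100, (i + 1) * 100)) for i in range(4)]
partial_sums = pipeline(data)
```

While one chunk is being "computed," the next is already being "transferred," so neither stage idles; the optimal strategies we developed decide chunk sizes and buffer counts for this kind of pipeline.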

Contribution 15:

Fundamental algorithms such as those for sorting and matrix multiplication are widely applicable and we have developed efficient GPU and multicore algorithms for these. These algorithms run much faster than textbook single-core algorithms on the host computer. For example, our fastest GPU matrix multiply algorithm is 1000X as fast as the textbook matrix multiply algorithm.

Contribution 16:

The reconstruction of images from sensor data is also an important aspect of an effective overall sensor network system. We have developed efficient image reconstruction algorithms for GPUs using the backprojection method.
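The accumulate-along-rays structure of backprojection, which is what the GPU implementation parallelizes, can be shown on a tiny grid with two orthogonal projection angles; real reconstructions use many angles and filtered backprojection, and this miniature setup is invented for illustration.

```python
N = 8
image = [[0.0] * N for _ in range(N)]
image[2][5] = 10.0                     # single bright spot at row 2, col 5

# Forward projections: sum the image along horizontal and vertical rays.
row_proj = [sum(image[r]) for r in range(N)]                       # 0 deg
col_proj = [sum(image[r][c] for r in range(N)) for c in range(N)]  # 90 deg

# Backprojection: smear each projection value back along its ray and sum.
recon = [[row_proj[r] + col_proj[c] for c in range(N)] for r in range(N)]

# The reconstruction peaks where the rays through the bright spot cross.
peak = max((recon[r][c], r, c) for r in range(N) for c in range(N))
```

Each reconstructed pixel is an independent sum over projections, which is why backprojection maps so naturally onto GPU threads.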