A Tabu Search Algorithm for Cluster Building in Wireless Sensor Networks
The main challenge in wireless sensor network deployment pertains to optimizing energy consumption when collecting data from sensor nodes. This paper proposes a new centralized clustering method for a data collection mechanism in wireless sensor networks, which is based on network energy maps and Quality-of-Service (QoS) requirements. The clustering problem is modeled as hypergraph partitioning and its resolution is based on a tabu search heuristic. Our approach defines moves using largest-size cliques in a feasibility cluster graph. Compared to other methods (a CPLEX-based method, a distributed method, and a simulated annealing-based method), the results show that our tabu search-based approach returns high-quality solutions in terms of cluster cost and execution time. As a result, this approach is suitable for handling network extensibility in a satisfactory manner.
INCREASINGLY, several applications require the acquisition of data from the physical world in a reliable and automatic manner. This necessity implies the emergence of new kinds of networks, which are typically composed of low-capacity devices. Such devices, called sensors, make it possible to capture and measure specific elements from the physical world (e.g., temperature, pressure, humidity). Moreover, they run on small batteries with low energy capacities. Consequently, their power consumption must be optimized in order to ensure an increased lifetime for those devices. During data collection, two mechanisms are used to reduce energy consumption: message aggregation and filtering of redundant data. These mechanisms generally use clustering methods in order to coordinate aggregation and filtering.

Clustering methods belong to one of two categories: distributed and centralized. The centralized approach assumes the existence of a particular node that is cognizant of the information pertaining to the other network nodes. The problem is then modeled as a graph partitioning problem with particular constraints that render it NP-hard. The central node determines clusters by solving this partitioning problem. However, the major drawbacks of this category are the additional costs engendered by communicating the network node information and the time required to solve an optimization problem. In the second category, the distributed method, each node executes a distributed clustering algorithm. The major drawback of this category is that nodes have limited knowledge pertaining to their neighborhood. Hence, clusters are not built in an optimal manner.

Ghiasi et al. propose centralized clustering for sensor networks.
They model this problem as a k-means clustering problem, which is defined as follows: let P be a set of n data points in d-dimensional space R^d and let k be an integer; the problem consists of determining a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. Heinzelman et al. propose a centralized version of Low Energy Adaptive Clustering Hierarchy (LEACH), their data collection protocol, in order to produce better clusters by dispersing cluster head nodes throughout the network. In this protocol, each node sends information regarding its current location and energy level to the sink node, which computes the mean node energy level; nodes whose energy level is inferior to this average cannot become cluster heads for the current round. Considering the remaining nodes as possible cluster heads, the sink node finds optimal clusters using the simulated annealing algorithm. This algorithm attempts to minimize the amount of energy required for noncluster head nodes to transmit their data to the cluster head, by minimizing the sum of squared distances between all noncluster head nodes and the closest cluster head.

The energy map, the component that holds information concerning the remaining energy available in all network areas, can be used to prolong the network's lifetime. In their probabilistic model for energy consumption, Heinzelman et al. claim that each sensor node can be modeled by a Markov chain. They provide an equation that can be used by each node to calculate its energy dissipation rate, E_T, for the next T time steps. Together with the remaining energy, the value E_T can be sent to the sink node for energy map building purposes.

This paper proposes a new centralized clustering mechanism equipped with energy maps and constrained by Quality-of-Service (QoS) requirements. Such a clustering mechanism is used to collect data in sensor networks.
The first original aspect of this investigation consists of adding these constraints to the clustering mechanism, which helps the data collection algorithm reduce energy consumption and provide applications with the required information without burdening them with unnecessary data. Centralized clustering is modeled as hypergraph partitioning. The novel method proposes the use of a tabu search heuristic to solve this problem. The existing centralized clustering methods cannot be used to solve this issue because our model of the problem assumes that the numbers of clusters and cluster heads are unknown before clusters are created, which constitutes another major original facet of this paper.

The remainder of this paper is organized as follows: Section 2 summarizes the data collection mechanism. Section 3 outlines the problem formulation. Section 4 describes the tabu search adaptation. Computational experiments and results are reported in Section 5. Section 6 concludes this paper and delineates some of the remaining challenges.
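For readers unfamiliar with the metaheuristic, a generic tabu search loop can be sketched as follows. This is an illustrative skeleton only, under the assumption of a generic solution space with `neighbors` and `cost` callbacks; it is not the clique-based adaptation described in Section 4.

```python
def tabu_search(initial, neighbors, cost, tenure=7, iters=50):
    """Generic tabu search sketch (illustrative; not the paper's adaptation).

    initial:   starting solution
    neighbors: function mapping a solution to its candidate moves
    cost:      objective to minimize
    tenure:    length of the short-term (tabu) memory
    """
    current = best = initial
    tabu = [initial]  # recently visited solutions, forbidden for `tenure` steps
    for _ in range(iters):
        # keep only non-tabu neighbors; this lets the search escape local optima
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best admissible move, even if worse
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)  # oldest move becomes admissible again
        if cost(current) < cost(best):
            best = current
    return best
```

For example, minimizing x^2 over the integers with moves x -> x-1 and x -> x+1 from a start of 5 reaches the optimum 0 even though every later move degrades the current solution.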
2 DATA COLLECTION MECHANISM
Generally, sensor networks contain a large quantity of nodes that collect measurements before sending them to the applications. If all nodes forwarded their measurements, the volume of data received by the applications would increase exponentially, rendering data processing a tedious task. A sensor system should thus contain mechanisms that allow the applications to express their requirements in terms of the required quality of data. Data aggregation and data filtering are two methods that reduce the quantity of data received by applications. The aim of those two methods is not only to minimize energy consumption by decreasing the number of messages exchanged in the network but also to provide the applications with the needed data without needlessly overloading them with exorbitant quantities of messages.

The data aggregation mechanism allows for the gathering of several measures into one record whose size is less than that of the initial records. However, the result semantics must not contradict the initial record semantics. Moreover, it must not lose the meanings of the initial records. The data filtering mechanism makes it possible to ignore measurements considered redundant or irrelevant to the application needs. A sensor system provides the applications with the means to express the criteria used to determine measurement relevancy; e.g., an application could be concerned with temperatures that are 1) lower than a given value and 2) recorded within a delimited zone. The sensor system filters the network messages and forwards only those that respect the filter conditions.

Applications that use sensor networks are generally concerned with the node measurements within a certain period of time. Hence, the most important key indicators in sensor networks are the quality of the measurements and the network lifetime.
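As an illustration, the temperature-and-zone filter described above could be expressed as a simple predicate. The field names and the rectangular zone representation are assumptions made for this sketch, not notation from the paper.

```python
def passes_filter(measurement, max_temp, zone):
    """Return True if a measurement should be forwarded to the application:
    its temperature is below `max_temp` and it was recorded inside `zone`.

    measurement: dict with assumed keys "pos" (x, y) and "temp"
    zone:        assumed rectangle (x_min, y_min, x_max, y_max)
    """
    (x, y), temp = measurement["pos"], measurement["temp"]
    x_min, y_min, x_max, y_max = zone
    in_zone = x_min <= x <= x_max and y_min <= y <= y_max
    return temp < max_temp and in_zone
```

A node (or cluster head) applying such a predicate before transmitting suppresses irrelevant messages at the source, which is where the energy savings come from.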
An application designed to record the mean temperature in zones where the sensors are deployed could be associated with a set of requirements in terms of measurement frequency (e.g., the sensor system must record a measurement every 15 minutes), in terms of a measurement discrepancy threshold (e.g., the sensor system must ignore data whose result differs by less than 10 percent from the previous value), and in terms of the sensor lifetime (e.g., measurements must be provided for one year).

In previous work, we propose a novel data collection approach for sensor networks that uses energy maps and QoS requirements to reduce power consumption while increasing network coverage. The mechanism comprises two phases. During the first phase, the applications specify their QoS requirements regarding the data they require. They send their requests to a particular node S, called the collector node, which receives the application query and obtains results from other nodes before returning them to the applications. The collector node builds the clusters optimally, using the QoS requirements and the energy map information. During the second phase, the cluster heads must provide the collector node with combined measurements for each period. The cluster head is in charge of various activities: coordinating the data collection within its cluster, filtering redundant measurements, computing aggregate functions, and sending results to the collector node.
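For illustration only, the QoS requirements an application sends to the collector node in the first phase might be packaged as follows. All field names are assumptions for this sketch, and the redundancy test interprets the discrepancy threshold as a relative difference from the previous value.

```python
from dataclasses import dataclass

@dataclass
class QoSRequirements:
    """Assumed container for the application's QoS requirements."""
    frequency_s: int      # one measurement every `frequency_s` seconds (e.g., 900)
    discrepancy: float    # relative threshold below which a new value is ignored
    lifetime_days: int    # required duration of the query (e.g., 365)

def is_redundant(prev, new, req):
    """A new reading is redundant when it differs from the previous one
    by less than the discrepancy threshold (assumed interpretation)."""
    return abs(new - prev) < req.discrepancy * abs(prev)
```

With a 10 percent threshold, a reading of 20.5 following 20.0 would be filtered out, while a reading of 25.0 would be forwarded.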
3 PROBLEM FORMULATION
The considered network contains a set V of m stationary nodes whose locations are known. The communication model can be described as multihop, which means that certain nodes cannot send measurements directly to the collector node: they must rely on their neighbors' service. An application can specify the following QoS requirements:

1. Data collection frequency, fq. The network provides results to the application every time the duration fq expires.
2. A measurement uncertainty threshold, mut. If the difference between two simultaneous measurements from two different nodes in the same zone (fourth requirement) is inferior to mut, then one of them is considered redundant.
3. A query duration, T. The network is required to run the query for a total time equal to T.
4. A zone size, step. The step value determines the zone length. Within a single zone, measurements are considered redundant. If an application requires more precision, it can decrease the step value or even omit this value.

The goal of the clustering algorithm is to 1) split the network nodes into a set of clusters Gi that satisfies the application requirements, 2) reduce energy consumption, and 3) prolong the network lifetime. Clusters are built according to the following criteria:

- Maximize network coverage using the energy map;
- Gather nodes likely to hold redundant measurements;
- Gather nodes located within the same zone delimited by the application.

Based on those criteria, the cluster building problem (CBP) considered in the remainder of this paper consists of determining the set Gi that fulfills