How Dense Should a Sensor Network Be for Detection with Correlated Observations?
Academic Article

Overview
abstract

A detection problem in sensor networks is considered, where the sensor nodes are placed on a line and receive partial information about their environment. The nodes transmit a summary of their observations over a noisy communication channel to a fusion center for the purpose of detection. The observations at the sensors are samples of a spatial stochastic process, which is one of two possible signals corrupted by Gaussian noise. Two cases are considered: one where the signal is deterministic under each hypothesis, and the other where the signal is a correlated Gaussian process under each hypothesis. The nodes are assumed to be subject to a power density constraint, i.e., the power per unit distance is fixed, so that the power available per node decreases as the node density increases. Under these constraints, the central question that is addressed is: how dense should the sensor array be? That is, is it better to use a few high-cost, high-power nodes or many low-cost, low-power nodes? An answer is obtained through an asymptotic analysis in which the number of nodes grows large. In this asymptotic regime, the Gärtner-Ellis theorem and related results from large-deviation theory are used to study the impact of node density on system performance. For the deterministic signal case, it is shown that performance improves monotonically with sensor density. For the stochastic signal case, a finite sensor density is shown to be optimal. © 2006 IEEE.
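The power-density tradeoff in the abstract can be made concrete with a toy sketch (this is not the paper's analysis, and every parameter value below is hypothetical): n nodes each observe a deterministic mean in i.i.d. Gaussian noise, amplify-and-forward with per-node power P/n over a unit-variance channel, and the fusion center sums the received values. The deflection SNR of that sum statistic has a closed form, and it illustrates the deterministic-signal result qualitatively: adding nodes always helps, with diminishing returns as the total power budget binds.

```python
# Toy sketch of the power-density constraint (hypothetical setup, not the
# paper's model): each of n nodes observes y_i = m + v_i with i.i.d. sensor
# noise v_i ~ N(0, sigma2), amplifies it with gain g = sqrt(P/n) so that the
# total transmit power is fixed at P, and sends it over a channel that adds
# unit-variance noise. The fusion center sums the n received values.

def fusion_snr(n, P=10.0, m=1.0, sigma2=1.0):
    """Deflection SNR of the sum statistic T = sum_i (g*y_i + w_i).

    Under H1, T has mean n*g*m and variance n*(g^2*sigma2 + 1), so
    SNR = (n*g*m)^2 / (n*(g^2*sigma2 + 1)).
    """
    g2 = P / n  # per-node power shrinks as the node density grows
    return (n * g2 * m**2) / (g2 * sigma2 + 1.0)

if __name__ == "__main__":
    for n in (1, 2, 5, 10, 50, 100):
        print(n, round(fusion_snr(n), 3))
```

With the default values the SNR equals P*n/(P + n), which increases monotonically in n and saturates at P, mirroring the paper's qualitative conclusion for the deterministic case; the stochastic case, where sample correlation makes a finite density optimal, requires the quadratic detector and large-deviation machinery of the paper itself.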
author list (cited authors)

Chamberland, J., & Veeravalli, V. V.