Active Information Fusion

 

 

Efficient Computation of Mutual Information


Active information fusion selectively chooses sensors so that the information gained compensates for the cost of gathering it. There are several ways to measure information gain; one popular method is the mutual information between the hypothesis and the sensors. However, when there are many sensors, computing the mutual information exactly generally requires a number of summations that grows exponentially with the number of sensors. Our solution is to approximate the mutual information by exploiting sensor synergy in information gain.
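To make the exponential cost concrete, here is a minimal sketch (the toy joint distribution and function names are our own, not from the paper) that computes I(H; S1, ..., Sn) by brute force over every configuration of the hypothesis and the sensors:

```python
import itertools
import math

def exact_mutual_information(joint):
    """Exact I(H; S1..Sn) from a full joint table p(h, s1, ..., sn).

    `joint` maps (h, s1, ..., sn) tuples to probabilities. The sums
    below visit every configuration, so the number of terms grows
    exponentially with the number of sensors.
    """
    p_h, p_s = {}, {}
    for (h, *s), p in joint.items():
        p_h[h] = p_h.get(h, 0.0) + p
        p_s[tuple(s)] = p_s.get(tuple(s), 0.0) + p
    return sum(p * math.log2(p / (p_h[h] * p_s[tuple(s)]))
               for (h, *s), p in joint.items() if p > 0)

# Toy joint: a uniform binary hypothesis observed by two noisy
# binary sensors (error rates 0.1 and 0.2, independent given H).
joint = {}
for h, s1, s2 in itertools.product([0, 1], repeat=3):
    joint[(h, s1, s2)] = (0.5
                          * (0.9 if s1 == h else 0.1)
                          * (0.8 if s2 == h else 0.2))

mi = exact_mutual_information(joint)  # about 0.64 bits
```

With n binary sensors the table has 2^(n+1) entries, which is exactly the blow-up the approximation below avoids.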

 

In this approach, a Markov synergy chain between sensors is defined (please refer to the paper for details); it represents an ideal synergy relation between sensors. Given a Markov synergy chain over a set of sensors, its mutual information can be computed as a sum of mutual-information terms over pairwise sensors and singleton sensors. Moreover, the mutual information of the sensor set is upper-bounded by the mutual information of its corresponding Markov synergy chains. Although these chains are not unique, experiments show that the minimum mutual information among all the Markov synergy chains (called the least upper bound) is very close to the true mutual information of the sensor set. Thus, the true mutual information can be approximated by the least upper bound.
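The decomposition and the least-upper-bound approximation can be sketched as follows. This is a toy illustration under our own assumptions, not the paper's code: we take the chain mutual information for one sensor ordering to be the sum of the pairwise terms I(H; S_prev, S_cur) along the chain minus the intermediate singleton terms I(H; S_cur), and the least upper bound to be the minimum over all orderings:

```python
import itertools
import math

def marginal_mi(joint, idxs):
    """I(H; S_idxs) between the hypothesis and a sensor subset,
    obtained by marginalizing the full joint p(h, s1, ..., sn)."""
    p_hs, p_h, p_s = {}, {}, {}
    for (h, *s), p in joint.items():
        k = tuple(s[i] for i in idxs)
        p_hs[(h, k)] = p_hs.get((h, k), 0.0) + p
        p_h[h] = p_h.get(h, 0.0) + p
        p_s[k] = p_s.get(k, 0.0) + p
    return sum(p * math.log2(p / (p_h[h] * p_s[k]))
               for (h, k), p in p_hs.items() if p > 0)

def chain_mi(joint, order):
    """MI of the synergy chain for one sensor ordering: pairwise
    terms along the chain minus the intermediate singleton terms."""
    pairs = sum(marginal_mi(joint, [a, b])
                for a, b in zip(order, order[1:]))
    singles = sum(marginal_mi(joint, [i]) for i in order[1:-1])
    return pairs - singles

def least_upper_bound(joint, n):
    """Minimum chain MI over all sensor orderings."""
    return min(chain_mi(joint, list(o))
               for o in itertools.permutations(range(n)))

# Three noisy binary sensors (error rates 0.1, 0.2, 0.3) observing
# a uniform binary hypothesis, independently given H.
errs = [0.1, 0.2, 0.3]
joint = {}
for h, *s in itertools.product([0, 1], repeat=4):
    p = 0.5
    for si, e in zip(s, errs):
        p *= (1 - e) if si == h else e
    joint[(h, *s)] = p

exact = marginal_mi(joint, [0, 1, 2])  # true I(H; S1, S2, S3)
lub = least_upper_bound(joint, 3)      # its least upper bound
```

In this toy case the least upper bound sits within about 0.01 bit of the exact value, while each chain term needs only pairwise and singleton marginals rather than the full joint over all sensors.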

 

More …

 

Presentation Slides

 
