Lei Zhang








A Bayesian Network Model for Automatic and Interactive Image Segmentation

A Bayesian Network (BN) is a powerful framework for probabilistic modeling. Although BNs have been applied to image segmentation before, they have not been used to capture the hierarchical causal relationships among multiple image entities. In this research, we propose a multi-layer Bayesian Network model that captures the causalities among image regions, edge segments, vertices, and their respective image measurements. In addition, we enforce the simple connectivity constraint and the smooth connection constraint among edges intersecting on the object boundary. The model can also be easily extended to encode a global shape prior. This BN-based framework has been successfully applied to automatic image segmentation.
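To make the layered causal structure concrete, here is a minimal sketch of one region-edge-measurement fragment of such a model. The conditional probability tables are purely illustrative, not the paper's learned parameters:

```python
# Minimal two-layer fragment (illustrative CPTs, not the paper's parameters):
# a region label R causes an edge label E, which causes a binary edge
# measurement M, i.e. the causal chain R -> E -> M.
P_R = {1: 0.5, 0: 0.5}                  # prior on the region being foreground
P_E_given_R = {1: {1: 0.9, 0: 0.1},     # P(E=e | R=r): a true region boundary
               0: {1: 0.2, 0: 0.8}}     # makes an edge much more likely
P_M_given_E = {1: {1: 0.8, 0: 0.2},     # P(M=m | E=e): noisy measurement model
               0: {1: 0.3, 0: 0.7}}

def joint(r, e, m):
    """Joint probability P(R=r, E=e, M=m) under the causal factorization."""
    return P_R[r] * P_E_given_R[r][e] * P_M_given_E[e][m]

def posterior_R_given_M(m):
    """P(R | M=m) by brute-force enumeration over the hidden edge node."""
    scores = {r: sum(joint(r, e, m) for e in (0, 1)) for r in (0, 1)}
    z = sum(scores.values())
    return {r: s / z for r, s in scores.items()}

post = posterior_R_given_M(1)  # observing a strong edge measurement
```

Observing a strong edge measurement raises the belief that the region is foreground, which is the basic inference pattern the full multi-layer model performs over many regions, edges, and vertices at once.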

More importantly, we found that the BN segmentation model is especially useful for interactive image segmentation, which puts a human labeler in the loop to improve the segmentation result. The human labeler can provide multiple types of intervention through the BN model at any stage of the segmentation process, in a sparse and incremental manner. The model treats each intervention as new evidence and propagates its impact throughout the network to update the beliefs of the other hidden nodes. The segmentation is finally obtained by finding the Most Probable Explanation (MPE) in the model.
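The evidence-then-MPE loop can be sketched on a toy three-node model (hypothetical CPT values). A human intervention is simply added to the evidence set, and the MPE is recomputed over the remaining hidden nodes:

```python
import itertools

# Toy illustration (hypothetical CPTs) of treating a human intervention as
# hard evidence and recomputing the Most Probable Explanation (MPE):
# region R -> edge E -> measurement M.
P_R = {1: 0.4, 0: 0.6}
P_E_given_R = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.2, 0: 0.8}}
P_M_given_E = {1: {1: 0.7, 0: 0.3}, 0: {1: 0.4, 0: 0.6}}

def joint(assign):
    r, e, m = assign["R"], assign["E"], assign["M"]
    return P_R[r] * P_E_given_R[r][e] * P_M_given_E[e][m]

def mpe(evidence):
    """Brute-force MPE: argmax of the joint over all hidden-node
    assignments consistent with the observed/intervened evidence."""
    hidden = [v for v in ("R", "E", "M") if v not in evidence]
    best, best_p = None, -1.0
    for values in itertools.product((0, 1), repeat=len(hidden)):
        assign = dict(evidence, **dict(zip(hidden, values)))
        p = joint(assign)
        if p > best_p:
            best, best_p = assign, p
    return best

before = mpe({"M": 0})         # automatic stage: only the image measurement
after = mpe({"M": 0, "E": 1})  # labeler asserts this edge is on the boundary
```

Here the weak measurement initially yields a background MPE, and clamping the edge node as boundary evidence flips the region label in the updated MPE. The full model uses exact or approximate BN inference rather than enumeration, but the update mechanism is the same.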

Unlike existing interactive segmentation approaches, which passively rely on the human labeler to select the intervention (we call this passively interactive segmentation), we developed an active image labeling strategy based on mutual information to enhance interactive segmentation. Following the idea of active learning with information theory, the computer computes a ranked list of candidate interventions that could improve the segmentation, and the human labeler can then choose the top candidate in this list as the next intervention. We call this actively interactive segmentation; empirically, it outperformed passively interactive segmentation.
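A hedged sketch of the ranking idea: score each candidate evidence node by its mutual information with the target label and suggest the most informative one. The joint belief tables below are made-up placeholders for what the BN's current beliefs would supply:

```python
import math

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint distribution table {(x, y): p}."""
    p_x, p_y = {}, {}
    for (x, y), p in p_xy.items():
        p_x[x] = p_x.get(x, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in p_xy.items():
        if p > 0:
            mi += p * math.log2(p / (p_x[x] * p_y[y]))
    return mi

# Illustrative joint beliefs P(target, candidate) for two candidate
# interventions: "e1" is strongly coupled to the target label, while
# "e2" is nearly independent of it and thus nearly uninformative.
candidates = {
    "e1": {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45},
    "e2": {(0, 0): 0.26, (0, 1): 0.24, (1, 0): 0.24, (1, 1): 0.26},
}

ranked = sorted(candidates,
                key=lambda c: mutual_information(candidates[c]),
                reverse=True)  # most informative candidate first
```

Querying the labeler about the top-ranked node resolves the most uncertainty per intervention, which is why the active variant needs fewer interventions than passive selection.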


Example results:

Below are the Bayesian Network model for image segmentation and some segmentation results. Figure 3 compares actively interactive and passively interactive segmentation. Our experimental results showed that actively interactive segmentation improved the segmentation accuracy more efficiently and effectively, requiring less effort from the human labeler during the interaction.


Figure 1. The multi-layer Bayesian Network model for image segmentation


Figure 2. Some image segmentation results using the proposed BN model for segmenting the Weizmann horse images


Figure 3. Comparison of actively interactive and passively interactive segmentation on two sets of experiments: 1) the Weizmann horse images (top) and 2) the VOC2006 cow images (bottom).


Video Demo:

Here is a video demo of interactive image segmentation using the BN model. (Note: if the video does not play in your browser, please try Windows Internet Explorer 6.0 or a later version.)


Related publications:

• Lei Zhang and Qiang Ji, "A Bayesian Network Model for Automatic and Interactive Image Segmentation", IEEE Transactions on Image Processing (TIP), Vol. 20, No. 9, pp. 2582-2593, September 2011.

• Lei Zhang and Qiang Ji, "Integration of Multiple Contextual Information for Image Segmentation using a Bayesian Network", 3rd IEEE International Workshop on Semantic Learning Applications in Multimedia, in conjunction with CVPR 2008 (oral presentation).