Lei Zhang

Active Image Labeling and its Applications in Action Unit Labeling

In this research, we aimed to reduce the manual effort required of human labelers for ground-truth annotation and to improve the efficiency and throughput of the labeling process. We borrowed the idea of active learning and, based on information theory, developed a strategy that suggests the most informative label for the labeler to provide next, so that the labeling process becomes more efficient. We applied this idea to the labeling of facial Action Units (AUs) (see Figure 1).

Specifically, we designed a Bayesian Network model, shown in Figure 2, to systematically integrate input from the human labeler and perform semi-automatic AU labeling. The computer computes a ranked list of candidate labels according to their mutual information with the rest of the network. Once the top-ranked candidate is labeled by the human, belief propagation through the Bayesian Network may simultaneously correct other labels, which improves labeling efficiency and reduces the labeler's effort. Compared to traditional manual labeling, the human labeler does not have to fix each incorrect label one by one; instead, several labels can be corrected in one shot. The active labeling concept introduced here should also be applicable to many other ground-truth labeling tasks. A simplified sketch of the query-selection step is given below.
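The following is a minimal sketch of the mutual-information-based query selection described above, not the paper's actual implementation. It assumes the current posterior over the still-unlabeled AUs is available as an explicit joint table (in the real system this posterior would come from belief propagation in the Bayesian Network of Figure 2, conditioned on the image measurements and the labels already provided). The names entropy, select_query_au, and the toy joint distribution are illustrative only.

import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution given as an array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_query_au(aus, joint_posterior):
    """Rank each unlabeled AU by its mutual information with the remaining AUs.

    joint_posterior: array of shape (2,) * len(aus), the current posterior
    P(AU_1, ..., AU_n) given the image measurements and any labels the
    human has already provided.
    """
    n = len(aus)
    h_all = entropy(joint_posterior.ravel())
    scores = {}
    for i, au in enumerate(aus):
        # Marginal of AU_i and marginal of all the other AUs.
        p_i = joint_posterior.sum(axis=tuple(k for k in range(n) if k != i))
        p_rest = joint_posterior.sum(axis=i)
        # I(AU_i ; other AUs) = H(AU_i) + H(others) - H(all)
        scores[au] = entropy(p_i) + entropy(p_rest.ravel()) - h_all
    best = max(scores, key=scores.get)
    return best, scores

# Toy usage: three binary AUs with an arbitrary (correlated) joint posterior.
aus = ["AU1", "AU2", "AU4"]
rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2))
joint /= joint.sum()

query_au, scores = select_query_au(aus, joint)
print("Ask the labeler about:", query_au)
print("MI scores:", scores)

After the labeler answers the query, the corresponding node would be set as evidence and the posterior recomputed, which is what allows a single human input to correct several other labels at once.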


Figure 1. Examples of some commonly used facial Action Units (AUs) and their interpretations

 


Figure 2. The Bayesian Network model for active interactive labeling of facial Action Units (AUs)

 

Example results:

We performed comparative experiments on two datasets: 1) the Cohn-Kanade DFAT-504 database and 2) a mixed spontaneous facial Action Unit dataset (see our ECCV paper for details). Below is a comparison between a traditional AU labeling method and our active interactive labeling method. The active labeling method improves AU labeling accuracy more efficiently and effectively.


Figure 3. Average AU labeling performance on the Cohn-Kanade database using automatic AU recognition with only image measurements (i.e., BN model), interactive labeling with image measurements and arbitrary human input (i.e., Arbitrary input), and interactive labeling with image measurements and active user input (i.e., Active input), respectively.

  

Related publications:

§   Lei Zhang, Yan Tong, and Qiang Ji, "Active Image Labeling and Its Application to Facial Action Labeling," in Proceedings of the 10th European Conference on Computer Vision (ECCV), Part II, LNCS 5303, pp. 706-719, 2008.

§   Lei Zhang, Yan Tong, and Qiang Ji, "Interactive Labeling of Facial Action Units," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR), 2008.