Affect-Sensitive and Cognition-Faithful Human-Computer Interaction


Our research in human-computer interaction focuses on two aspects: visual behavior understanding and the interpretation of human intention and needs. For the former, we are developing computer vision algorithms that recognize a person's gaze (eye tracking), head movements (head pose), and facial expressions. We are also developing a prototype emotional mouse that acquires physiological and behavioral data, including body temperature, heart rate, galvanic skin response (GSR), and mouse-click pressure and frequency. For the latter, we are developing a dynamic probabilistic model based on Bayesian networks to infer a person's cognitive states (e.g., fatigue), needs, and intention. The computer vision research provides input to the intention-understanding component, which, given the visual inputs and the available contextual information, performs reasoning under uncertainty to infer a person's needs and intention and to formulate the most appropriate action to meet those needs. Specifically, our research in user state modeling and inference focuses on the following aspects: 1) explicitly modeling the uncertainties and dynamics associated with the user state and the sensory observations; 2) developing an affect-sensitive and cognition-faithful user model by integrating the user's affective state with cognitive modeling using ACT-R/PM; 3) developing mechanisms for active user state inference that determine the user's state in a timely and efficient manner; 4) developing information-theoretic criteria to identify the best assistance to offer and the optimal time to apply it.
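As an illustration of the dynamic probabilistic modeling described above, the following sketch performs recursive Bayesian filtering of a binary user state (alert vs. fatigued) from a single discretized visual cue. The state names, probability values, and cue discretization are hypothetical placeholders for exposition, not parameters of our actual user model.

```python
import numpy as np

# Minimal sketch of forward filtering in a two-slice dynamic Bayesian
# network. All probabilities below are illustrative assumptions.

STATES = ["alert", "fatigued"]

# P(state_t | state_{t-1}): fatigue tends to persist once it sets in.
TRANSITION = np.array([[0.9, 0.1],
                       [0.2, 0.8]])

# P(observation | state) for a discretized visual cue, e.g. eyelid
# closure rate binned as {low, medium, high}.
EMISSION = np.array([[0.7, 0.2, 0.1],    # alert
                     [0.1, 0.3, 0.6]])   # fatigued

def filter_step(belief, obs):
    """One recursive Bayesian update: predict, then correct."""
    predicted = TRANSITION.T @ belief          # time update
    posterior = predicted * EMISSION[:, obs]   # measurement update
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])  # uninformative prior
for obs in [0, 1, 2, 2]:       # cue sequence drifting toward "high"
    belief = filter_step(belief, obs)
print(dict(zip(STATES, belief.round(3))))
```

As the observed eyelid-closure bin drifts toward "high", the posterior mass shifts toward the fatigued state, which is the behavior the recursive predict/correct cycle is meant to capture.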

Specifically, we are focusing on the following three issues:


1) develop computer vision techniques to automatically characterize a person's non-verbal behaviors (e.g., eyelid movement, facial expressions, head movement, gaze) and affective state (e.g., fatigue, confusion, happiness, sadness), and develop the emotional mouse to acquire physiological and behavioral data.

2) actively infer the person's intention and needs based on a probabilistic user model and the extracted visual cues.

3) based on the inferred user needs, provide the most appropriate assistance to the user in a timely and efficient manner.
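One way to make the third step concrete is an information-theoretic criterion of the kind mentioned earlier: among candidate actions, prefer the one whose outcome is expected to reduce uncertainty about the user's state the most (i.e., maximize the mutual information between state and outcome under the current belief). The candidate actions and their outcome likelihoods below are hypothetical examples, not part of the actual system.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Hypothetical candidate actions, each with P(outcome | user state).
# Rows: user states (alert, fatigued); columns: observed outcomes.
ACTIONS = {
    "no_assist":  np.array([[0.5, 0.5], [0.5, 0.5]]),  # uninformative
    "alert_tone": np.array([[0.9, 0.1], [0.3, 0.7]]),  # response probes state
}

def expected_info_gain(belief, likelihood):
    """Mutual information I(state; outcome) under the current belief."""
    joint = belief[:, None] * likelihood      # P(state, outcome)
    p_outcome = joint.sum(axis=0)
    gain = entropy(belief)
    for k, pk in enumerate(p_outcome):
        if pk > 0:
            gain -= pk * entropy(joint[:, k] / pk)  # expected posterior entropy
    return gain

belief = np.array([0.6, 0.4])  # current belief over (alert, fatigued)
best = max(ACTIONS, key=lambda a: expected_info_gain(belief, ACTIONS[a]))
print(best)
```

Here the uninformative action yields zero expected information gain, so the probing action is selected; a fuller treatment would also weigh the cost and intrusiveness of each action against its informativeness.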



Here is a summary of our latest effort in developing an intelligent user interface that dynamically models the user's cognitive processes to infer the user's cognitive state and to provide assistance in a timely and efficient manner. We have produced preliminary results in both areas.