Our work in this area focuses on three aspects: real-time, non-intrusive human state sensing; human state modeling and prediction; and human assistance. In the area of human state sensing, we developed techniques to extract visual and physiological measurements that characterize human state from different perspectives. For visual sensing, we developed computer vision algorithms that measure, in real time and non-intrusively, visual behaviors that typically reflect human state, including facial expression, eye movement, head movement, and upper-body movement. For physiological and behavioral sensing, we developed an emotional computer mouse that acquires physiological measurements, including heart rate, galvanic skin response (GSR), and body temperature, as well as behavioral data such as mouse-click pressure and frequency, while the user operates the mouse naturally.
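To make the visual-sensing idea concrete, the following is a minimal, illustrative sketch of real-time, non-intrusive face and eye localization from a webcam. It uses off-the-shelf OpenCV Haar cascades purely as a stand-in; the cascade models, parameters, and the PERCLOS-style comment are assumptions for illustration and do not represent our actual tracking algorithms.

    # Minimal sketch of non-intrusive visual sensing with OpenCV Haar
    # cascades. Illustrative only: the real system uses more sophisticated
    # detection and tracking algorithms than shown here.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            roi = gray[y:y + h, x:x + w]
            # Eye detections within the face region could feed blink or
            # PERCLOS-style fatigue measures (eye closure over time).
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                cv2.rectangle(frame, (x + ex, y + ey),
                              (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
        cv2.imshow("visual sensing sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()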

For human state modeling and prediction, we developed a probabilistic framework based on Dynamic Bayesian Networks (DBNs) to systematically integrate the various sensory measurements with related contextual and environmental information, producing a comprehensive characterization and robust estimation of human state. For human assistance, we introduced a decision-theoretic model that performs human state estimation and assistance simultaneously. Based on a utility function that represents the trade-off between human state and assistance, the model determines the optimal assistance to provide in order to maintain the user's productivity and performance.
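The sketch below illustrates the two ideas in miniature: a DBN-style filtering step that fuses multiple sensor readings into a belief over a hidden human state, followed by selection of the assistance action with the highest expected utility under that belief. All states, transition probabilities, sensor likelihoods, actions, and utility values here are hypothetical placeholders chosen for demonstration; our actual networks and utility models are far richer.

    # Illustrative DBN-style filtering plus decision-theoretic assistance
    # selection. Every number in this example is made up for demonstration.
    import numpy as np

    states = ["alert", "fatigued"]          # hidden human state
    T = np.array([[0.9, 0.1],               # P(s_t | s_{t-1}): alert row
                  [0.2, 0.8]])              # fatigued row

    # Per-sensor observation likelihoods P(obs | state); sensors are assumed
    # conditionally independent given the state, so likelihoods multiply.
    lik_eye_closure = {"low":  np.array([0.8, 0.3]),
                       "high": np.array([0.2, 0.7])}
    lik_heart_rate  = {"normal":  np.array([0.7, 0.4]),
                       "reduced": np.array([0.3, 0.6])}

    def filter_step(belief, eye_obs, hr_obs):
        """One DBN time slice: predict with T, then weight by evidence."""
        predicted = belief @ T
        weighted = predicted * lik_eye_closure[eye_obs] * lik_heart_rate[hr_obs]
        return weighted / weighted.sum()

    # Utility of each assistance action given the true state (hypothetical
    # numbers trading off productivity against intervention cost).
    actions = ["none", "alert_user", "suggest_break"]
    U = np.array([[ 1.0, -0.2, -0.5],       # utilities when alert
                  [-1.0,  0.3,  0.8]])      # utilities when fatigued

    belief = np.array([0.95, 0.05])         # start mostly confident in "alert"
    observations = [("low", "normal"), ("high", "normal"), ("high", "reduced")]
    for eye_obs, hr_obs in observations:
        belief = filter_step(belief, eye_obs, hr_obs)
        expected_utility = belief @ U       # expected utility of each action
        best = actions[int(np.argmax(expected_utility))]
        print(f"P(fatigued)={belief[1]:.2f} -> action: {best}")

As evidence of fatigue accumulates across time slices, the belief shifts toward "fatigued" and the maximum-expected-utility action moves from no intervention to progressively stronger assistance, which is the trade-off the utility function is meant to capture.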

Besides developing the core technologies for human state modeling and prediction, we built a prototype system that is equipped with multi-modal sensors and can recognize different human states, including fatigue, stress, and workload. With funding from different sources, our work on human state sensing and prediction has been applied to several areas, including human fatigue prediction (AFOSR, ARO), driver behavior recognition (FHWA), stress, anxiety, and fatigue estimation (ARO), operator state assessment (NASA), operator functional state monitoring and assistance (AFRL), lie detection (ONR), and human affect recognition (DARPA). Details on our work and projects may be found in the links below.

  • Automatic Human Behavior Tracking and Analysis

  • Real Time Analysis and Fusion of Data from Images for Passive Characterization of Stress, Anxiety, Uncertainty and Fatigue (Sponsor: Litech via ARO)

  • Real-Time Non-Invasive Human Fatigue Monitoring (Sponsors: Honda, AFOSR, and DARPA)

  • A Prototype Emotional Mouse for Non-intrusive Acquisition of Physiological and Behavioral Data

  • User Emotion Recognition and Assistance (Sponsor: DARPA)

  • Automatic Facial Action Recognition

  • Vision-Based Human-Computer Interaction:
    Eye detection, gaze tracking, facial feature tracking, 3D face pose tracking, facial expression analysis, and face tracking

  • Operator functional state and performance modeling and recognition from psychophysiological measurements (Sponsor: NASA via IAI)

  • Real-Time Soldier State and Performance Monitoring and Prediction via Multi-Modal Physiological and Behavioral Signals (Sponsor: Air Force Research Laboratory via IAI)

  • Cross-subject workload classification using EEG signals (Sponsor: Air Force Research Laboratory)

  • Related Publications

  • Video Demos