The Intelligent Systems Laboratory (ISL) at RPI conducts theoretical research in computer vision and in machine learning with probabilistic graphical models, and applies this research to a range of fields. In computer vision, our current work focuses on developing advanced techniques for nonverbal human behavior analysis and recognition, including face detection, recognition, and tracking; facial landmark detection; facial expression recognition; face pose estimation; eye tracking; body detection and tracking; and body gesture and human action recognition. Theoretically, our research focuses on formulating computer vision problems as structured prediction problems using graphical models, and on combining model-based vision with data-driven deep learning. Past research in computer vision includes image segmentation, camera calibration, pose estimation, object tracking, 3D reconstruction, and feature detection.

In machine learning, our research focuses on learning and inference with probabilistic graphical models (PGMs). Specifically, we develop robust, efficient, and scalable methods for both local and global PGM structure learning. Our current research focuses on learning graphical models by combining quantitative and qualitative data, learning a unified probabilistic model consisting of both directed and undirected graphs, and efficient learning for deep probabilistic graphical models. For inference, we focus on developing efficient exact and approximate inference methods for large models, as well as active inference.

Finally, in applications, we have applied these computer vision and machine learning techniques in various domains, including natural human-computer interaction, human state monitoring and prediction, companion/personal robots, driver behavior estimation and prediction, security and surveillance, and information fusion for situation awareness and decision making.
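To give a concrete flavor of score-based PGM structure learning, the toy sketch below compares two candidate structures over a pair of binary variables, using a BIC score (log-likelihood penalized by model complexity) to decide whether to keep the edge X → Y. The data, variable names, and two-variable setting are illustrative assumptions for this page, not a model or dataset from the lab's work.

```python
import math
import random

# Synthetic data in which Y depends strongly on X; a good score
# should therefore prefer the structure "X -> Y" over independence.
random.seed(0)
data = []
for _ in range(1000):
    x = random.random() < 0.5
    y = random.random() < (0.9 if x else 0.1)
    data.append((int(x), int(y)))
n = len(data)

def loglik_indep(data):
    """Log-likelihood under the model where X and Y are independent."""
    px = sum(x for x, _ in data) / n
    py = sum(y for _, y in data) / n
    ll = 0.0
    for x, y in data:
        ll += math.log(px if x else 1 - px)
        ll += math.log(py if y else 1 - py)
    return ll

def loglik_edge(data):
    """Log-likelihood under the model X -> Y (Y's CPT conditions on X)."""
    ll = 0.0
    px = sum(x for x, _ in data) / n
    for x, _ in data:
        ll += math.log(px if x else 1 - px)
    for xv in (0, 1):
        rows = [y for x, y in data if x == xv]
        py = sum(rows) / len(rows)
        for y in rows:
            ll += math.log(py if y else 1 - py)
    return ll

def bic(ll, k):
    """BIC score: log-likelihood minus a penalty on the k free parameters."""
    return ll - 0.5 * k * math.log(n)

# The independence model has 2 free parameters; X -> Y has 3.
score_indep = bic(loglik_indep(data), 2)
score_edge = bic(loglik_edge(data), 3)
best = "X -> Y" if score_edge > score_indep else "independent"
print(best)
```

Global structure learning can be viewed as searching over many such candidate graphs with a score of this kind; the local and hybrid methods mentioned above restrict or guide that search.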
From a systems perspective, we concentrate on two aspects of an intelligent system: sensing (perception) and understanding. For sensing, we develop computer vision algorithms that compute various visual cues (e.g., motion, shape, pose, position, and identity), which typically characterize the state of the objects in a scene. Given these visual observations, we then develop graphical models that capture the relationships between the sensory observations and the high-level situation that produces them, as well as the related contextual information. Finally, high-level visual understanding and interpretation are performed through probabilistic inference over the graphical model, given the available sensory observations.
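The sensing-then-understanding pipeline described above can be sketched with a toy two-cue Bayesian network: two hypothetical vision modules report binary visual cues, and exact inference over a small graphical model yields the posterior over a hidden high-level state. The "driver fatigue" scenario, cue names, and all probabilities here are illustrative assumptions, not the lab's actual models.

```python
# Prior over the hidden high-level state F (e.g., driver fatigue).
p_f = 0.2

# Sensor models: P(cue = 1 | F) for two visual cues that would be
# produced by hypothetical vision modules (eye closure, head nod).
p_eye_given_f = {True: 0.8, False: 0.1}
p_nod_given_f = {True: 0.6, False: 0.05}

def posterior_fatigue(eye: bool, nod: bool) -> float:
    """P(F = true | eye, nod) by exact enumeration over F."""
    def lik(f: bool) -> float:
        pe = p_eye_given_f[f] if eye else 1 - p_eye_given_f[f]
        pn = p_nod_given_f[f] if nod else 1 - p_nod_given_f[f]
        return pe * pn
    joint_true = p_f * lik(True)
    joint_false = (1 - p_f) * lik(False)
    return joint_true / (joint_true + joint_false)

# With both cues observed, the posterior rises well above the 0.2 prior.
print(round(posterior_fatigue(True, True), 3))  # → 0.96
```

Real systems in this setting involve many more variables, temporal dynamics, and approximate rather than exact inference, but the structure is the same: vision algorithms supply the observations, and the graphical model ties them to the high-level interpretation.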