Dismount Detection, Tracking, and Activity Understanding


Homeland security and military applications such as urban area surveillance (UAS) require multiple sensors to detect, locate, and identify dismounts and to infer adversary intent. UAS is a coupled two-level problem: dismount detection and tracking (lower level) and dismount intent inference (higher level).  For dismount detection and tracking, we have developed a new context-based technique for robust detection and tracking of multiple targets (including vehicles and people) at a distance.  The tracker can follow objects under partial occlusion, against complex cluttered backgrounds, and under variable illumination conditions (infrared and visible light).
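Our context-based tracker itself is not reproduced here, but the frame-to-frame tracking step it builds on can be illustrated with a generic constant-velocity Kalman filter, a standard technique for maintaining a track through missed detections such as partial occlusion. The state layout, noise values, and simulated detections below are illustrative assumptions only.

```python
import numpy as np

# Generic constant-velocity Kalman filter for frame-to-frame target tracking.
# State: [x, y, vx, vy]; measurement: detected centroid [x, y].
class KalmanTracker:
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])       # state estimate
        self.P = np.eye(4) * 10.0                   # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)    # constant-velocity motion
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)    # observe position only
        self.Q = np.eye(4) * 0.01                   # process noise
        self.R = np.eye(2) * 1.0                    # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # z is None when the detector misses a frame (e.g., partial
        # occlusion): coast on the motion model alone.
        if z is None:
            return self.x[:2]
        y = np.asarray(z, float) - self.H @ self.x  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

For example, feeding the tracker detections of a target moving one pixel per frame, with one detection dropped, still yields a position estimate near the true location when the detection resumes.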


For dismount activity understanding, we propose to use Mixed Probabilistic Networks (MPNs).  An MPN is a mixed graph consisting of both directed and undirected links.  MPNs therefore extend both the directed graphical models, such as Bayesian Networks and Hidden Markov Models, and the undirected graphical models, such as Markov Random Fields.   Like other probabilistic graphical models, an MPN represents a joint probability distribution over a set of variables. In an MPN, nodes denote random variables, the links among nodes represent the spatial and temporal dependencies among the variables, and the model satisfies the local Markov condition.
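A minimal toy sketch (with made-up potentials, not our actual activity model) shows how such a mixed graph defines a joint distribution. Take binary variables with one directed link A → B and one undirected link B — C; under the standard chain-graph factorization, the directed part contributes a conditional factor and the undirected part a pairwise potential, normalized per value of the parent:

```python
import numpy as np

# Toy mixed graph over binary variables: directed A -> B, undirected B -- C.
# Chain components {A} and {B, C} give the factorization
#   p(A, B, C) = p(A) * p(B, C | A),
# where p(B, C | A) combines the directed factor phi(A, B) with the
# undirected pairwise potential psi(B, C), normalized per value of A.
pA = np.array([0.6, 0.4])              # prior p(A) (illustrative numbers)
phi_AB = np.array([[0.9, 0.1],         # phi(A=a, B=b): directed influence
                   [0.3, 0.7]])
psi_BC = np.array([[5.0, 1.0],         # psi(B=b, C=c): undirected affinity
                   [1.0, 5.0]])

# Unnormalized p(B, C | A) and its per-A normalizer Z(A).
unnorm = phi_AB[:, :, None] * psi_BC[None, :, :]   # shape (A, B, C)
Z = unnorm.sum(axis=(1, 2), keepdims=True)         # Z(A)
p_BC_given_A = unnorm / Z

# Full joint p(A, B, C): sums to 1 and recovers the prior on A.
joint = pA[:, None, None] * p_BC_given_A
```

Summing the joint over B and C recovers p(A), confirming that the mixed factorization yields a valid distribution.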


We propose a hierarchical framework based on MPNs to model activities, the related actions, and the image observations, as well as the relationships among them.  Besides capturing both causal and non-causal relationships, the framework allows events from different activities to be shared and reused. In addition, we propose to develop advanced machine learning methods to automatically learn both the model structure and its parameters.  Together, these capabilities set our approach significantly apart from current approaches to activity modeling and recognition.
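As a stand-in for the proposed parameter learning (the actual learning methods and activity data are not shown here), the simplest case is maximum-likelihood estimation of a conditional probability table from complete data by frequency counting, with a small Laplace prior to avoid zero counts:

```python
import numpy as np

# Maximum-likelihood estimation of a conditional probability table p(B | A)
# from complete data by counting. alpha is a Laplace smoothing pseudo-count.
# This is an illustrative sketch, not the proposed learning algorithm.
def learn_cpt(samples, n_a=2, n_b=2, alpha=1.0):
    """samples: iterable of (a, b) integer pairs with a < n_a, b < n_b."""
    counts = np.full((n_a, n_b), alpha)
    for a, b in samples:
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)
```

Given samples in which B mostly copies A, the learned table concentrates mass on the diagonal, and each row sums to one by construction.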


Our research so far has produced software and algorithms that can recognize simple individual actions from low-resolution image sequences, including walking, running, standing, throwing, jumping, bending down, and getting into or out of cars.  Our next step is to implement the proposed MPN models to recognize more complex, higher-level activities.


Below are video demos of some of these algorithms:

1. Activity recognition results on CAVIAR dataset

    The basic activities include "walking", "resting", "meeting together", "leaving bag" and "browsing".

    (videos: walking, meeting, leaving bag)

    (videos: browsing, resting)


2. Activity recognition results on ISL-Activity dataset

The basic activities of the ISL-Activity dataset include "walking", "running", "get off car", "enter car", "bend", "look around", and "throw".  Below are the activity recognition results on two test videos with different activities.

    (videos: Demo 1, Demo 2)


    The following demo shows the different stages of the activity recognition process, including motion detection, object tracking, and recognition.






Summary of public activity/action datasets