Pose estimation, tracking, and action recognition of articulated objects from depth images are important and challenging problems that are normally considered separately. In this paper, we propose a unified paradigm based on Lie group theory that enables us to address these related problems collectively. Our approach also applies to a wide range of articulated objects: empirically, we evaluate it on lab animals, including mice and fish, as well as on human hands. On these applications it delivers competitive results against state-of-the-art methods and non-trivial baselines, including convolutional neural networks and regression forests.
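A core ingredient of Lie-group-based pose representations is modeling each skeletal joint rotation as an element of SO(3), parameterized in the Lie algebra so(3) and mapped to a rotation matrix via the exponential map (Rodrigues' formula). The sketch below is purely illustrative of that idea, not the paper's implementation; the function names and the serial-chain simplification are our own assumptions.

```python
import numpy as np

def hat(w):
    """Map a 3-vector (so(3) coordinates) to its skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def forward_kinematics(twists, bone_lengths):
    """Joint positions of a simple serial chain (illustrative simplification):
    each bone's orientation is the product of the exponential maps of its
    ancestors' axis-angle parameters; bones extend along the local x-axis."""
    R = np.eye(3)
    p = np.zeros(3)
    joints = [p.copy()]
    for w, length in zip(twists, bone_lengths):
        R = R @ exp_so3(np.asarray(w, dtype=float))
        p = p + R @ np.array([length, 0.0, 0.0])
        joints.append(p.copy())
    return np.array(joints)
```

Optimizing pose in the Lie algebra rather than directly over rotation matrices avoids explicit orthogonality constraints, which is one practical reason such parameterizations are attractive for articulated tracking.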
The executable of Lie-X for mouse can be downloaded here. Note that this is an alpha version that will be updated in the near future.
Chi Xu, Lakshmi Narasimhan Govindarajan, Yu Zhang, Li Cheng. Lie-X: Depth Image Based Articulated Object Pose Estimation, Tracking, and Action Recognition on Lie Groups. In International Journal of Computer Vision (IJCV), in press, 2016. [pdf]