Dr. XU Chi

Senior Postdoctoral Research Fellow,
Machine Learning for Bioimage Analysis Group,
Bioinformatics Institute, A*STAR, Singapore.

DID: (65) 6478 8356
Email: xuchi@bii.a-star.edu.sg


  • Real Time Hand Pose Estimation Using Depth Camera

We tackle the practical problem of hand pose estimation from a single noisy depth image. A dedicated three-step pipeline is proposed. We analyze the depth noise and suggest practical measures to minimize its negative impact on overall performance. Our approach works with Kinect-type noisy depth images and efficiently produces reliable pose estimates of hand motions.

[more details]
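Kinect-type depth maps contain zero-valued "holes" where the sensor returned no reading. As a generic illustration of the kind of noise handling such a pipeline needs (this is a common preprocessing sketch, not the paper's specific method; all names are made up), each hole can be filled with the median of its valid neighbours:

```python
import numpy as np

def fill_depth_holes(depth, win=1):
    """Replace zero-depth pixels with the median of the valid pixels in a
    (2*win+1) x (2*win+1) neighbourhood; pixels with no valid neighbours
    are left at zero."""
    out = depth.astype(float).copy()
    for r, c in zip(*np.nonzero(depth == 0)):
        patch = depth[max(r - win, 0):r + win + 1,
                      max(c - win, 0):c + win + 1]
        valid = patch[patch > 0]
        if valid.size:
            out[r, c] = np.median(valid)
    return out
```

A real pipeline would combine this with temporal filtering and outlier rejection before running pose estimation.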

  • BioImage Computing Platform

We aim to build a real-time immersive prototype for 3D visualization and annotation of biological data, using a low-cost 3D sensor (Kinect) and a 3D monitor. In particular, 3D hand gestures will replace the functionality of the computer mouse, and the user will be immersed within the 3D biological data, able to see and interact with it.
[watch video]
  • Camera Pose Estimation from Points

The configuration of PnP is usually classified into the planar case and the non-planar case. My work published in PAMI shows that there also exists a "quasi-singular" case which leads to significant degeneration. A non-iterative solution is proposed that robustly retrieves the optimum by solving a seventh-order polynomial. First, it stably handles the planar case, the ordinary 3D case, and the quasi-singular case, and is as accurate as state-of-the-art iterative algorithms at much lower computational cost. Second, it is the first non-iterative PnP solution that achieves more accurate results than iterative algorithms when no redundant reference points are available. Third, large point sets can be handled efficiently because the method is O(n).  [more details]
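The paper's O(n) polynomial solver is not reproduced here, but the basic PnP problem setup can be sketched with a classical DLT (Direct Linear Transform) estimate: given n ≥ 6 3D-2D correspondences in normalized image coordinates, recover the rotation R and translation t (function and variable names below are illustrative, not from the paper):

```python
import numpy as np

def pnp_dlt(pts3d, pts2d):
    """Estimate camera pose [R | t] from 3D points and their
    normalized 2D projections via the Direct Linear Transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The null vector of A (smallest singular value) gives the 3x4 pose
    # matrix up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)
    R_raw, t_raw = P[:, :3], P[:, 3]
    # Project R_raw onto the rotation group and recover the scale factor.
    U, s, Vt2 = np.linalg.svd(R_raw)
    R = U @ Vt2
    scale = s.mean()
    if np.linalg.det(R) < 0:   # enforce det(R) = +1 (fixes the sign of P)
        R, scale = -R, -scale
    return R, t_raw / scale
```

The DLT ignores the rotation constraints during estimation and is noise-sensitive, which is exactly the gap that dedicated non-iterative solvers like the one above address.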

  • Camera Pose Estimation from Lines

In this paper we deal with the camera pose estimation problem from a set of 2D/3D line correspondences, also known as the PnL (Perspective-n-Line) problem. We carry out our study by comparing PnL with the well-studied PnP (Perspective-n-Point) problem, and our contributions are threefold: (1) We provide a complete 3D configuration analysis for P3L, which includes the well-known P3P problem as well as several existing analyses as special cases. (2) By exploring the similarity between PnL and PnP, we propose a new subset-based PnL approach as well as a series of linear-formulation-based PnL approaches inspired by their PnP counterparts. (3) The proposed linear-formulation-based methods can be easily extended to handle line and point features simultaneously.  [more details]
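For reference, the core geometric constraint behind PnL (in a standard textbook formulation; the paper's notation may differ) is that each 3D line must lie in the back-projection plane of its image line:

```latex
% A 3D line through point P with direction d, observed as an image line
% whose back-projection plane (through the camera centre) has unit normal n:
\mathbf{n}^{\top}\,\mathbf{R}\,\mathbf{d} = 0, \qquad
\mathbf{n}^{\top}\left(\mathbf{R}\,\mathbf{P} + \mathbf{t}\right) = 0
```

Each line correspondence thus contributes two equations in the pose unknowns (R, t), which is what the linear formulations above exploit.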

  • Augmented Reality (AR)

I worked as the primary system designer and developer on the Augmented Assembly project, in which AR technology guides the product assembly task by enhancing the user's perception. For products with a closed structure, the user can look directly into the inner structure through virtual parts overlaid on the real scene. Feature points are extracted from the input image, and matching is based on the well-known BRIEF descriptor.

Existing camera pose estimation methods for the widely used square-marker-based AR are either highly sensitive to noise or too time-consuming. An efficient lookup-table (LUT)-based solution is implemented, which achieves better stability than the most robust and accurate iterative solutions in this field.

  • Virtual Assembly System

I worked as the primary system designer and developer on the virtual assembly system project. The system contains five subsystems: CAD model conversion, assembly process planning, virtual assembly verification, assembly quality checking, and assembly information management. Technologies such as collision detection, stereo vision, virtual hands, and machine vision are integrated in the system.

  • Vision Guided Robotic Grasping

I worked as the primary developer of the vision subsystem of the Robotic Grasping project. The target object is detected using machine vision, the pose of the target relative to the robot hand is calculated, and this pose information guides the smart hand in grasping the object.

  • Object Detection

Object detection using significant-gradient template matching. The prototype system was implemented on the Matlab platform. By integrating C++ into Matlab, the execution time of object recognition was reduced from 12 s to 200 ms, and it was further optimized to 30 ms using SSE 4.2 instructions.
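The essence of gradient-based template matching can be sketched as follows (a minimal, unoptimized NumPy illustration of the general technique, not the project's "significant gradient" implementation; all names are made up):

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def match_template(image, template):
    """Slide the template's gradient map over the image's gradient map
    and return the (row, col) offset with the highest correlation."""
    gi, gt = gradient_magnitude(image), gradient_magnitude(template)
    th, tw = gt.shape
    best, best_pos = -np.inf, (0, 0)
    for r in range(gi.shape[0] - th + 1):
        for c in range(gi.shape[1] - tw + 1):
            score = np.sum(gi[r:r+th, c:c+tw] * gt)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Matching on gradients rather than raw intensities gives robustness to illumination changes; the brute-force double loop here is what a C++/SIMD implementation accelerates.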

Bioinformatics Institute (BII), a member of A*STAR's Biomedical Sciences Institutes.
30 Biopolis Street, #07-01 Matrix, Singapore 138671