A Dual-Source Approach for 3D Pose Estimation from a Single Image

Hashim Yasin, Umar Iqbal, Björn Krüger, Andreas Weber, and Juergen Gall
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, June 2016
 

Abstract

One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data. In particular, collecting large amounts of training data that contain unconstrained images and are annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of images with annotated 2D poses and the second source consists of accurate 3D motion capture data. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval. In our experiments, we show that our approach achieves state-of-the-art results and is even competitive when the skeleton structures of the two sources differ substantially.
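To give an intuition for the retrieval step described above, the following is a minimal sketch (not the authors' implementation) of how 3D poses from a motion capture source could be retrieved for a 2D pose estimated from an image: each candidate 3D pose is projected to 2D under a set of hypothetical virtual camera directions, and the normalized 2D joint layouts are compared to the query. All function names, the orthographic projection, and the view sampling are assumptions made purely for illustration.

import numpy as np

def normalize_2d(pose_2d):
    """Center a (J, 2) pose at its mean joint and scale it to unit Frobenius norm."""
    centered = pose_2d - pose_2d.mean(axis=0, keepdims=True)
    norm = np.linalg.norm(centered)
    return centered / norm if norm > 0 else centered

def orthographic_views(pose_3d, num_views=12):
    """Project a (J, 3) pose to 2D from several azimuth angles (orthographic camera)."""
    views = []
    for theta in np.linspace(0.0, 2.0 * np.pi, num_views, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, 0.0, s],
                        [0.0, 1.0, 0.0],
                        [-s, 0.0, c]])
        rotated = pose_3d @ rot.T
        views.append(rotated[:, :2])  # drop the depth coordinate (orthographic projection)
    return views

def retrieve_3d_poses(query_2d, mocap_3d_poses, k=5):
    """Return indices of the k mocap poses whose best 2D projection matches the query."""
    q = normalize_2d(query_2d)
    scores = []
    for pose_3d in mocap_3d_poses:
        dists = [np.linalg.norm(q - normalize_2d(v)) for v in orthographic_views(pose_3d)]
        scores.append(min(dists))  # keep the best-matching virtual view per mocap pose
    return np.argsort(scores)[:k]

# Usage with random stand-in data (16 joints):
rng = np.random.default_rng(0)
mocap = [rng.standard_normal((16, 3)) for _ in range(100)]
query = rng.standard_normal((16, 2))
print(retrieve_3d_poses(query, mocap, k=3))

The sketch only illustrates the general idea of matching 2D evidence against projected motion capture data; the paper's actual retrieval is described as efficient and robust and is detailed in the publication itself.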

Project page: http://pages.iai.uni-bonn.de/iqbal_umar/ds3dpose/


Download Paper

Bibtex

@INPROCEEDINGS{yasin-2016,
     author = {Yasin, Hashim and Iqbal, Umar and Kr{\"u}ger, Bj{\"o}rn and Weber, Andreas and Gall, Juergen},
      title = {A Dual-Source Approach for 3D Pose Estimation from a Single Image},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition 2016 (CVPR)},
       year = {2016},
      month = jun,
   location = {Las Vegas, USA},
   abstract = {One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient
               training data. In particular, collecting large amounts of training data that contain unconstrained
               images and are annotated with accurate 3D poses is infeasible. We therefore propose to use two
               independent training sources. The first source consists of images with annotated 2D poses and the
               second source consists of accurate 3D motion capture data. To integrate both sources, we propose a
               dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval.
               In our experiments, we show that our approach achieves state-of-the-art results and is even
               competitive when the skeleton structures of the two sources differ substantially.}
}