An Efficient 3D Human Pose Retrieval and Reconstruction from 2D Image-Based Landmarks

In: Sensors (Apr. 2021), 21:7
 

Abstract

We propose an efficient and novel architecture for 3D articulated human pose retrieval and reconstruction from 2D landmarks extracted from a 2D synthetic image, an annotated 2D image, an in-the-wild real RGB image, or even a hand-drawn sketch. Given 2D joint positions in a single image, we devise a data-driven framework to infer the corresponding 3D human pose. To this end, we first normalize 3D human poses from a Motion Capture (MoCap) dataset by eliminating translation, orientation, and skeleton-size discrepancies, and then build a knowledge base by projecting a subset of joints of the normalized 3D poses onto 2D image planes using a variety of virtual cameras. With this approach, we not only transform the 3D pose space into a normalized 2D pose space but also resolve the 2D-3D cross-domain retrieval task efficiently. The proposed architecture searches a MoCap dataset for poses that are near a given 2D query pose in a dedicated feature space built from specific joint sets. These retrieved poses are then used to construct a weak-perspective camera and a final 3D posture under that camera model which minimizes the reconstruction error. To estimate the unknown camera parameters, we introduce a nonlinear, two-fold method: we exploit the retrieved similar poses and the viewing directions at which the MoCap dataset was sampled to minimize the projection error. Finally, we evaluate our approach thoroughly on a large number of heterogeneous 2D examples generated synthetically, 2D images with ground truth, a variety of real in-the-wild internet images, and a proof of concept using 2D hand-drawn sketches of human poses. We conduct a pool of experiments to perform a quantitative study on the PARSE dataset. We also show that the proposed system yields competitive, convincing results in comparison to other state-of-the-art methods.
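The processing steps described above (normalizing MoCap poses, projecting them through virtual cameras, and fitting a weak-perspective camera to a 2D query) can be sketched in a few lines of NumPy. This is an illustrative reconstruction under simplified assumptions, not the authors' implementation: it assumes the root joint sits at index 0, models virtual cameras as orthographic views parameterized by azimuth and elevation, and fits only a global scale and 2D translation for the weak-perspective model.

```python
import numpy as np

def normalize_pose(joints3d):
    """Center a (J, 3) pose at the root joint (assumed index 0) and
    scale it to unit size, removing translation and skeleton-size
    discrepancies. Orientation removal is omitted here for brevity."""
    centered = joints3d - joints3d[0]
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale

def project_virtual_camera(joints3d, azimuth, elevation):
    """Orthographically project a normalized pose onto the image plane
    of a virtual camera given by azimuth/elevation angles (radians)."""
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    Ry = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])  # yaw
    Rx = np.array([[1, 0, 0], [0, ce, -se], [0, se, ce]])  # tilt
    rotated = joints3d @ (Rx @ Ry).T
    return rotated[:, :2]  # drop depth -> 2D landmarks

def fit_weak_perspective(joints3d, joints2d):
    """Least-squares fit of scale s and translation (tx, ty) so that
    s * X[:, :2] + t approximates the observed 2D joints."""
    J = len(joints3d)
    A = np.zeros((2 * J, 3))
    A[0::2, 0] = joints3d[:, 0]; A[0::2, 1] = 1.0  # u = s*x + tx
    A[1::2, 0] = joints3d[:, 1]; A[1::2, 2] = 1.0  # v = s*y + ty
    b = joints2d.reshape(-1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[0], sol[1:]
```

In a retrieval setting along these lines, the projections of all normalized MoCap poses would form the 2D knowledge base, a query's nearest neighbors would be found in that space, and `fit_weak_perspective` would then align each retrieved 3D candidate with the query landmarks.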

Keywords: 3D articulated pose estimation, 3D human pose retrieval, Feature sets, Knowledge-base, Motion capture, Optimization

Download paper

BibTeX

@ARTICLE{yasin-2021a,
    author = {Yasin, Hashim and Kr{\"u}ger, Bj{\"o}rn},
     title = {An Efficient 3D Human Pose Retrieval and Reconstruction from 2D Image-Based Landmarks},
   journal = {Sensors},
    volume = {21},
    number = {7},
      year = {2021},
     month = apr,
  keywords = {3D articulated pose estimation, 3D human pose retrieval, Feature sets, Knowledge-base, Motion
              capture, Optimization},
  abstract = {We propose an efficient and novel architecture for 3D articulated human pose retrieval and
              reconstruction from 2D landmarks extracted from a 2D synthetic image, an annotated 2D image, an
              in-the-wild real RGB image or even a hand-drawn sketch. Given 2D joint positions in a single image,
              we devise a data-driven framework to infer the corresponding 3D human pose. To this end, we first
              normalize 3D human poses from Motion Capture (MoCap) dataset by eliminating translation,
              orientation, and the skeleton size discrepancies from the poses and then build a knowledge-base by
              projecting a subset of joints of the normalized 3D poses onto 2D image-planes by fully exploiting a
              variety of virtual cameras. With this approach, we not only transform 3D pose space to the
              normalized 2D pose space but also resolve the 2D-3D cross-domain retrieval task efficiently. The
              proposed architecture searches for poses from a MoCap dataset that are near to a given 2D query pose
              in a definite feature space made up of specific joint sets. These retrieved poses are then used to
              construct a weak perspective camera and a final 3D posture under the camera model that minimizes the
              reconstruction error. To estimate unknown camera parameters, we introduce a nonlinear, two-fold
              method. We exploit the retrieved similar poses and the viewing directions at which the MoCap dataset
              was sampled to minimize the projection error. Finally, we evaluate our approach thoroughly on a
              large number of heterogeneous 2D examples generated synthetically, 2D images with ground-truth, a
              variety of real in-the-wild internet images, and a proof of concept using 2D hand-drawn sketches of
              human poses. We conduct a pool of experiments to perform a quantitative study on PARSE dataset. We
              also show that the proposed system yields competitive, convincing results in comparison to other
              state-of-the-art methods.},
      issn = {1424-8220},
       url = {https://www.mdpi.com/1424-8220/21/7/2415},
       doi = {10.3390/s21072415}
}