Interactive Steering of Mesh Animations

In Proceedings of the 2012 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Lausanne, Switzerland, July 2012
 

Abstract

Creating geometrically detailed mesh animations is an involved and resource-intensive process in digital content creation. In this work we present a method to rapidly combine available sparse motion capture data with existing mesh sequences to produce a large variety of new animations. The key idea is to model shape changes correlated to the pose of the animated object via a part-based statistical shape model. We observe that compact linear models suffice for a segmentation into nearly rigid parts. The same segmentation further guides the parameterization of the pose, which is learned in conjunction with the marker movement. We show that predicting shape deformations from examples in this way presents an attractive alternative to traditional blend-skinning approaches in animating articulated models. Beyond the inherently high geometric detail, further benefits of the presented method arise from its robustness against errors in segmentation and pose parameterization. Due to the efficiency of both the learning and synthesis phases, our model allows virtual avatars to be steered interactively from a few markers extracted from video data or from input devices like the Kinect sensor.
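The core idea of driving shape coefficients from sparse marker data can be illustrated with a toy sketch. The snippet below is not the authors' implementation: it simply fits a linear least-squares map from flattened marker positions to the shape coefficients of one nearly rigid part (a hypothetical setup with made-up dimensions), then predicts coefficients for a new marker frame, mirroring the learning and synthesis phases described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 200 training frames, 8 markers (3D),
# 5 shape coefficients for one nearly rigid part.
n_frames, n_markers, n_coeffs = 200, 8, 5

# Training data: marker positions flattened to a feature vector per frame,
# paired with the part's shape coefficients for the same frames.
# Here the "ground truth" relation is synthetic, for illustration only.
X = rng.standard_normal((n_frames, n_markers * 3))
W_true = rng.standard_normal((n_markers * 3, n_coeffs))
Y = X @ W_true + 0.01 * rng.standard_normal((n_frames, n_coeffs))

# Learning phase: fit the linear map by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Synthesis phase: predict shape coefficients for a new marker frame.
x_new = rng.standard_normal(n_markers * 3)
y_pred = x_new @ W
print(y_pred.shape)  # one coefficient vector for the part
```

In a real system each nearly rigid part would get its own compact linear shape model, and the predicted coefficients would reconstruct detailed per-part geometry rather than abstract vectors.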

Images

Download paper

Additional material

  • Video (AVI video, 132 MB)

Bibtex

@INPROCEEDINGS{voegele2012,
     author = {V{\"o}gele, Anna and Hermann, Max and Kr{\"u}ger, Bj{\"o}rn and Klein, Reinhard},
      title = {Interactive Steering of Mesh Animations},
  booktitle = {2012 ACM SIGGRAPH/Eurographics Symposium on Computer Animation},
       year = {2012},
      month = jul,
   location = {Lausanne, Switzerland},
   abstract = {Creating geometrically detailed mesh animations is an involved and resource-intensive process in
               digital content creation. In this work we present a method to rapidly combine available sparse
               motion capture data with existing mesh sequences to produce a large variety of new animations. The
               key idea is to model shape changes correlated to the pose of the animated object via a part-based
               statistical shape model. We observe that compact linear models suffice for a segmentation into
               nearly rigid parts. The same segmentation further guides the parameterization of the pose, which is
               learned in conjunction with the marker movement. We show that predicting shape deformations from
               examples in this way presents an attractive alternative to traditional blend-skinning approaches in
               animating articulated models. Beyond the inherently high geometric detail, further benefits of the
               presented method arise from its robustness against errors in segmentation and pose parameterization.
               Due to the efficiency of both the learning and synthesis phases, our model allows virtual avatars
               to be steered interactively from a few markers extracted from video data or from input devices
               like the Kinect sensor.}
}