Zero-Shot Anticipation for Instructional Activities

In: Proceedings of the IEEE International Conference on Computer Vision (2019), pp. 862–871

Abstract

How can we teach a robot to predict what will happen next for an activity it has never seen before? We address the problem of zero-shot anticipation by presenting a hierarchical model that generalizes instructional knowledge from large-scale text corpora and transfers the knowledge to the visual domain. Given a portion of an instructional video, our model predicts coherent and plausible actions multiple steps into the future, all in rich natural language. To demonstrate the anticipation capabilities of our model, we introduce the Tasty Videos dataset, a collection of 2511 recipes for zero-shot learning, recognition and anticipation.
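The full architecture is described in the paper. As a loose illustration only, the sketch below shows the general hierarchical idea in minimal PyTorch: a step-level encoder and decoder wrapped around a recipe-level RNN that anticipates the next step. All class and variable names here are our own illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class StepEncoder(nn.Module):
    # Encodes one instruction step (token ids) into a fixed vector.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        _, (h, _) = self.lstm(self.embed(tokens))
        return h[-1]                            # (batch, hidden_dim)

class RecipeRNN(nn.Module):
    # Recipe-level RNN: reads step vectors in order; its output at step t
    # serves as the prediction of the vector for step t+1.
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, step_vecs):               # (batch, n_steps, hidden_dim)
        out, _ = self.lstm(step_vecs)
        return out

class StepDecoder(nn.Module):
    # Decodes a predicted step vector back into natural-language tokens.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim + hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, step_vec, tokens):        # teacher forcing at train time
        ctx = step_vec.unsqueeze(1).expand(-1, tokens.size(1), -1)
        out, _ = self.lstm(torch.cat([self.embed(tokens), ctx], dim=-1))
        return self.out(out)                    # (batch, seq_len, vocab)

# Toy usage: 2 recipes, 4 steps each, 12 tokens per step.
vocab = 5000
enc, rnn, dec = StepEncoder(vocab), RecipeRNN(), StepDecoder(vocab)
steps = torch.randint(0, vocab, (2, 4, 12))
vecs = torch.stack([enc(steps[:, t]) for t in range(4)], dim=1)
pred = rnn(vecs)                                # pred[:, t] anticipates step t+1
logits = dec(pred[:, 0], steps[:, 1])           # decode the anticipated 2nd step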

Tasty Videos Dataset

Images

Download paper

Supplementary material

Bibtex

@INPROCEEDINGS{sener2019zero,
     author = {Sener, Fadime and Yao, Angela},
      title = {Zero-Shot Anticipation for Instructional Activities},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
       year = {2019},
      pages = {862--871},
   abstract = {How can we teach a robot to predict what will happen next for an activity it has never seen before?
               We address the problem of zero-shot anticipation by presenting a hierarchical model that generalizes
               instructional knowledge from large-scale text corpora and transfers the knowledge to the visual
               domain. Given a portion of an instructional video, our model predicts coherent and plausible actions
               multiple steps into the future, all in rich natural language. To demonstrate the anticipation
               capabilities of our model, we introduce the Tasty Videos dataset, a collection of 2511 recipes for
               zero-shot learning, recognition and anticipation.}
}