Two-in-One Refinement for Interactive Segmentation

Soumajit Majumder, Abhinav Rai, Ansh Khurana, and Angela Yao
In Proc. 31st British Machine Vision Conference (BMVC20), 2020
 

Abstract

Deep convolutional neural networks are now mainstream for click-based interactive image segmentation. Most frameworks refine false negative and false positive regions via a succession of positive and negative clicks placed centrally in these regions. We propose a simple yet intuitive two-in-one refinement strategy that places clicks on the boundary of the object of interest. Boundary clicks are a strong cue for extracting the object of interest, and we find that they are much more effective in correcting erroneous segmentation masks. In addition, we propose a boundary-aware loss that encourages segmentation masks to respect instance boundaries. We place our new refinement scheme and loss formulation within a task-specialized segmentation framework and achieve state-of-the-art performance on four standard datasets: Berkeley, Pascal VOC 2012, DAVIS, and MS COCO. We exceed competing methods by 6.5%, 9.4%, 10.5% and 2.5%, respectively.
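
The paper's exact click encoding and loss formulation are not reproduced on this page. The minimal sketch below only illustrates the two ingredients the abstract names, under stated assumptions: boundary clicks encoded as a Gaussian guidance map, and a binary cross-entropy weighted more heavily near the ground-truth instance boundary (computed with a distance transform). The function names (boundary_click_map, boundary_aware_bce) and all hyperparameters (sigma, boundary_weight, band) are illustrative choices, not the authors' implementation.

# Hypothetical sketch (not the authors' released code): encode boundary clicks as a
# Gaussian guidance map and weight the per-pixel BCE more heavily near the instance
# boundary, obtained from a distance transform of the ground-truth mask.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt


def boundary_click_map(clicks, height, width, sigma=10.0):
    """Encode (y, x) boundary clicks as a single-channel Gaussian guidance map."""
    yy, xx = np.mgrid[0:height, 0:width]
    gmap = np.zeros((height, width), dtype=np.float64)
    for cy, cx in clicks:
        gmap = np.maximum(gmap, np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2)))
    return torch.from_numpy(gmap.astype(np.float32))[None, None]  # shape (1, 1, H, W)


def boundary_aware_bce(logits, target, boundary_weight=5.0, band=5.0):
    """BCE with extra weight on pixels within `band` pixels of the instance boundary.

    `target` is a (1, 1, H, W) float tensor with values in {0, 1}.
    """
    mask = target[0, 0].cpu().numpy().astype(bool)
    # Distance of every pixel to the mask boundary (inside distance + outside distance;
    # one of the two terms is zero at each pixel).
    dist = distance_transform_edt(mask) + distance_transform_edt(~mask)
    weights = 1.0 + boundary_weight * (dist < band).astype(np.float32)
    weights = torch.from_numpy(weights).float()[None, None].to(logits.device)
    return F.binary_cross_entropy_with_logits(logits, target, weight=weights)


if __name__ == "__main__":
    h, w = 64, 64
    target = torch.zeros(1, 1, h, w)
    target[:, :, 16:48, 16:48] = 1.0                            # square ground-truth instance
    guidance = boundary_click_map([(16, 32), (48, 32)], h, w)   # two boundary clicks
    logits = torch.randn(1, 1, h, w, requires_grad=True)        # stand-in network output
    loss = boundary_aware_bce(logits, target)
    loss.backward()
    print(f"guidance max = {guidance.max().item():.3f}, loss = {loss.item():.3f}")

The guidance map would typically be concatenated with the RGB input as an extra channel; the "two-in-one" aspect of the paper (a single boundary click correcting both false positive and false negative regions) is a property of where the clicks are placed, not of this sketch.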

Bibtex

@INPROCEEDINGS{majumder-2020-two,
     author = {Majumder, Soumajit and Rai, Abhinav and Khurana, Ansh and Yao, Angela},
      title = {Two-in-One Refinement for Interactive Segmentation},
  booktitle = {Proc. 31st British Machine Vision Conference (BMVC20)},
       year = {2020},
   abstract = {Deep convolutional neural networks are now mainstream for click-based interactive image
               segmentation. Most frameworks refine false negative and false positive regions via a
               succession of positive and negative clicks placed centrally in these regions. We propose
               a simple yet intuitive two-in-one refinement strategy that places clicks on the boundary
               of the object of interest. Boundary clicks are a strong cue for extracting the object of
               interest, and we find that they are much more effective in correcting erroneous
               segmentation masks. In addition, we propose a boundary-aware loss that encourages
               segmentation masks to respect instance boundaries. We place our new refinement scheme and
               loss formulation within a task-specialized segmentation framework and achieve
               state-of-the-art performance on four standard datasets: Berkeley, Pascal VOC 2012, DAVIS,
               and MS COCO. We exceed competing methods by 6.5%, 9.4%, 10.5% and 2.5%, respectively.}
}