Interactive Stereo Image Segmentation

Ran Ju, Xiangyang Xu, Yang Liu, Tongwei Ren, Gangshan Wu

Nanjing University

Fig. 1. Differences between object extraction on monoscopic images and stereo images. (a) GrabCut is applied to the two images separately. Consistency cannot be well maintained due to the differences between the images and between the user's inputs. (b) Stereo GrabCut is applied to only one image and produces consistent results. The segmentation results are also improved owing to the depth guidance.

Abstract
This paper presents an interactive and consistent object extraction method for stereo images. The extraction task on stereo images differs from that on monoscopic images in two significant ways. First, the segmentation of the two views should be consistent. Second, stereo images carry implicit depth information, which supplies an important cue for object extraction. In this paper, we propose a consistency enforcement method that uses contour correspondence to generate highly consistent segmentations. In addition, we leverage depth information, obtained by stereo matching, to pre-estimate the foreground and background. This pre-estimation is then used to build accurate color statistics models for a graph-cut-based segmentation. To simplify user interaction, we provide an interface similar to GrabCut, which in most cases only requires the user to drag a compact rectangle around the object. Experiments show that our method achieves fast response times, high consistency, and high accuracy, making it well suited to stereo image segmentation.
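To illustrate the idea of the depth-guided pre-estimation feeding color models, here is a minimal, simplified sketch. It is not the paper's implementation: the disparity threshold, the single-Gaussian color models (the paper's GrabCut-style pipeline would use GMMs and a graph cut), and all function names are assumptions made for this example. Pixels with large disparity (near the camera) are pre-labeled foreground, small disparity background, and per-class color statistics then score every pixel.

```python
import numpy as np

def depth_preestimate(disparity, margin=0.1):
    """Pre-label pixels from a disparity map (hypothetical simplification):
    near pixels (high disparity) -> foreground (+1), far pixels -> background
    (-1), pixels near the threshold -> unknown (0)."""
    t = disparity.mean()
    labels = np.zeros(disparity.shape, dtype=np.int8)
    labels[disparity > t * (1 + margin)] = 1
    labels[disparity < t * (1 - margin)] = -1
    return labels

def color_classify(image, labels):
    """Fit one Gaussian per pre-labeled class in RGB space and assign each
    pixel to the class with the higher log-likelihood. This stands in for
    the GMM data term of a graph-cut segmentation; no smoothness term here."""
    pix = image.reshape(-1, 3).astype(np.float64)
    lab = labels.ravel()
    stats = {}
    for cls in (1, -1):
        sel = pix[lab == cls]
        stats[cls] = (sel.mean(axis=0), sel.var(axis=0) + 1e-6)

    def logp(mu, var):
        # Diagonal-Gaussian log-likelihood up to a constant.
        return -0.5 * (((pix - mu) ** 2) / var + np.log(var)).sum(axis=1)

    pred = np.where(logp(*stats[1]) > logp(*stats[-1]), 1, -1)
    return pred.astype(np.int8).reshape(labels.shape)
```

A full pipeline would feed these per-pixel likelihoods into a graph cut as unary terms and then enforce cross-view consistency via contour correspondence, as the paper describes.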


Download
1. Paper.
2. Dataset.

 

Results

References

[1] Rother, C., Kolmogorov, V., Blake, A.: GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics 23(3) (2004) 309–314.

[2] Price, B.L., Cohen, S.: StereoCut: Consistent interactive object selection in stereo image pairs. In: IEEE International Conference on Computer Vision (ICCV) (2011) 1148–1155.
