Entry | Object Co-segmentation |
Definition |
In computer vision, object co-segmentation is a special case of image segmentation, defined as jointly segmenting semantically similar objects in multiple images or video frames[3].

Challenges

It is often challenging to extract segmentation masks of a target/object from a noisy collection of images or video frames, which involves object discovery coupled with segmentation. A noisy collection means that the object/target appears only sporadically in the image set, or disappears intermittently throughout the video of interest.

Dynamic Markov Networks-based methods

A joint object discovery and co-segmentation method based on coupled dynamic Markov networks has been proposed[2], which claims significant improvements in robustness against irrelevant/noisy video frames. A schematic sketch of this kind of coupling is given after the methods overview below.

CNN and LSTM-based methods

In action localization applications, object co-segmentation is also implemented as the Segment-Tube spatio-temporal detector[11]. Inspired by recent spatio-temporal action localization efforts with tubelets (sequences of bounding boxes), Wang et al. present a spatio-temporal action localization detector, Segment-Tube, which consists of sequences of per-frame segmentation masks. The Segment-Tube detector can temporally pinpoint the starting/ending frame of each action category in the presence of preceding/subsequent interference actions in untrimmed videos. At the same time, it produces per-frame segmentation masks instead of bounding boxes, offering superior spatial accuracy over tubelets. This is achieved by alternating iterative optimization between temporal action localization and spatial action segmentation, as sketched below.

The Segment-Tube detector is illustrated with a flowchart in the original paper. The sample input is an untrimmed video containing all frames of a pair figure skating video, with only a portion of these frames belonging to a relevant category (e.g., the DeathSpirals). Initialized with saliency-based image segmentation on individual frames, the method first performs a temporal action localization step with a cascaded 3D CNN and LSTM, pinpointing the starting and ending frames of a target action with a coarse-to-fine strategy. Subsequently, the Segment-Tube detector refines the per-frame spatial segmentation with graph cut, focusing on the relevant frames identified by the temporal action localization step. The optimization alternates between temporal action localization and spatial action segmentation in an iterative manner. Upon practical convergence, the final spatio-temporal action localization results are obtained as a sequence of per-frame segmentation masks with precise starting/ending frames.
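The coupled discovery-and-segmentation idea can be illustrated with a toy sketch. The Python snippet below is a minimal, schematic illustration only: it alternates between a per-frame relevance estimate (discovery) and per-frame masks (segmentation), tied together by a shared appearance model so that noisy or irrelevant frames are down-weighted. The function name coupled_discovery_segmentation, the simple appearance model, and the relevance scoring are illustrative assumptions; they do not reproduce the coupled dynamic Markov network formulation of the cited paper.

    import numpy as np

    def coupled_discovery_segmentation(frames, n_iters=5):
        # Alternate between estimating which frames contain the shared object
        # (discovery) and which pixels belong to it (segmentation), so that
        # noisy / irrelevant frames get down-weighted.  Purely illustrative.
        n = len(frames)
        relevance = np.ones(n)                      # start by trusting every frame
        masks = [f > f.mean() for f in frames]      # crude initial masks (saliency stand-in)

        for _ in range(n_iters):
            # Build a shared appearance model from the currently most relevant frames.
            keep = relevance >= np.median(relevance)
            fg = np.concatenate([f[m] for f, m, k in zip(frames, masks, keep) if k and m.any()])
            mu, sigma = fg.mean(), fg.std() + 1e-6

            # Segmentation step: keep pixels consistent with the shared appearance model.
            masks = [np.abs(f - mu) < 2.0 * sigma for f in frames]

            # Discovery step: frames whose segmented region fits the model stay relevant.
            scores = np.array([np.abs(f[m] - mu).mean() if m.any() else np.inf
                               for f, m in zip(frames, masks)])
            relevance = 1.0 / (1.0 + scores / sigma)   # better fit -> relevance closer to 1

        return relevance, masks

    # Toy usage on random grayscale "frames".
    frames = [np.random.rand(32, 32) for _ in range(10)]
    relevance, masks = coupled_discovery_segmentation(frames)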
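The Segment-Tube alternation can likewise be outlined in toy form. In the sketch below, the cascaded 3D CNN + LSTM temporal scorer and the graph-cut spatial refinement are replaced by trivial stand-ins (temporal_relevance, localize_action and refine_masks are hypothetical helper names), so only the outer alternating loop mirrors the description above; it is not the authors' implementation.

    import numpy as np

    def temporal_relevance(frames, masks):
        # Toy per-frame score: mean intensity inside the current mask.
        return np.array([f[m].mean() if m.any() else 0.0 for f, m in zip(frames, masks)])

    def localize_action(scores, threshold=0.5):
        # Stand-in for coarse-to-fine localization: longest run of frames
        # whose score exceeds the threshold, as a half-open interval.
        best, start = (0, 0), None
        for i, above in enumerate(scores > threshold):
            if above:
                start = i if start is None else start
                if (i + 1) - start > best[1] - best[0]:
                    best = (start, i + 1)
            else:
                start = None
        return best

    def refine_masks(frames, start, end):
        # Graph-cut stand-in: re-threshold only the frames inside the interval,
        # leaving frames outside the localized action empty.
        masks = [np.zeros_like(f, dtype=bool) for f in frames]
        for i in range(start, end):
            masks[i] = frames[i] > frames[i].mean()
        return masks

    def segment_tube(frames, n_iters=3):
        # Alternate temporal localization and spatial segmentation (both simplified).
        masks = [f > f.mean() for f in frames]          # saliency-like initialization
        start, end = 0, len(frames)
        for _ in range(n_iters):
            scores = temporal_relevance(frames, masks)  # temporal step
            start, end = localize_action(scores)        # pinpoint start/end frames
            masks = refine_masks(frames, start, end)    # spatial step on relevant frames
        return (start, end), masks

    # Toy usage on a random untrimmed "video" of 16 frames.
    frames = [np.random.rand(24, 24) for _ in range(16)]
    interval, masks = segment_tube(frames)

In a real system the stand-ins would be replaced by the learned temporal model and an actual graph-cut solver, and the loop would run until the localized interval and the masks stop changing (the "practical convergence" mentioned above).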
References

1. Chen, Ding-Jie; Chen, Hwann-Tzong; Chang, Long-Wen (2012). "Video object cosegmentation". ACM Press, New York. doi:10.1145/2393347.2396317. ISBN 978-1-4503-1089-5.
2. Liu, Ziyi; Wang, Le; Hua, Gang; Zhang, Qilin; Niu, Zhenxing; Wu, Ying; Zheng, Nanning (2018). "Joint Video Object Discovery and Segmentation by Coupled Dynamic Markov Networks". IEEE Transactions on Image Processing 27 (12): 5840–5853. doi:10.1109/tip.2018.2859622. ISSN 1057-7149. https://qilin-zhang.github.io/_pages/pdfs/Joint_Video_Object_Discovery_and_Segmentation_by_Coupled_Dynamic_Markov_Networks.pdf
3. Lee, Yong Jae; Kim, Jaechul; Grauman, Kristen (2011). "Key-segments for video object segmentation". IEEE. doi:10.1109/iccv.2011.6126471. ISBN 978-1-4577-1102-2.
4. Ma, Tianyang; Latecki, Longin Jan (2012). "Maximum weight cliques with mutex constraints for video object segmentation". IEEE CVPR 2012. doi:10.1109/CVPR.2012.6247735.
5. Zhang, Dong; Javed, Omar; Shah, Mubarak (2013). "Video Object Segmentation through Spatially Accurate and Temporally Dense Extraction of Primary Object Regions". IEEE. doi:10.1109/cvpr.2013.87. ISBN 978-0-7695-4989-7.
6. Fragkiadaki, Katerina; Arbelaez, Pablo; Felsen, Panna; Malik, Jitendra (2015). "Learning to segment moving objects in videos". IEEE. doi:10.1109/cvpr.2015.7299035. ISBN 978-1-4673-6964-0.
7. Perazzi, Federico; Wang, Oliver; Gross, Markus; Sorkine-Hornung, Alexander (2015). "Fully Connected Object Proposals for Video Segmentation". IEEE. doi:10.1109/iccv.2015.369. ISBN 978-1-4673-8391-2.
8. Koh, Yeong Jun; Kim, Chang-Su (2017). "Primary Object Segmentation in Videos Based on Region Augmentation and Reduction". IEEE. doi:10.1109/cvpr.2017.784. ISBN 978-1-5386-0457-1.
9. Krähenbühl, Philipp; Koltun, Vladlen (2014). "Geodesic Object Proposals". In: Computer Vision – ECCV 2014. Springer International Publishing, Cham: 725–739. doi:10.1007/978-3-319-10602-1_47. ISBN 978-3-319-10601-4.
10. Xue, Jianru; Wang, Le; Zheng, Nanning; Hua, Gang (2013). "Automatic salient object extraction with contextual cue and its applications to recognition and alpha matting". Pattern Recognition 46 (11): 2874–2889. doi:10.1016/j.patcog.2013.03.028. ISSN 0031-3203.
11. Wang, Le; Duan, Xuhuan; Zhang, Qilin; Niu, Zhenxing; Hua, Gang; Zheng, Nanning (2018). "Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation". Sensors 18 (5): 1657. doi:10.3390/s18051657. ISSN 1424-8220. https://qilin-zhang.github.io/_pages/pdfs/Segment-Tube_Spatio-Temporal_Action_Localization_in_Untrimmed_Videos_with_Per-Frame_Segmentation.pdf Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).