Fast Video Object Segmentation With Temporal Aggregation Network and Dynamic Template Matching

YouTube-VOS: A Large-Scale Video Object Segmentation Benchmark

ArXiv, 2018

Learning long-term spatial-temporal features is critical for many video analysis tasks. However, existing video segmentation methods predominantly rely on static image segmentation techniques, and methods that capture temporal dependency for segmentation have to depend on pretrained optical flow models, leading to suboptimal solutions for the problem. End-to-end sequential learning to explore spatial-temporal features for video segmentation is largely limited by the scale of available video segmentation datasets, i.e., even the largest video segmentation dataset only contains 90 short video clips. To solve this problem, we build a new large-scale video object segmentation dataset called the YouTube Video Object Segmentation dataset (YouTube-VOS). Our dataset contains 4,453 YouTube video clips and 94 object categories. This is by far the largest video object segmentation dataset to our knowledge and has been released at this http URL. We further evaluate several existing state-of-the-art vid...
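As a rough illustration of what end-to-end sequential learning over frames can look like, the sketch below runs a small recurrent (ConvLSTM-style) cell over per-frame features to produce a mask per frame. The encoder, cell, and layer sizes are illustrative assumptions, not the architecture proposed alongside YouTube-VOS.

```python
# Illustrative sketch (not the paper's architecture): a tiny recurrent
# segmentation model that carries a spatial hidden state across frames,
# in the spirit of end-to-end sequential learning over video.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        # One convolution produces the input/forget/output/candidate gates.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size=3, padding=1)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class RecurrentSegmenter(nn.Module):
    def __init__(self, hid_ch=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, hid_ch, 3, padding=1), nn.ReLU())
        self.rnn = ConvLSTMCell(hid_ch, hid_ch)
        self.head = nn.Conv2d(hid_ch, 1, 1)  # per-pixel object logit

    def forward(self, clip):  # clip: (T, 3, H, W), one object per clip
        T, _, H, W = clip.shape
        state = (clip.new_zeros(1, self.rnn.hid_ch, H, W),
                 clip.new_zeros(1, self.rnn.hid_ch, H, W))
        masks = []
        for t in range(T):
            out, state = self.rnn(self.encoder(clip[t:t + 1]), state)
            masks.append(torch.sigmoid(self.head(out)))
        return torch.cat(masks)  # (T, 1, H, W) soft masks

masks = RecurrentSegmenter()(torch.rand(5, 3, 64, 64))
```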

RVOS: End-To-End Recurrent Network for Video Object Segmentation

YouTube-VOS: Sequence-to-Sequence Video Object Segmentation

Detecting Temporally Consistent Objects in Videos through Object Class Label Propagation

Object proposals for detecting moving or static video objects need to address issues such as speed, memory complexity and temporal consistency. We propose an efficient Video Object Proposal (VOP) generation method and show its efficacy in learning a better video object detector. A deep-learning based video object detector learned using the proposed VOP achieves state-of-the-art detection performance on the Youtube-Objects dataset. We further propose a clustering of VOPs which can efficiently be used for detecting objects in video in a streaming fashion. As opposed to applying per-frame convolutional neural network (CNN) based object detection, our proposed method called Objects in Video Enabler thRough LAbel Propagation (OVERLAP) needs to classify only a small fraction of all candidate proposals in every video frame through streaming clustering of object proposals and class-label propagation. Source code for VOP clustering is available at https://github.com/subtri/streaming_VOP_clustering.
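A minimal sketch of the classify-few, propagate-to-the-rest idea described above: proposals are grouped by feature similarity, one representative per group is classified, and its label is propagated to the remaining proposals. Batch k-means stands in here for the paper's streaming clustering, and `classify_fn`, the feature input, and the cluster count are assumed placeholders rather than OVERLAP's actual components.

```python
# Sketch of label propagation over clustered object proposals.
import numpy as np
from sklearn.cluster import KMeans

def classify_by_propagation(proposal_feats, classify_fn, n_clusters=10):
    """proposal_feats: (N, D) features of candidate video object proposals.
    classify_fn: callable mapping one feature vector to a class label.
    Returns a label per proposal while calling classify_fn only n_clusters times.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(proposal_feats)
    labels = np.empty(len(proposal_feats), dtype=object)
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        # Classify only the proposal closest to the cluster centre ...
        rep = members[np.argmin(
            np.linalg.norm(proposal_feats[members] - km.cluster_centers_[k], axis=1))]
        # ... and propagate its label to every proposal in the cluster.
        labels[members] = classify_fn(proposal_feats[rep])
    return labels
```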

Make One-Shot Video Object Segmentation Efficient Again

ArXiv, 2020

Video object segmentation (VOS) describes the task of segmenting a set of objects in each frame of a video. In the semi-supervised setting, the first mask of each object is provided at test time. Following the one-shot principle, fine-tuning VOS methods train a segmentation model separately on each given object mask. However, recently the VOS community has deemed such test-time optimization and its impact on the test runtime unfeasible. To mitigate the inefficiencies of previous fine-tuning approaches, we present efficient One-Shot Video Object Segmentation (e-OSVOS). In contrast to most VOS approaches, e-OSVOS decouples the object detection task and predicts only local segmentation masks by applying a modified version of Mask R-CNN. The one-shot test runtime and performance are optimized without a laborious and handcrafted hyperparameter search. To this end, we meta-learn the model initialization and learning rates for the test-time optimization. To achieve optimal learning be...
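The one-shot test-time optimization described above can be sketched roughly as fine-tuning on the single provided mask, with per-parameter step sizes supplied by meta-learning. The model interface, the `lrs` dictionary, and the loss below are assumptions for illustration, not the e-OSVOS implementation.

```python
# Rough sketch of one-shot test-time fine-tuning with meta-learned
# per-parameter learning rates.
import torch
import torch.nn.functional as F

def one_shot_finetune(model, lrs, first_frame, first_mask, steps=10):
    """model: segmentation network (meta-learned initialization assumed).
    lrs: dict mapping parameter name -> meta-learned learning rate tensor.
    first_frame: (1, 3, H, W); first_mask: (1, 1, H, W) first-frame ground truth.
    Fine-tunes the model on the single provided mask, as in the one-shot setting.
    """
    for _ in range(steps):
        logits = model(first_frame)
        loss = F.binary_cross_entropy_with_logits(logits, first_mask)
        grads = torch.autograd.grad(loss, list(model.parameters()))
        with torch.no_grad():
            for (name, p), g in zip(model.named_parameters(), grads):
                p -= lrs[name] * g  # per-parameter step size from meta-learning
    return model
```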

Two-Level Temporal Relation Model for Online Video Instance Segmentation

VideoMatch: Matching based Video Object Segmentation

ReMOTS: Refining Multi-Object Tracking and Segmentation (1st Place Solution for MOTS 2020 Challenge 1)

MSN: Efficient Online Mask Selection Network for Video Instance Segmentation

2021

In this work we present a novel solution for Video Instance Segmentation (VIS), that is, automatically generating instance-level segmentation masks along with object classes and tracking them in a video. Our method improves the masks from the segmentation and propagation branches in an online manner using the Mask Selection Network (MSN), hence limiting the noise accumulation during mask tracking. We propose an effective design of MSN using a patch-based convolutional neural network. The network is able to distinguish very subtle differences between the masks and accurately choose the better mask out of the associated masks. Further, we make use of temporal consistency and process the video sequences in both forward and reverse order as a post-processing step to recover lost objects. The proposed method can be used to adapt any video object segmentation method for the task of VIS. Our method achieves a score of 49.1 mAP on the 2021 YouTube-VIS Challenge and was ranked third place among...
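A minimal sketch of selecting between two candidate masks with a small scoring network, assuming one mask comes from a segmentation branch and one from a propagation branch. The network layout, inputs, and scoring rule below are illustrative assumptions rather than the MSN design from the paper.

```python
# Sketch of choosing the better of two candidate masks with a small scoring CNN.
import torch
import torch.nn as nn

class MaskScorer(nn.Module):
    """Scores how well one candidate mask fits an image patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),   # RGB + mask
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, patch, mask):
        return self.net(torch.cat([patch, mask], dim=1))  # (B, 1) quality score

def select_mask(scorer, patch, seg_mask, prop_mask):
    """Keep whichever of the segmentation / propagation masks scores higher."""
    s_seg = scorer(patch, seg_mask)
    s_prop = scorer(patch, prop_mask)
    return torch.where((s_seg >= s_prop).view(-1, 1, 1, 1), seg_mask, prop_mask)
```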

Single Shot Video Object Detector