Make One-Shot Video Object Segmentation Efficient Again

YouTube-VOS: A Large-Scale Video Object Segmentation Benchmark

ArXiv, 2018

Learning long-term spatial-temporal features is critical for many video analysis tasks. However, existing video segmentation methods predominantly rely on static image segmentation techniques, and methods capturing temporal dependency for segmentation have to depend on pretrained optical flow models, leading to suboptimal solutions for the problem. End-to-end sequential learning to explore spatial-temporal features for video segmentation is largely limited by the scale of available video segmentation datasets, i.e., even the largest video segmentation dataset only contains 90 short video clips. To solve this problem, we build a new large-scale video object segmentation dataset called the YouTube Video Object Segmentation dataset (YouTube-VOS). Our dataset contains 4,453 YouTube video clips and 94 object categories. This is by far the largest video object segmentation dataset to our knowledge and has been released at this http URL. We further evaluate several existing state-of-the-art vid...

TRICKVOS: A Bag of Tricks for Video Object Segmentation

Delving Deep Into Many-to-Many Attention for Few-Shot Video Object Segmentation

2021

This paper tackles the task of Few-Shot Video Object Segmentation (FSVOS), i.e., segmenting objects in the query videos with a certain class specified in a few labeled support images. The key is to model the relationship between the query videos and the support images for propagating the object information. This is a many-to-many problem and often relies on full-rank attention, which is computationally intensive. In this paper, we propose a novel Domain Agent Network (DAN), breaking down the full-rank attention into two smaller ones. We consider a single frame of the query video as the domain agent, bridging between the support images and the query video. Our DAN allows linear space and time complexity, as opposed to the original quadratic form, with no loss of performance. In addition, we introduce a learning strategy combining meta-learning with online learning to further improve the segmentation accuracy. We build an FSVOS benchmark on the YouTube-VIS dataset and conduct experi...
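The two-stage attention the abstract describes can be pictured with a short sketch: instead of attending from every query-video location directly to every support-image location (which grows quadratically with video length), the agent frame first attends to the support images, and the query frames then attend only to the agent. The formulation below is a minimal, assumed one; tensor shapes, the scale factor, and function names are illustrative rather than the paper's implementation.

```python
# Minimal sketch of two-stage "domain agent" attention (illustrative, not DAN itself).
import torch
import torch.nn.functional as F

def agent_attention(support_feats, agent_feats, query_feats, scale=None):
    """
    support_feats: (S, C) flattened support-image features
    agent_feats:   (A, C) features of the single agent frame
    query_feats:   (T, C) flattened features of all query-video frames
    Returns query features enriched with support information, shape (T, C).
    """
    C = support_feats.shape[-1]
    scale = scale or C ** -0.5

    # Stage 1: the agent frame attends to the support images (A x S attention).
    attn_sa = F.softmax(agent_feats @ support_feats.t() * scale, dim=-1)
    agent_ctx = attn_sa @ support_feats                      # (A, C)

    # Stage 2: the query frames attend to the agent frame (T x A attention).
    attn_qa = F.softmax(query_feats @ agent_feats.t() * scale, dim=-1)
    return attn_qa @ agent_ctx                               # (T, C)

# Cost is O(A*S + T*A) rather than O(T*S) for direct many-to-many attention;
# since the agent is a single frame, this stays linear in the video length T.
```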


MSN: Efficient Online Mask Selection Network for Video Instance Segmentation

2021

In this work we present a novel solution for Video Instance Segmentation (VIS), that is, automatically generating instance-level segmentation masks along with object classes and tracking them in a video. Our method improves the masks from the segmentation and propagation branches in an online manner using the Mask Selection Network (MSN), hence limiting noise accumulation during mask tracking. We propose an effective design of MSN using a patch-based convolutional neural network. The network is able to distinguish very subtle differences between the masks and accurately choose the better of the associated masks. Further, we make use of temporal consistency and process the video sequences in both forward and reverse order as a post-processing step to recover lost objects. The proposed method can be used to adapt any video object segmentation method for the task of VIS. Our method achieves a score of 49.1 mAP on the 2021 YouTube-VIS Challenge and was ranked third place among...
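As a rough picture of the selection step, the sketch below scores each candidate mask (one from the segmentation branch, one from the propagation branch) together with its image patch and keeps the higher-scoring one. The layer layout, channel counts, and helper names are assumptions for illustration, not the MSN architecture itself.

```python
# Illustrative patch-based mask-selection scorer in the spirit of MSN.
import torch
import torch.nn as nn

class MaskSelector(nn.Module):
    def __init__(self, in_ch=4):                  # assumed input: RGB patch + 1 mask channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),                      # quality score for this mask
        )

    def forward(self, patch, mask):
        return self.net(torch.cat([patch, mask], dim=1)).squeeze(-1)

def select_mask(selector, patch, mask_seg, mask_prop):
    """Return whichever candidate mask the selector scores higher, per sample."""
    s_seg = selector(patch, mask_seg)
    s_prop = selector(patch, mask_prop)
    keep_seg = (s_seg >= s_prop).view(-1, 1, 1, 1)
    return torch.where(keep_seg, mask_seg, mask_prop)
```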

RVOS: End-To-End Recurrent Network for Video Object Segmentation

Meta-Learning Deep Visual Words for Fast Video Object Segmentation

2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020

Personal robots and driverless cars need to be able to operate in novel environments and thus quickly and efficiently learn to recognise new object classes. We address this problem by considering the task of video object segmentation. Previous accurate methods for this task finetune a model using the first annotated frame, and/or use additional inputs such as optical flow and complex post-processing. In contrast, we develop a fast, causal algorithm that requires no finetuning, auxiliary inputs or post-processing, and segments a variable number of objects in a single forward-pass. We represent an object with clusters, or "visual words", in the embedding space, which correspond to object parts in the image space. This allows us to robustly match to the reference objects throughout the video, because although the global appearance of an object changes as it undergoes occlusions and deformations, the appearance of more local parts may stay consistent. We learn these visual wor...
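A minimal sketch of the part-based matching idea, assuming plain k-means in place of the paper's meta-learned clustering: the reference object's pixel embeddings are grouped into a few "visual words", and each query pixel is scored by its distance to the nearest word, so the match survives even when only some parts of the object remain visible. All names and parameters below are illustrative.

```python
# Illustrative sketch of matching against part-level "visual words".
import torch
from sklearn.cluster import KMeans

def build_visual_words(ref_embeddings, k=8):
    """ref_embeddings: (N, C) embeddings of pixels inside the reference mask."""
    km = KMeans(n_clusters=k, n_init=10).fit(ref_embeddings.detach().cpu().numpy())
    return torch.as_tensor(km.cluster_centers_, dtype=ref_embeddings.dtype)

def match_query(query_embeddings, words, sigma=1.0):
    """
    query_embeddings: (H*W, C) per-pixel embeddings of a query frame.
    Returns a per-pixel foreground score based on the closest visual word.
    """
    d = torch.cdist(query_embeddings, words)       # (H*W, K) distances to each word
    return torch.exp(-d.min(dim=1).values / sigma) # nearest-part similarity per pixel
```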

OCVOS: Object-Centric Representation for Video Object Segmentation

FSS-1000: A 1000-Class Dataset for Few-Shot Segmentation

Adaptive Memory Management for Video Object Segmentation

Meta-DRN: Meta-Learning for 1-Shot Image Segmentation

2020 IEEE 17th India Council International Conference (INDICON), 2020

Modern deep learning models have revolutionized the field of computer vision. However, a significant drawback of most of these models is that they require a large number of labelled examples to generalize properly. Recent developments in few-shot learning aim to alleviate this requirement. In this paper, we propose a novel lightweight CNN architecture for 1-shot image segmentation. The proposed model is created by taking inspiration from well-performing architectures for semantic segmentation and adapting them to the 1-shot domain. We train our model using 4 meta-learning algorithms that have worked well for image classification and compare the results. For the chosen dataset, our proposed model has a 70% lower parameter count than the benchmark, while achieving better or comparable mean IoU scores with all 4 of the meta-learning algorithms.
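For context, the sketch below shows one MAML-style meta-training step of the kind the abstract refers to: a single inner gradient step on the support image, followed by an outer update from the query loss. The model interface, loss, and learning rates are assumptions for illustration; this is not the Meta-DRN code.

```python
# Illustrative MAML-style meta-training step for 1-shot segmentation.
import torch
import torch.nn.functional as F

def maml_step(model, tasks, meta_opt, inner_lr=0.01):
    """tasks: iterable of (support_img, support_mask, query_img, query_mask) tensors."""
    meta_opt.zero_grad()
    meta_loss = 0.0
    for sup_x, sup_y, qry_x, qry_y in tasks:
        # Inner loop: adapt to the single labelled support example with one gradient step.
        params = dict(model.named_parameters())
        sup_out = torch.func.functional_call(model, params, (sup_x,))
        sup_loss = F.cross_entropy(sup_out, sup_y)
        grads = torch.autograd.grad(sup_loss, list(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}

        # Outer loop: evaluate the adapted weights on the query frame of the same task.
        qry_out = torch.func.functional_call(model, adapted, (qry_x,))
        meta_loss = meta_loss + F.cross_entropy(qry_out, qry_y)

    meta_loss.backward()   # second-order gradients flow back through the inner step
    meta_opt.step()
    return meta_loss.item()
```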
