Video Autoencoder: self-supervised disentanglement of 3D structure and motion
Temporal super-resolution
Our model can be used to increase the frame rate of a video by simply interpolating along the estimated camera trajectory between two consecutive frames.
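As a rough illustration of the idea, assuming the pose estimator outputs relative camera poses as 4x4 matrices (the function, matrix convention, and interpolation scheme below are illustrative assumptions, not the paper's actual code), the in-between viewpoints can be obtained by spherically interpolating the rotation and linearly interpolating the translation:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(pose_a, pose_b, num_mid):
    """Return `num_mid` in-between camera poses.

    pose_a, pose_b: 4x4 camera poses estimated for two consecutive
    frames (a hypothetical output format for the pose estimator).
    Rotation is spherically interpolated; translation is linear.
    """
    key_rots = Rotation.from_matrix(np.stack([pose_a[:3, :3], pose_b[:3, :3]]))
    slerp = Slerp([0.0, 1.0], key_rots)
    mids = []
    for t in np.linspace(0.0, 1.0, num_mid + 2)[1:-1]:  # interior points only
        pose = np.eye(4)
        pose[:3, :3] = slerp(t).as_matrix()
        pose[:3, 3] = (1.0 - t) * pose_a[:3, 3] + t * pose_b[:3, 3]
        mids.append(pose)
    return mids
```

Decoding the fixed 3D structure at each interpolated pose would then yield the extra frames.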
Camera stabilization
Here, we "stabilize" all video frames to a fixed viewpoint. With Video Autoencoder, we can simply warp the whole video back to the first frame using the estimated pose differences.
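A minimal sketch of the geometry involved, assuming relative poses come out as 4x4 matrices with frame-t-to-frame-t+1 convention (an assumed interface, not the paper's actual API): accumulate the estimated motion, then invert it to map each frame back to the first frame's viewpoint.

```python
import numpy as np

def stabilizing_transforms(rel_poses):
    """Transforms that map each frame back to the first frame's viewpoint.

    rel_poses: list of 4x4 relative poses; rel_poses[t] is the estimated
    motion from frame t to frame t+1 (an illustrative convention).
    Returns one 4x4 transform per frame; frame 0 gets the identity.
    """
    to_first = [np.eye(4)]
    cumulative = np.eye(4)
    for rel in rel_poses:
        cumulative = cumulative @ rel              # total motion since frame 0
        to_first.append(np.linalg.inv(cumulative)) # undo it to stabilize
    return to_first
```

Each returned transform would be fed to the decoder (or a warping operator) to re-render that frame from the first viewpoint.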
Test on anime scenes
Our model also works on out-of-distribution data such as anime scenes from Spirited Away.
and even paintings...
Video Autoencoder can generate results as if we were walking into Vincent van Gogh’s bedroom in Arles.
Trajectory estimation in videos
We can also estimate camera trajectories in videos. For each video clip, we estimate the relative pose between every pair of consecutive frames and chain these poses together to recover the full trajectory.
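The chaining step is plain pose composition. A sketch, again assuming 4x4 relative-pose matrices with a frame-t-to-frame-t+1 convention (an assumption for illustration, not the paper's code):

```python
import numpy as np

def chain_poses(rel_poses):
    """Accumulate relative poses into a full camera trajectory.

    rel_poses: list of 4x4 matrices; rel_poses[t] is the estimated
    motion from frame t to frame t+1 (an illustrative convention).
    Returns the pose of every frame expressed relative to frame 0.
    """
    trajectory = [np.eye(4)]
    for rel in rel_poses:
        trajectory.append(trajectory[-1] @ rel)  # compose with previous pose
    return trajectory
```

The translation components of the returned matrices trace out the camera path that can be plotted against ground truth.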
Video following
Our model can also animate a single image (shown on the left) with the motion trajectory from a different video (shown in the middle). We call this video following.
BibTeX
@inproceedings{Lai21a,
title={Video Autoencoder: self-supervised disentanglement of 3D structure and motion},
author={Lai, Zihang and Liu, Sifei and Efros, Alexei A and Wang, Xiaolong},
booktitle={ICCV},
year={2021}
}