InMoDeGAN: Interpretable Motion Decomposition Generative Adversarial Network for Video Generation
[PDF] [Code]
Abstract
In this work, we introduce an unconditional video generative model, InMoDeGAN, targeted to (a) generate high-quality videos, as well as to (b) allow for interpretation of the latent space. For the latter, we place emphasis on interpreting and manipulating motion. Towards this, we decompose motion into semantic sub-spaces, which allow for control of generated samples. We design the architecture of the InMoDeGAN generator in accordance with the proposed Linear Motion Decomposition, which carries the assumption that motion can be represented by a dictionary, with related vectors forming an orthogonal basis in the latent space. Each vector in the basis represents a semantic sub-space. In addition, a Temporal Pyramid Discriminator analyzes videos at different temporal resolutions. Extensive quantitative and qualitative analysis shows that our model systematically and significantly outperforms state-of-the-art methods on the VoxCeleb2-mini and BAIR-robot datasets w.r.t. video quality, addressing (a). Towards (b), we present experimental results confirming that the decomposed sub-spaces are interpretable and, moreover, that generated motion is controllable.
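For illustration, the Linear Motion Decomposition assumption stated above can be sketched as follows; the notation (motion vector m_t, coefficients a_{i,t}, dictionary D) is ours and the exact formulation in the paper may differ:

```latex
% Sketch of the assumed decomposition: motion at time t as a linear combination
% of dictionary directions d_i that form an orthogonal basis.
\[
  m_t = \sum_{i=0}^{N-1} a_{i,t}\, d_i ,
  \qquad
  \langle d_i, d_j \rangle = 0 \quad \text{for } i \neq j ,
\]
% where D = {d_0, ..., d_{N-1}} is the motion dictionary and each d_i spans one
% semantic sub-space, controllable through its coefficient a_{i,t}.
```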
1. Random Generation
We randomly sample different appearance noises z_a and motion noise sequences {z_{m_0}, z_{m_1}, ..., z_{m_{T-1}}} for VoxCeleb2-mini at two resolutions (128x128 and 64x64) and for BAIR-robot.
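A minimal sketch of this sampling step in PyTorch; the dimensions and the generator interface are assumptions for illustration, not the released code:

```python
import torch

# Hypothetical dimensions and video length; the actual values used by InMoDeGAN may differ.
dim_a, dim_m, T = 512, 512, 16

def sample_latents(batch_size=1):
    """Sample one appearance code z_a and a sequence of per-frame motion codes
    {z_m_0, ..., z_m_{T-1}}, all drawn from a standard Gaussian."""
    z_a = torch.randn(batch_size, dim_a)       # appearance noise, fixed for the whole video
    z_m = torch.randn(batch_size, T, dim_m)    # one motion noise vector per frame
    return z_a, z_m

z_a, z_m = sample_latents()
# video = generator(z_a, z_m)  # hypothetical generator call; not defined here
```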
5. Controllable Generation
In the left part, a linear trajectory is provided for d_1 and a sinusoidal trajectory for d_511. In the right part, d_1 and d_511 are activated with the trajectories swapped. We illustrate generated videos obtained by activating d_1, d_511, as well as both directions, respectively, while all other directions remain deactivated (their coefficients alpha are set to 0). The results indicate that the robot arm can indeed be controlled directly with different trajectories.
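A minimal NumPy sketch of how such activation trajectories could be constructed; the dictionary size, frame count, and the placeholder orthonormal dictionary D are assumptions for illustration:

```python
import numpy as np

# Hypothetical sizes: N dictionary directions, T frames, latent dimension dim.
N, T, dim = 512, 16, 512
t = np.arange(T)

# All directions deactivated by default (alpha = 0).
alpha = np.zeros((T, N))
alpha[:, 1] = np.linspace(0.0, 1.0, T)          # linear trajectory for d_1
alpha[:, 511] = np.sin(2.0 * np.pi * t / T)     # sinusoidal trajectory for d_511

# Placeholder orthonormal dictionary standing in for the learned directions d_i.
D = np.linalg.qr(np.random.randn(dim, N))[0].T  # shape (N, dim), rows orthonormal

# Per-frame motion as a linear combination of the activated directions.
m = alpha @ D                                   # shape (T, dim)
```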