Generative Image Dynamics
CVPR 2024
Abstract
We present an approach to modeling an image-space prior on scene motion. Our
prior is learned from a collection of motion trajectories extracted from real
video sequences depicting natural, oscillatory dynamics such as trees, flowers,
candles, and clothes swaying in the wind. We model this dense, long-term motion
prior in the Fourier domain: given a single image, our trained model uses a
frequency-coordinated diffusion sampling process to predict a spectral volume,
which can be converted into a motion texture that spans an entire video. Along
with an image-based rendering module, these trajectories can be used for a
number of downstream applications, such as turning still images into seamlessly
looping videos, or allowing users to realistically interact with objects in
real pictures by interpreting the spectral volumes as image-space modal bases,
which approximate object dynamics.
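
To make the frequency-domain representation concrete, the sketch below converts a spectral volume (per-pixel Fourier coefficients of 2D motion trajectories) into a time-domain motion texture via an inverse FFT. The function name, tensor layout, and NumPy implementation are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def spectral_volume_to_motion_texture(spectral_volume: np.ndarray,
                                      num_frames: int) -> np.ndarray:
    """Turn per-pixel Fourier coefficients into per-frame displacements.

    spectral_volume: complex array of shape (K, H, W, 2) holding the first
    K low-frequency terms of each pixel's (x, y) trajectory (assumed layout).
    Returns a real array of shape (num_frames, H, W, 2): the motion texture.
    """
    K, H, W, C = spectral_volume.shape
    # Embed the K predicted terms into a half-spectrum of length
    # num_frames // 2 + 1; higher frequencies are assumed to be zero.
    half_spectrum = np.zeros((num_frames // 2 + 1, H, W, C), dtype=complex)
    half_spectrum[:K] = spectral_volume
    # A real-valued inverse FFT over the frequency axis yields one dense
    # displacement field per output frame.
    return np.fft.irfft(half_spectrum, n=num_frames, axis=0)

# Example: 16 predicted frequency terms rendered as a 150-frame motion texture.
sv = np.random.randn(16, 64, 64, 2) + 1j * np.random.randn(16, 64, 64, 2)
motion = spectral_volume_to_motion_texture(sv, num_frames=150)  # (150, 64, 64, 2)
```

Because the representation is spectral, long videos follow from a small, fixed number of predicted coefficients per pixel; this is one way to read the paper's claim that a single spectral volume "spans an entire video."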