Fluid in video: Augmenting real video with simulated fluids

Vivek Kwatra, Philippos Mordohai, Rahul Narain, Sashi Kumar Penta, Mark Carlson, Marc Pollefeys, Ming C. Lin

Research output: Contribution to journal › Conference article › peer-review


Abstract

We present a technique for coupling simulated fluid phenomena with real dynamic scenes captured as a binocular video sequence. We first process the binocular video sequence to obtain a complete 3D reconstruction of the scene, including velocity information. We use stereo to reconstruct the visible parts of the 3D geometry and surface completion to fill in the missing regions. We then perform fluid simulation within a 3D domain that contains the object, enabling one-way coupling from the video to the fluid. In order to maintain temporal consistency of the reconstructed scene and the animated fluid across frames, we develop a geometry tracking algorithm that combines optic flow and depth information with a novel technique for "velocity completion". The velocity completion technique uses local rigidity constraints to hypothesize a motion field for the entire 3D shape, which is then used to propagate and filter the reconstructed shape over time. This approach not only generates smoothly varying geometry across time, but also provides the boundary conditions needed for one-way coupling between the dynamic geometry and the simulated fluid. Finally, we employ a GPU-based scheme for rendering the synthetic fluid in the real video, taking refraction and scene texture into account.
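As a rough illustration of the one-way coupling described above, the sketch below shows how a tracked scene's motion field might be imposed on a grid-based fluid solver as a boundary condition. This is a hypothetical minimal example, not the authors' implementation: the voxel occupancy mask, the completed per-voxel motion field, and the function name `apply_one_way_coupling` are all assumptions introduced for illustration.

```python
# Hypothetical sketch: one-way coupling of tracked video geometry into a
# grid-based fluid solver. Assumes the reconstructed surface for each frame
# is available as a voxelized occupancy mask plus a per-voxel motion field
# produced by a "velocity completion"-style step.

import numpy as np

def apply_one_way_coupling(fluid_vel, solid_mask, solid_vel):
    """Overwrite fluid velocities inside the reconstructed object with the
    object's own motion, so the fluid treats the video geometry as a moving
    solid boundary (the fluid does not push back on the video).

    fluid_vel : (nx, ny, nz, 3) fluid velocity grid
    solid_mask: (nx, ny, nz) boolean occupancy of the reconstructed scene
    solid_vel : (nx, ny, nz, 3) completed motion field of the scene
    """
    coupled = fluid_vel.copy()
    coupled[solid_mask] = solid_vel[solid_mask]
    return coupled

# Toy usage on an 8^3 grid: a block in one corner moves in +y, so the fluid
# cells it occupies inherit that velocity before the next solver step.
if __name__ == "__main__":
    n = 8
    fluid_vel = np.zeros((n, n, n, 3))
    solid_mask = np.zeros((n, n, n), dtype=bool)
    solid_mask[:3, :3, :3] = True
    solid_vel = np.zeros((n, n, n, 3))
    solid_vel[solid_mask] = [0.0, 1.0, 0.0]
    fluid_vel = apply_one_way_coupling(fluid_vel, solid_mask, solid_vel)
    print(fluid_vel[1, 1, 1])  # -> [0. 1. 0.] inside the object
```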

Original language: English (US)
Pages (from-to): 487-496
Number of pages: 10
Journal: Computer Graphics Forum
Volume: 27
Issue number: 2
DOIs
State: Published - 2008
Externally published: Yes
Event: 29th Annual Conference of the European Association for Computer Graphics, EUROGRAPHICS 2008 - Crete, Greece
Duration: Apr 14 2008 - Apr 18 2008
