Disney Research Pittsburgh and Brown University have released a video showcasing their latest project: video-based 3D motion capture through biped control. In plain language, it's a way to capture natural movement from ordinary single-camera video, with no markers, and then replay that movement in other environments, viewable from any angle, by driving a physics-based controller for a two-legged ("biped") character. This has obvious implications for video games and for motion capture in movies.
The researchers describe the work as follows: Markerless motion capture is a challenging problem, particularly when monocular video is all that is available. We estimate biped control from monocular video by implicitly recovering physically plausible three-dimensional motion of a subject along with a character model (controller) capable of replaying this motion in other environments and under physical perturbations. Our approach consists of a state-space biped controller with a balance feedback mechanism that encodes control as a sequence of simple control tasks.
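To make the idea of a state-space controller with balance feedback concrete, here is a minimal sketch in Python. It is not the authors' code: the state machine, joint names, and feedback gains are all illustrative assumptions, loosely modeled on the well-known SIMBICON-style scheme in which a swing-hip target is corrected by the character's center-of-mass offset and velocity.

```python
from dataclasses import dataclass

@dataclass
class ControlState:
    """One simple control task: hold joint targets for a fixed duration."""
    name: str
    duration: float      # seconds before transitioning to the next state
    target_angles: dict  # joint name -> desired angle (radians); illustrative

def balance_feedback(base_target, com_offset, com_velocity, c_d=0.5, c_v=0.2):
    """SIMBICON-style correction: shift the swing-hip target by the
    center-of-mass offset and velocity. Gains c_d, c_v are made up."""
    return base_target + c_d * com_offset + c_v * com_velocity

@dataclass
class BipedController:
    """Control encoded as a cyclic sequence of simple control tasks."""
    states: list
    index: int = 0
    elapsed: float = 0.0

    def step(self, dt, com_offset, com_velocity):
        """Advance the state machine; return feedback-corrected targets."""
        state = self.states[self.index]
        self.elapsed += dt
        if self.elapsed >= state.duration:   # simple timed transition
            self.index = (self.index + 1) % len(self.states)
            self.elapsed = 0.0
            state = self.states[self.index]
        targets = dict(state.target_angles)
        targets["swing_hip"] = balance_feedback(
            targets["swing_hip"], com_offset, com_velocity)
        return state.name, targets
```

In a full system these targets would feed joint PD servos inside a physics simulation; here the sketch only shows how "control as a sequence of simple control tasks" plus a balance feedback term might be organized.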
We illustrate our approach by automatically estimating controllers for a variety of motions directly from monocular video. To decouple errors introduced from tracking in video from errors introduced by the controllers, we also test on reference motion capture data. We show that estimation of controller structure through incremental optimization and refinement leads to controllers that are more stable and that better approximate the reference motion. We demonstrate our approach by capturing sequences of walking, jumping, and gymnastics. We evaluate the results through qualitative and quantitative comparisons to video and motion capture data.
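The "incremental optimization and refinement" the abstract mentions can be pictured as a loop that repeatedly perturbs controller parameters and keeps changes that reduce tracking error. The sketch below is a generic greedy local search, not the paper's actual estimation procedure, and its objective is a stand-in: the real system scores parameters by simulating the character and comparing against the reference motion.

```python
import random

def tracking_error(params, reference):
    """Stand-in objective: squared distance to reference parameters.
    The paper instead compares simulated motion to reference motion."""
    return sum((p - r) ** 2 for p, r in zip(params, reference))

def refine(params, reference, iters=200, step=0.1, seed=0):
    """Greedy incremental refinement: perturb one parameter at a time,
    accept the change only if the tracking error decreases."""
    rng = random.Random(seed)
    best = list(params)
    best_err = tracking_error(best, reference)
    for _ in range(iters):
        i = rng.randrange(len(best))          # pick one parameter
        candidate = list(best)
        candidate[i] += rng.uniform(-step, step)
        err = tracking_error(candidate, reference)
        if err < best_err:                     # greedy accept
            best, best_err = candidate, err
    return best, best_err
```

The point of the incremental structure, as the abstract notes, is that controllers estimated this way are more stable and track the reference motion more closely than a one-shot fit.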
More details are available in a PDF presentation here.