Seeing Invisible Poses: Estimating 3D Body Pose from Egocentric Video

Hao Jiang and Kristen Grauman

Understanding the camera wearer's activity is central to egocentric vision, yet one key facet of that activity is inherently invisible to the camera---the wearer's body pose. Prior work focuses on estimating the pose of hands and arms when they come into view, but this 1) gives an incomplete view of the full body posture, and 2) prevents any pose estimate at all in many frames, since the hands are only visible in a fraction of daily life activities. We propose to infer the "invisible pose" of a person behind the egocentric camera. Given a single video, our efficient learning-based approach returns the full body 3D joint positions for each frame. Our method exploits cues from the dynamic motion signatures of the surrounding scene---which changes predictably as a function of body pose---as well as static scene structures that reveal the viewpoint (e.g., sitting vs. standing). We further introduce a novel energy minimization scheme to infer the pose sequence. It uses soft predictions of the poses per time instant together with a non-parametric model of human pose dynamics over longer windows. Our method outperforms an array of possible alternatives, including deep learning approaches for direct pose regression from images.

Videos

The following videos are encoded with H.264. The easiest way to view them is to right-click a link, save the video, and play it with VLC.

Dataset

To view the poses captured with Kinect, use the following MATLAB script:
    frame_num = 1;  % index of the frame to show
    xyz = load(sprintf('p%d.txt', frame_num));  % 75 values: 25 joints x 3 coordinates
    xyz = reshape(xyz, 3, 25);                  % one joint per column: (x; y; z)
    plot3(xyz(1,:), xyz(3,:), xyz(2,:), '.', 'markersize', 40);  % swap y/z so the figure is upright
    axis equal
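For those not using MATLAB, the same pose files can be read with NumPy. This is a minimal sketch, not part of the released code: the `p%d.txt` naming and the 3x25 joint layout follow the MATLAB snippet above, while the `load_pose` helper name is our own. Note that MATLAB's `reshape` fills column-by-column, so the NumPy equivalent needs Fortran order:

```python
import numpy as np

def load_pose(path):
    """Load one Kinect frame (75 values) as a 3x25 array of joint coordinates.

    Row 0 is x, row 1 is y, row 2 is z; each column is one of the 25 joints.
    order='F' mirrors MATLAB's column-major reshape(xyz, 3, 25).
    """
    flat = np.loadtxt(path).ravel()
    return flat.reshape(3, 25, order="F")
```

To reproduce the MATLAB plot, scatter `(xyz[0], xyz[2], xyz[1])` in a 3D axes, i.e. with y and z swapped as in the script above.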

Code