Mirage 2007

Computer Vision / Computer Graphics Collaboration Techniques and Applications

March 28-30, 2007

INRIA Rocquencourt, France

http://acivs.org/mirage2007/

Sponsors: DGA, IEEE, INRIA, UGent, Thales

Mirage 2007 Abstracts


Invited papers

Paper 103: Image-based Rendering for 3D Scene Analysis and Synthesis

Author(s): Peter Eisert

Image-based rendering has received considerable interest in computer graphics as a means of synthesizing natural-looking views of a scene. Most approaches, such as Light Fields or Concentric Mosaics, focus on static scenes to limit the number of images needed for interpolation. However, these techniques are not restricted to static scene content, especially if additional approximate geometry is available. In this talk, we present methods that use image-based rendering for facial animation. The combination of image interpolation and rough geometry models limits the storage requirements and allows manipulation of the content. Image-based rendering is not only used for the synthesis of realistic images but is also incorporated into the image-based analysis of facial expressions. In the second part of the talk, we describe how methods for image processing can be used to reconstruct 3D models from a large number of frames. Image Cube Trajectory Analysis, an extension of Epipolar Image Analysis, is presented that estimates scene geometry from video sequences with parameterized camera motion, such as the circular movements occurring during the acquisition of Concentric Mosaics or in turntable setups.

Paper 111: Video-based modelling and animation of people

Author(s): Adrian Hilton

Capturing and representing a person's appearance during movement, in a form that can be manipulated for highly realistic computer animation in games and film, is an open research problem. This talk will present a number of approaches that have been introduced to capture people from multiple-view video using both model-based and model-free computer vision methodologies. Surface Motion Capture (SurfCap) will be introduced, which allows representation and animation control of people with the captured dynamics of clothing during movement. SurfCap will be presented as a technology analogous to marker-based skeletal human motion capture (MoCap), which has become a standard production tool. Surface motion graphs are used to animate people from multiple captured surface sequences, allowing control of movement and action. Surface matching methods based on geometry image sequences using spherical parameterisation are used to transition between captured motion sequences and to reconstruct skeletal movement. SurfCap's potential as a future technology for production in games and film will be discussed.


The software generating these pages is © (not Ghent University), 2002-2024. All rights reserved.

The data on this page is © Mirage 2007. All rights reserved.

The server hosting this website is owned by the Department of Telecommunications and Information Processing (TELIN) of Ghent University.

