Mirage 2009

Computer Vision / Computer Graphics Collaboration Techniques and Applications

May 4-6, 2009

INRIA Rocquencourt, France

http://acivs.org/mirage2009/


Sponsors: DGA, IEEE, INRIA, UGent

Mirage 2009 Abstracts


Invited papers

Paper 101: Model-based computation of plausible locomotion for living and fossilized hominids

Author(s): Franck Multon

Simulation is now widely used in biomechanics to investigate human motion control. Classical methods rely on knowledge of either kinematic or dynamic models, such as the average shape of the angular trajectories during walking. Hence, these methods depend on prior measurements performed on living subjects. In paleoanthropology the problem is different because most of the subjects are fossils. Retrieving a plausible gait for such species is therefore really challenging, as no measurements are available. Most approaches consequently compare the shape of bones across existing species in order to identify the relationship between shape and function. However, these comparative approaches generally focus on some parts of the skeleton, while the locomotor system is obviously not limited to one specific joint. Evolutionary robotics has also been used to simulate gaits of Australopithecus afarensis (Lucy) by tuning muscle activation patterns. Despite the interesting results, the method relied on strong assumptions about muscle attachments and was limited to 2D. We have proposed a new approach that separates the degrees of freedom (DOF) into two parts: the movement of the feet and the joints responsible for moving the legs. The latter DOFs are retrieved by applying an inverse kinematics framework based on global hypotheses on motion control (such as minimizing energy, satisfying kinematic constraints with the ground and taking a rest posture into account). A plausible trajectory of the feet is obtained by optimization: the trajectories of the feet are optimized to make the creature walk into predefined footprints while minimizing energy and jerk. The two layers are connected in an optimization loop which calculates the whole-body motion. This approach has been validated on humans and chimpanzees and has been applied to Australopithecus afarensis (Lucy).
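The foot-trajectory layer minimizes, among other terms, the jerk of the motion. As an illustration only (not the authors' implementation), a discrete jerk cost over a sampled trajectory could be sketched as:

```python
import numpy as np

def jerk_cost(traj, dt):
    """Discrete jerk cost of a sampled trajectory.

    traj: (T, 3) array of positions sampled every dt seconds.
    Jerk is the third time derivative of position, approximated
    here with third-order finite differences.
    """
    jerk = np.diff(traj, n=3, axis=0) / dt**3
    return float(np.sum(jerk**2) * dt)

# A constant-velocity trajectory has (numerically) zero jerk.
t = np.linspace(0.0, 1.0, 11)[:, None]
straight = np.hstack([t, np.zeros_like(t), np.zeros_like(t)])
print(jerk_cost(straight, 0.1))  # ~0.0
```

In an optimization loop, such a term would be summed with an energy term and the footprint constraints.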

Paper 114: Procedural 3D city modeling: less is more

Author(s): Luc Van Gool

There is a quickly growing interest in the 3D modelling of cities, both existing and lost. This talk discusses some recent methods intended to support such massive 3D modelling exercises. Several aspects are highlighted, depending on whether the focus lies on the precision of 3D measurements (as in digital surveying) or on the realism of visualisation. Underlying our strategies is a procedural modeling approach, which yields very compact but semantically enriched descriptions of buildings for specific architectural styles. Attention is also paid to scalable data capture for these different applications. For instance, digital surveying requires the capture of 3D data along hundreds of km of road, and we have designed a camera-equipped van with the necessary software to do so. In the case of modeling monuments - which require special attention given their rather complicated shapes - extra images are gathered automatically, without even having to specify the name or the existence of these monuments. Information about them is also looked up automatically, e.g. to find out about the corresponding architectural style. As an example case for the modelling of large-scale virtual cities, the Rome Reborn 2.0 project will be discussed, which has produced a model of the entire ancient city of Rome as it was around the 5th century AD.

Paper 185: Non-Parametric Latent Variable Models for Shape and Motion Analysis

Author(s): Raquel Urtasun

Dimensionality reduction is a popular approach to dealing with high-dimensional data sets. It is often the case that linear dimensionality reduction, such as principal component analysis (PCA), does not adequately capture the structure of the data. In this talk I will discuss probabilistic non-linear latent variable models in the context of 3D human body tracking, 3D shape recovery from single images, character animation and classification. First, I will describe how to use Gaussian Process Latent Variable Models (GPLVMs) for learning human pose and motion priors for 3D human body tracking from monocular images. I will then show how to combine multiple local models to model the space of possible deformations of objects of arbitrary shapes, but made of the same material. This will allow us to perform monocular 3D shape recovery in the presence of complex deformations of poorly textured objects.

In dimensionality reduction approaches, the data is typically embedded in a Euclidean latent space. However, for some data sets, such as human motion, this is inappropriate. We present a range of approaches for embedding data into non-Euclidean latent spaces that incorporate prior knowledge. This allows us to learn models suitable for motion generation with good generalization properties. Finally, I'll present a new learning paradigm that mitigates the problem of local minima by performing continuous dimensionality reduction. By introducing a prior over the dimensionality of the latent space that encourages sparsity of the singular values, our method is able to simultaneously estimate the latent space and its dimensionality.

Regular papers

Paper 112: Tracking Human Motion with Multiple Cameras Using an Articulated Model

Author(s): Davide Moschini, Andrea Fusiello

This paper presents a markerless motion capture pipeline based on volumetric reconstruction, skeletonization and articulated ICP with hard constraints. The skeletonization produces a set of 3D points roughly distributed around the limbs' medial axes. Then, the ICP-based algorithm fits an articulated skeletal model (stick figure) of the human body. The algorithm fits each stick to a limb in a hierarchical fashion, traversing the body's kinematic chain, while preserving the connection of the sticks at the joints. Experimental results with real data demonstrate the performance of the algorithm.
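The basic residual in such a stick-to-limb fit is the distance from a skeleton point to a 3D segment. A minimal sketch of that residual (illustrative only, not the paper's implementation):

```python
import numpy as np

def point_to_stick_distance(p, a, b):
    """Distance from 3D point p to the segment (stick) with
    endpoints a and b - the per-point residual when fitting a
    stick figure to skeletonized points."""
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    t = np.clip(t, 0.0, 1.0)          # stay on the segment
    return float(np.linalg.norm(p - (a + t * ab)))

a, b = np.zeros(3), np.array([1.0, 0.0, 0.0])
print(point_to_stick_distance(np.array([0.5, 1.0, 0.0]), a, b))  # 1.0
```

An articulated ICP would minimize the sum of such residuals over all points assigned to each stick, subject to the joint constraints.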

Paper 120: Shape Recovery of Specular Surface Using Color Highlight Stripe and Light Source Coding

Author(s): Sun Yankui, Xue Chengkun, Kimachi Masatoshi, Suwa Masaki

Shape recovery of specular surfaces is a challenging task; camera images of these surfaces are difficult to interpret because they are often characterized by highlights. The structured highlight approach is a classic and effective way to inspect specular surfaces. This paper suggests a new strategy to recover dense normals of a specular surface and reconstruct its shape by combining the ideas of structured highlight, color source coding, and highlight stripes and their translations. Point sources with different colors are positioned on orbits to illuminate a specular object surface. These point sources are scanned, and highlights on the object surface resulting from each point source are used to derive local surface orientation. Dense normal information can be recovered by translating these orbits. Some experimental system configurations are given. The simulation results show that the new method is feasible and can be used to reconstruct the shape of a specular surface with high precision.

Paper 122: Geometric Mesh Denoising via Multivariate Kernel Diffusion

Author(s): Tarmissi Khaled, Ben Hamza Abdessamad

We present a 3D mesh denoising method based on kernel density estimation. The proposed approach is able to reduce the over-smoothing effect and effectively remove undesirable noise while preserving prominent geometric features of a 3D mesh such as curved surface regions, sharp edges, and fine details. The experimental results demonstrate the effectiveness of the proposed approach in comparison to existing mesh denoising techniques.
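As a rough illustration of the underlying idea (not the paper's exact estimator), one kernel-density smoothing step moves each vertex toward the Gaussian kernel-weighted mean of the other vertices; a small bandwidth keeps the influence local, which is what limits over-smoothing:

```python
import numpy as np

def kde_smooth_step(verts, bandwidth):
    """One smoothing step: replace each vertex by the Gaussian
    kernel-weighted mean of all vertices (a mean-shift step).
    Illustrative sketch; a real mesh denoiser would restrict the
    kernel to a vertex neighbourhood."""
    d2 = np.sum((verts[:, None, :] - verts[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w @ verts) / w.sum(axis=1, keepdims=True)

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 0.8, 0.0]])
print(kde_smooth_step(verts, bandwidth=10.0))  # close to the centroid
```

With a large bandwidth every vertex collapses toward the centroid; feature preservation comes from choosing the bandwidth small relative to the geometric features.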

Paper 123: Automatic Segmentation of Scanned Human Body Using Curve Skeleton Analysis

Author(s): Christian Lovato, Umberto Castellani, Andrea Giachetti

In this paper we present a method for the automatic processing of scanned human body data consisting of an algorithm for the extraction of curve skeletons of the 3D models acquired and a procedure for the automatic segmentation of skeleton branches. Models used in our experiments are obtained with a whole-body scanner based on structured light (Breuckmann bodySCAN, owned by the Faculty of Exercise and Sport Science of the University of Verona), providing triangulated meshes that are then preprocessed in order to remove holes and create clean watertight surfaces. Curve skeletons are then extracted with a novel technique based on voxel coding and active contours driven by a distance map and vector flow. The skeleton-based segmentation is based on a hierarchical search of feature points along the skeleton tree.

Our method is able to obtain on the curve skeleton a pose-independent subdivision of the main parts of the human body (trunk, head-neck region and partitioned limbs) that can be extended to the mesh surface and internal volume and can be exploited to estimate the pose and to locate more easily anthropometric features.

The curve skeleton algorithm applied allows control on the number of branches extracted and on the resolution of the volume discretization, so the procedure could be then repeated on subparts in order to refine the segmentation and build more complex hierarchical models.

Paper 124: Multi-view Player Action Recognition in Soccer Games

Author(s): Marco Leo, Tiziana D'Orazio, Paolo Spagnolo, Pier Luigi Mazzeo, Arcangelo Distante

Human action recognition is an important research area in the field of computer vision having a great number of real-world applications. This paper presents a multi-view action recognition framework able to extract human silhouette clues from different synchronized static cameras and then to validate them by analyzing scene dynamics. Two different algorithmic procedures were introduced: the first one performs, in each acquired image, the neural recognition of the human body configuration by using a novel mathematical tool called Contourlet transform. The second procedure performs, instead, 3D ball and player motion analysis. The outcomes of both procedures are then merged to accomplish the final player action recognition task. Experiments were carried out on several image sequences acquired during some matches of the Italian "Serie A" soccer championship.

Paper 125: Heart Cavity Segmentation in Ultrasound Images Based on Supervised Neural Networks

Author(s): Marco Mora, Julio Leiva, Mauricio Olivares

This paper proposes a segmentation method for heart cavities based on neural networks. First, the ultrasound image is simplified with a homogeneity measure based on the variance. Second, the simplified image is classified using a multilayer perceptron trained to produce an adequate generalization. Third, the classification results are improved by using simple image processing techniques. The method makes it possible to detect the edges of cavities in an image sequence, selecting data for network training from a single image of the sequence. Moreover, our proposal permits robust and accurate detection of cavity contours with low-computational-cost techniques and a high degree of autonomy.

Paper 126: Automatic Fitting of a Deformable Face Mask Using a Single Image

Author(s): Annika Kuhl, Tele Tan, Svetha Venkatesh

We propose an automatic method for person-independent fitting of a deformable 3D face mask model under varying illumination conditions. Principal Component Analysis is utilised to build a face model which is then used within a particle-filter-based approach to fit the mask to the image. By subdividing a coarse mask and using a novel texture mapping technique, we further apply the 3D face model to lower-resolution images. Illumination invariance is achieved by representing each face as a combination of harmonic images within the weighting function of the particle filter. We demonstrate the performance of our approach on the IMM Face Database and the Extended Yale Face Database B and show that it outperforms the Active Shape Models approach [6].

Paper 127: Re-Projective Pose Estimation of a Planar Prototype

Author(s): Georg Pisinger, Georg Maier

We present an approach for robust pose estimation of a planar prototype. In fact, there are many applications in computer graphics in which camera pose tracking from planar targets is necessary. Unlike many other approaches, our method minimizes the Euclidean error to re-projected image points. There are a number of recent pose estimation methods, but all of these algorithms suffer from pose ambiguities. If we know the positions of some points on the plane, we can describe the 3D position of the planar prototype as the solution of an optimization problem over two parameters. Based on this formulation we develop a new algorithm for pose estimation of a planar prototype. Its robustness is illustrated by simulations and experiments with real images.

Paper 129: Tracking and Retexturing Cloth for Real-Time Virtual Clothing Applications

Author(s): Anna Hilsmann, Peter Eisert

In this paper, we describe a dynamic texture overlay method from monocular images for real-time visualization of garments in a virtual mirror environment. Similar to looking into a mirror when trying on clothes, we create the same impression but for virtually textured garments. The mirror is replaced by a large display that shows the mirrored image of a camera capturing e.g. the upper body part of a person. By estimating the elastic deformations of the cloth from a single camera in the 2D image plane and recovering the illumination of the textured surface of a shirt in real time, an arbitrary virtual texture can be realistically augmented onto the moving garment such that the person seems to wear the virtual clothing. The result is a combination of the real video and the new augmented model yielding a realistic impression of the virtual piece of cloth.

Paper 130: A Novel Approach to Spatio-Temporal Video Analysis and Retrieval

Author(s): Sameer Singh, Wei Ren, Maneesha Singh

In this paper, we propose a novel spatio-temporal analysis and retrieval model to extract attributes for video category classification. First, the spatial relationships and temporal nature of the video objects in a frame are coded as a sequence of binary strings (VRstring). Then, the similarity between shots is matched as sequential features in hyperspaces. The results show that VRstring allows us to define higher-level semantic features capturing the main narrative structures of the video. We also compare our algorithm with the state-of-the-art longest-common-substring video retrieval model by Adjeroh et al. [1] on the Minerva international video benchmark.
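The baseline model [1] matches shots via the longest common substring of their coded strings. A standard dynamic-programming sketch of that similarity measure, for illustration:

```python
def longest_common_substring(a, b):
    """Length of the longest common *contiguous* substring of a and b,
    computed with a rolling one-row DP table."""
    best = 0
    dp = [0] * (len(b) + 1)   # dp[j]: length of common suffix ending at b[j-1]
    for ca in a:
        prev = 0
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = prev + 1 if ca == cb else 0
            best = max(best, dp[j])
            prev = cur
    return best

print(longest_common_substring("0110101", "1101"))  # 4
```

On binary shot strings, a longer common substring means a longer run of frames with matching spatial codes.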

Paper 132: A Bag of Words Approach for 3D Object Categorization

Author(s): Roberto Toldo, Umberto Castellani, Andrea Fusiello

In this paper we propose a novel framework for 3D object categorization. The object is modeled in terms of its sub-parts as a histogram of 3D visual word occurrences. We introduce an effective method for hierarchical 3D object segmentation driven by the minima rule that combines spectral clustering -- for the selection of seed-regions -- with region growing based on fast marching. The front propagation is driven by local geometry features, namely the shape index. Finally, after coding each object according to the Bag-of-Words paradigm, a Support Vector Machine is trained to classify the different object categories. Several examples on two different datasets are shown which demonstrate the effectiveness of the proposed framework.
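The Bag-of-Words coding step can be sketched as follows; the toy descriptors and two-word vocabulary are made-up illustrative values, not the paper's data:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Code an object as a normalized histogram of visual-word
    occurrences: each sub-part descriptor votes for its nearest
    vocabulary word (Euclidean nearest neighbour)."""
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :],
                       axis=-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

descs = np.array([[0.0], [0.1], [0.9]])   # toy 1-D sub-part descriptors
vocab = np.array([[0.0], [1.0]])          # toy 2-word vocabulary
print(bow_histogram(descs, vocab))        # ~[2/3, 1/3]
```

The resulting fixed-length histograms are what a classifier such as an SVM consumes.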

Paper 134: An Improved Structured Light Inspection of Specular Surfaces Based on Quaternary Coding

Author(s): Chengkun Xue, Yankui Sun

Structured light techniques with binary coding are practical for inspecting specular surfaces. Structured light approaches use a scanned array of point sources and images of the resulting reflected highlights to compute local surface orientation. Binary coding is the classic scheme for efficiently coding the light sources. This paper proposes a novel quaternary coding scheme which is much more efficient than the classic binary scheme: polychromatic light sources are utilized and coded in quaternary. Our experimental system is described in detail. The problem caused by the polychromatic light sources is discussed as well; to solve it, we borrowed the erosion operator from mathematical morphology and designed an effective algorithm. The experimental results show that the new quaternary coding scheme not only keeps a very high accuracy, but also greatly improves the efficiency of specular surface inspection.
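The efficiency gain is easy to quantify: identifying N sources with base-b coding takes ceil(log_b N) coded illumination patterns. A small sketch with illustrative numbers:

```python
def scans_needed(n_sources, base):
    """Number of coded illumination patterns needed to uniquely
    identify n_sources light sources with base-ary coding,
    i.e. the smallest k with base**k >= n_sources."""
    patterns, capacity = 0, 1
    while capacity < n_sources:
        capacity *= base
        patterns += 1
    return patterns

print(scans_needed(256, 2))  # 8 binary patterns
print(scans_needed(256, 4))  # 4 quaternary patterns -- half the scans
```

Quaternary coding halves the pattern count relative to binary, at the cost of distinguishing four source states (here, colors) per scan instead of two.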

Paper 135: Robust Detection and Tracking of Multiple Moving Objects with 3D Features by an Uncalibrated Monocular Camera

Author(s): Ho Shan Poon, Fei Mai, Yeung Sam Hung, Graziano Chesi

This paper presents an algorithm for detecting multiple moving objects in an uncalibrated image sequence by integrating their 2D and 3D information. The result describes the moving objects in terms of their number, relative position and motion. First, the objects are represented by image feature points, and the major group of point correspondences over two consecutive images is established by Random Sample Consensus (RANSAC). Then, their corresponding 3D points are reconstructed, and clustering is performed on them to validate those belonging to the same object. This process is repeated until all objects are detected. The method is reliable in tracking multiple moving objects, even under partial occlusions and similar motions. Experiments on real image sequences are presented to validate the proposed algorithm. Applications of interest are video surveillance, augmented reality, robot navigation and scene recognition.
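The consensus idea behind RANSAC is simple to sketch. The following toy example fits a 2D line rather than point correspondences, but the loop structure — sample a minimal set, fit a model, count inliers, keep the best — is the same (illustrative only):

```python
import random
import numpy as np

def ransac_line(points, n_iter=200, tol=0.05, seed=0):
    """Toy RANSAC: repeatedly fit a line through 2 random points and
    keep the largest inlier set (points within tol of the line)."""
    rng = random.Random(seed)
    pts = np.asarray(points, float)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(list(map(tuple, pts)), 2)
        a, b = y2 - y1, x1 - x2            # line: a*x + b*y + c = 0
        norm = np.hypot(a, b)
        if norm == 0:
            continue
        c = -(a * x1 + b * y1)
        d = np.abs(pts @ np.array([a, b]) + c) / norm
        inliers = pts[d < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

pts = [(i / 10.0, i / 10.0) for i in range(10)] + [(0.0, 5.0)]
print(len(ransac_line(pts)))  # 10 collinear inliers; the outlier is rejected
```

For motion grouping, the minimal sample and the model change (e.g. correspondences and a motion model), but the sample-score-keep loop is identical.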

Paper 136: Automatic Golf Ball Trajectory Reconstruction and Visualization

Author(s): Tadej Zupancic, Ales Jaklic

The article presents the steps required to reconstruct the 3D trajectory of a golf ball's flight, bounces and roll in the short game. Two video cameras were used to capture the last parts of the trajectories, including the bounces and roll. Each video sequence is processed and the ball is detected and tracked until it stops. Detected positions from both video sequences are then matched, and the 3D trajectory is obtained and presented as an X3D model.

Paper 137: Integrated Digital Image Correlation for the Identification of Mechanical Properties

Author(s): Hugo Leclerc, Jean-Noël Périé, Stéphane Roux, François Hild

Digital Image Correlation (DIC) is a powerful technique to provide full-field displacement measurements for mechanical tests of materials and structures. The displacement fields may be further processed as an entry for identification procedures giving access to parameters of constitutive laws. A new implementation of a Finite Element based Integrated Digital Image Correlation (I-DIC) method is presented, where the two stages (image correlation and mechanical identification) are coupled. This coupling allows one to minimize information losses, even in the case of low signal-to-noise ratios. A case study on the elastic properties of a composite material illustrates the approach and highlights the accuracy of the results. Implementation on GPUs (using CUDA) leads to high-speed performance while preserving the versatility of the methodology.

Paper 141: Recovery of 3D Solar Magnetic Field Model Parameter Using Image Structure Matching

Author(s): Jong Kwan Lee, G. Allen Gary

An approach to recover a 3D solar magnetic field model parameter using intensity images of the Sun's corona is introduced. The approach is quantitative: the 3D model parameter is determined via an image structure matching scheme. The image structure matching measures the positional divergence (i.e., pixel-by-pixel shortest Euclidean distance) between the real coronal loop structures in a 2D image and sets of modeled magnetic field structures, to determine the best model parameter for a given region on the Sun. The approach's effectiveness is evaluated through experiments on synthetic images and a real image.
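A pixel-by-pixel shortest-distance divergence of this kind can be sketched as follows (a brute-force illustration; real loop structures would have far more pixels):

```python
import numpy as np

def positional_divergence(observed, modeled):
    """Mean shortest Euclidean distance from each observed structure
    pixel to the nearest modeled structure pixel."""
    obs = np.asarray(observed, float)
    mod = np.asarray(modeled, float)
    d = np.linalg.norm(obs[:, None, :] - mod[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

# Two observed pixels vs. two modeled pixels: distances 1 and 0.
print(positional_divergence([[0, 0], [2, 0]], [[0, 1], [2, 0]]))  # 0.5
```

Scanning the model parameter and keeping the value that minimizes this divergence implements the matching scheme described above.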

Paper 146: From Interactive Positioning to Automatic Try On of Virtual Cloth

Author(s): Tung Le Thanh, Gagalowicz André

With the rapid development of computer graphics hardware and virtual cloth simulation techniques, virtual try-on of garments has become possible. An Internet-based shop will significantly reduce the cost of garment manufacturing, as only paid garments will be manufactured. On top of that, a client will also be able to buy exactly what he or she wants, and garments will be fitted to him/her. In this paper, we describe a virtual try-on system that allows a particular user to try garments easily. After loading his/her 3D digital model, the user will be able to select the clothes he/she wants from a database and see himself/herself wearing them virtually in 3D. The proposed system consists of two major parts: first, a 2D positioning part which requires a designer's interaction; second, a fully automatic 3D part, installed on the user's computer, which allows him/her to access a garment catalog and see himself/herself virtually wearing the chosen garment.

Paper 147: Level Set Segmentation of Knee Bones Using Normal Profile Models

Author(s): Gaetano Impoco

We address the problem of segmenting bone structures from CT scans of the knee joint, in the level set framework. Our method is based on intensity profiles along the normals to the evolving contour. The evolution is guided by the similarity of image intensity profiles to profile models. The evolution stops when the intensity profiles closely match the model. The profile models are built using a manually labelled training sample.

Paper 148: Detection of Overlapped Ellipses by Combining Region and Edge Data

Author(s): Lin Zheng, Quan Liu

This paper describes an approach for detecting overlapped ellipses by combining region and edge data. The Principal Component Analysis method is used to give the shape and position of an ellipse. A region-based EM iterative algorithm is proposed to calculate the number of ellipses and their initial shapes in the overlapped region. As a result, every edge point is assigned to a certain ellipse by statistical decision. Then an edge fitting algorithm is employed to refine the ellipses' geometric parameters based on the edge data. This coarse-to-fine algorithm is applied to detect overlapped fruits and moving targets. The results are stable and accurate.
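The PCA step — recovering an ellipse's center and axes from region pixels — can be sketched as follows (an illustrative toy region, not the paper's data):

```python
import numpy as np

def ellipse_from_region(pixels):
    """Estimate an ellipse's center and axis directions from region
    pixels via PCA: the eigenvectors of the pixel covariance give
    the axes, the eigenvalues their relative spread."""
    pts = np.asarray(pixels, float)
    center = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - center).T))
    return center, eigvals, eigvecs   # eigvals ascending; columns = axes

# Points spread mainly along x: the major axis should be x.
region = [(-2.0, 0.0), (2.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
center, eigvals, eigvecs = ellipse_from_region(region)
print(center)            # [0. 0.]
print(eigvecs[:, -1])    # +/-[1, 0]: major axis along x
```

The EM iteration then refines how many such ellipses explain the overlapped region and which pixels belong to each.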

Paper 150: Flash Lighting Space Sampling

Author(s): Matteo Dellepiane, Marco Callieri, Massimiliano Corsini, Paolo Cignoni, Roberto Scopigno

The flash light of a digital camera is a very useful way to picture scenes under low-quality illumination. Nevertheless, the integrated flash lights of low-end cameras in particular are considered unreliable for high-quality images, due to the known artifacts (sharp shadows, highlights, uneven lighting) they generate. Moreover, a mathematical model of this kind of light seems difficult to create.

In this paper we present a color correction space which, given some information about the geometry of the pictured scene, is able to provide a space-dependent correction of each pixel of the image. The correction space needs to be calculated only once per camera, using a quite fast acquisition procedure; after 3D spatial calibration, the obtained color correction function can be applied to every image where flash is the dominant illuminant. A practical application (color projection on 3D models) is shown, but the correction space presents several other advantages: it is independent of the kind of light used (provided that it is bound to the camera), it makes it possible to correct only certain artifacts (for example color deviation) introduced by the flash light, and it has a wide range of possible applications, from image enhancement to material color estimation.

Paper 152: Error Analysis of Stereo Calibration and Reconstruction

Author(s): Agnieszka Bier, Leszek Luchowski

This paper addresses the problem of the propagation of input data errors in the stereovision process and its influence on the quality of reconstructed 3D points. We consider only those particular camera calibration and 3D reconstruction algorithms which employ singular value decomposition (SVD) methods. Using the SVD Jacobian estimation method developed by Papadopoulo and Lourakis, the sensitivity of both stages of the stereovision process is analyzed. We derive all the partial derivatives of the outputs with respect to the inputs of the process and present a set of tests applying them in various stereovision conditions in order to determine their significance for the quality of 3D reconstruction.

Paper 156: Spatio-Temporal Tracking of Faces by Stereo Vision

Author(s): Markus Steffens, Werner Krybus, Christine Kohring, Danny Morton

This report contributes a coherent framework for the robust tracking of human heads and faces. The framework comprises aspects of structure and motion problems, such as feature extraction, spatial and temporal matching, recalibration, tracking, and reconstruction. The scene is acquired through a calibrated stereo sensor. A cue processor extracts invariant features in both views, which are spatially matched by geometric relations. The temporal matching takes place via prediction from the tracking module and a similarity transformation of the features' 2D locations between both views. The head is reconstructed and tracked in 3D. The re-projection of the predicted structure limits the search space of both the cue processor and the reconstruction procedure. Due to the focused application, the instability of the calibration of the stereo sensor is limited to the relative extrinsic parameters, which are re-calibrated during the reconstruction process. The framework has been practically applied and proven. First experimental results are discussed and further steps of development within the project are presented.

Paper 157: Spatio-Temporal Scene Analysis based on Graph Algorithms to Determine Rigid and Articulated Objects

Author(s): Stephan Kieneke, Markus Steffens, Dominik Aufderheide, Werner Krybus, Christine Kohring, Danny Morton

We propose a novel framework in the context of structure and motion for representing and analyzing three-dimensional motions, particularly of human heads and faces. They are captured via a stereo camera system, and a scene graph is constructed that contains low- and high-level vision information; it represents and describes the observed scene in each frame. By creating graphs of successive frames it is possible to match, track and segment the most important features and objects as a structure of each scene, and to reconstruct these features into three-dimensional space. The cue processor extracts feature information such as 2D and 3D position, velocity, age, neighborhood, condition, or relationships among features, which is stored in the vertices and weights of the graph to improve the estimation and detection of features and/or objects in subsequent frames. The structure and change of the graph lead to a robust determination and analysis of changes in the scene, and allow these changes to be segmented and determined even for temporally and partially occluded objects over a long image sequence.

Paper 158: Low-cost Multi-image Based 3D Human Body Modeling

Author(s): Zheng Wang, André Gagalowicz, Meijun Sun

A method for 3D human body modeling from a set of 2D images is proposed. The method is based upon the deformation of a predefined generic polygonal human mesh towards a specific one which should be very similar to the subject when projected on the input images. First, the user defines several feature points on the 3D generic model. Then a rough specific model is obtained by matching the 3D feature points of the model to the corresponding ones in the images and deforming the generic model. Second, the reconstruction is improved by matching the silhouettes of the deformed 3D model to those of the images. Third, the result is refined by applying three filters. Finally, texture mapping and skinning are implemented.

Paper 160: Modified Histogram Based Fuzzy Filter

Author(s): Ayyaz Hussain, M. Arfan Jaffar, Abdul Basit Siddiqui, Muhammad Nazir, Anwar M. Mirza

In this paper, a fuzzy-based impulse noise removal technique is proposed. The proposed filter is based on noise detection, fuzzy set construction, histogram estimation and a fuzzy filtering process. The noise detection process identifies the set of noisy pixels, which are used for estimating the histogram of the original image. The estimated histogram of the original image is used for fuzzy set construction using a fuzzy number construction algorithm. The fuzzy filtering process is the main component of the proposed technique; it consists of fuzzification, defuzzification and predicted-intensity processes to remove impulse noise. A sensitivity analysis of the proposed technique has been performed by varying the number of fuzzy sets. Experimental results demonstrate that the proposed technique achieves much better performance than state-of-the-art filters. The comparison of the results is based on a global error measure as well as local error measures, i.e. mean square error (MSE) and structural similarity index measure (SSIM).
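Of the error measures mentioned, MSE is the simplest to state; a minimal sketch (SSIM is more involved and omitted here):

```python
import numpy as np

def mse(original, restored):
    """Mean squared error between a reference image and a restored
    image - the global error measure used to compare filters."""
    diff = np.asarray(original, float) - np.asarray(restored, float)
    return float(np.mean(diff ** 2))

# One of four pixels off by 255: MSE = 255^2 / 4.
print(mse([[0, 255], [255, 0]], [[0, 255], [255, 255]]))  # 16256.25
```

Lower MSE (and higher SSIM) on noise-free references is how competing impulse-noise filters are ranked.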

Paper 161: Color Transfer in Images Based on Separation of Chromatic and Achromatic Colors

Author(s): Jae Hyup Kim, Do Kyung Shin, Young Shik Moon

In this paper, we propose a method which transfers the color style of a source image onto an arbitrary given reference image. Color misidentification causes wrong indexing at low saturation; therefore, the proposed method performs indexing after separating the image into chromatic and achromatic colors based on saturation. The proposed method is composed of the following four steps. In the first step, pixels in the image are separated into chromatic and achromatic color components by thresholding the saturation. In the second step, the separated pixels are indexed using a cylindrical metric. In the third step, the number and positional dispersion of the pixels determine the order of priority for each index color, and the average and standard deviation of each index color are calculated. In the final step, color is transferred in the Lab color space, with post-processing to remove noise and pseudo-contours. Experimental results show that the proposed method is effective for indexing and color transfer.
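The first step — separating chromatic from achromatic pixels by a saturation threshold — can be sketched as follows; the threshold value is an illustrative assumption, not the paper's:

```python
import colorsys

def is_chromatic(r, g, b, s_thresh=0.2):
    """Label an RGB pixel (components in [0, 1]) as chromatic if its
    HSV saturation exceeds a threshold; otherwise achromatic.
    The 0.2 threshold is an assumed illustrative value."""
    _h, s, _v = colorsys.rgb_to_hsv(r, g, b)
    return s >= s_thresh

print(is_chromatic(1.0, 0.0, 0.0))  # True  (saturated red)
print(is_chromatic(0.5, 0.5, 0.5))  # False (gray -- achromatic)
```

Only the chromatic pixels then go through hue-based indexing; achromatic pixels would be indexed by intensity alone.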

Paper 162: Realistic Face Animation for Audiovisual Speech Applications: A Densification Approach Driven by Sparse Stereo Meshes

Author(s): Marie-Odile Berger, Jonathan Ponroy, Brigitte Wrobel-Dautcourt

Being able to produce realistic facial animation is crucial for many speech applications in language learning technologies. Reaching realism requires acquiring and animating dense 3D models of the face, which are often acquired with 3D scanners. However, acquiring the dynamics of speech from 3D scans is difficult, as the acquisition time generally allows only sustained sounds to be recorded. On the contrary, acquiring the speech dynamics on a sparse set of points is easy using a stereovision system recording a talker with markers painted on his/her face. In this paper, we propose an approach to animate a very realistic dense talking head which makes use of a reduced set of 3D dense meshes acquired for sustained sounds, as well as the speech dynamics learned on a talker painted with white markers. The contributions of the paper are twofold. We first propose an appropriate principal component analysis (PCA) with missing data techniques in order to compute the basic modes of the speech dynamics despite possible unobservable points in the sparse meshes obtained by the stereovision system. We then propose a method for densifying the modes, that is, a method for computing the dense modes for spatial animation from the sparse modes learned by the stereovision system. Examples prove the effectiveness of the approach and the high realism obtained with our method.

Paper 163: Meshless Virtual Cloth

Author(s): Weiran Yuan, Yujun Chen, André Gagalowicz

A systematic description of a novel physically-based virtual cloth simulation method using meshless models is presented in this paper. The method is based upon continuum mechanics and discretized without explicit connections between nodes. The mechanical behavior of this cloth model is consistent and independent of the resolution. Kirchhoff-Love (KL) thin shell theory is used as the basis of the cloth model. Approaches to the parametrization and boundary-sewing problems are presented to suit meshless models. Furthermore, a co-rotational method is proposed in order to handle large deformations. As for collision handling, a new shape-function-based collision detection method is developed for meshless parameterized surfaces. The experimental results show that our cloth simulation model based upon meshless methods can produce natural and realistic results.

Paper 165: New Human Face Expression Tracking

Author(s): Daria Kalinkina, André Gagalowicz

In this paper we propose a new method for precise face expression tracking in a video sequence which uses a hierarchical animation system built over a morphable polygonal 3D face model. Its low-level animation mechanism is based upon the MPEG-4 specification, implemented via local point-driven mesh deformations adaptive to the face geometry. The set of MPEG-4 animation parameters is in turn controlled by a higher-level system based upon the facial muscle structure. This allows us to perform precise tracking of complicated facial expressions, as well as face-to-face retargeting by transmitting the expression parameters to different faces.

Paper 166: A Model-Based Approach for Human Body Reconstruction from 3D Scanned Data

Author(s): Thibault Luginbühl, Philippe Guerlain, André Gagalowicz

Human body scanners can quickly provide clouds of more than 200,000 points representing the surface of the human body. Many new applications can be derived from the ability to build a 3D model of a real person, especially in the textile industry, where they allow virtual try-on approaches. However, deriving a regular model suitable for these applications from scanned data is not a straightforward task. In this paper, we propose a model-based approach to modeling a specific person. We use a generic model which is segmented and whose points are organized in slices. We adapt the size of each body limb and then fit each slice to the data, limb by limb.

Paper 167: Region-Based vs Edge-Based Registration for 3D Motion Capture by Real-Time Monoscopic Vision

Author(s): David Gómez Jáuregui, Patrick Horain

3D human motion capture by real-time monocular vision without markers can be achieved by registering a 3D articulated model onto a video. Registration consists of iteratively optimizing the match between primitives extracted from the model and from the images, with respect to the model position and joint angles. We extend a previous color-based registration algorithm with a more precise edge-based registration step. We present an experimental analysis of residual error versus computation time, and we discuss the balance between the two approaches.

Paper 168: Supporting Diagnostics of Coronary Artery Disease with Multi-Resolution Image Parameterization and Data Mining

Author(s): Matjaz Kukar, Luka Sajn

Coronary artery disease has been described as one of the curses of the western world, as it is one of the most important causes of mortality. Therefore, clinicians seek to improve diagnostic procedures, especially those that allow them to reach reliable early diagnoses. In the clinical setting, coronary artery disease diagnostics is typically performed in a stepwise manner. The four diagnostic levels consist of evaluation of (1) signs and symptoms of the disease and ECG (electrocardiogram) at rest, (2) sequential ECG testing during controlled exercise, (3) myocardial perfusion scintigraphy, and finally (4) coronary angiography, which is considered the "gold standard" reference method. Our study focuses on improving diagnostic performance at the third diagnostic level. Myocardial scintigraphy is non-invasive; it results in a series of medical images that are relatively inexpensive to obtain. In clinical practice, these images are manually described (parameterized) by expert physicians. In the paper we present an innovative alternative to manual image evaluation – automatic image parameterization at multiple resolutions, based on texture description with specialized association rules. The extracted image parameters are combined into more informative composite parameters by means of principal component analysis, and finally used to build automatic classifiers with machine learning methods. Our experiments with synthetic datasets show that association-rule-based multi-resolution image parameterization equals or surpasses other state-of-the-art methods for finding multiple informative resolutions. Experimental results in coronary artery disease diagnostics confirm these findings, as our approach significantly improves the clinical results in terms of the quality of image parameters as well as diagnostic performance.

Paper 169: Interpreting Face Images by Fitting a Fast Illumination-Based 3D Active Appearance Model

Author(s): Salvador Ayala-Raggi, Leopoldo Altamirano-Robles, Janeth Cruz-Enriquez

We present a fast and robust iterative method for interpreting face images under non-uniform lighting conditions, using a fitting algorithm that employs an illumination-based 3D active appearance model to fit a face model to an input face image. Our method is based on improving the Jacobian at each iteration using the lighting parameters estimated in the preceding iterations. In the training stage, we precalculate a set of synthetic face images of basis reflectances and albedo, generated by displacing each of the model parameters one at a time; subsequently, in the fitting stage, we use all these images in combination with the lighting parameters to assemble a Jacobian matrix adapted to the illumination estimated in the last iteration. In contrast to other works where an initial pose is required to begin the fit, our approach only uses a simple initialization in translation and scale. At the end of the fitting process, our algorithm obtains a compact set of parameters of albedo, 3D shape, 3D pose and illumination which describe the appearance of the input face image.
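The iterative Jacobian-based fitting described here follows the general Gauss-Newton pattern; a generic sketch of one damped update step (our own illustration, not the paper's exact algorithm or its illumination-adapted Jacobian):

```python
import numpy as np

def gauss_newton_step(J, residual, damping=1e-6):
    """One damped Gauss-Newton parameter update:
    solve (J^T J + damping * I) dp = -J^T r for the increment dp."""
    JtJ = J.T @ J + damping * np.eye(J.shape[1])
    return np.linalg.solve(JtJ, -J.T @ residual)
```

In an AAM-style fitter, `J` would hold the precomputed appearance derivatives (here, re-weighted each iteration by the estimated lighting) and `residual` the difference between the synthesized and input images.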

Paper 174: EEG Data Driven Animation and its Application

Author(s): Olga Sourina, Alexei Sourin, Vladimir Kulish

Human electroencephalography (EEG) data-driven animation is often used in neurofeedback systems for concentration training in children and adults. Visualization of the time-series data can be used in neurofeedback and for data analysis. The paper proposes a novel method of 3D mapping of EEG data and describes the visualization system VisBrain, which was developed for EEG data analysis. We employ the concept of a dynamic 3D volumetric shape to show how the electrical signal changes over time. For the shape, a time-dependent solid blobby object, defined using implicit functions, is used. Beyond simple visual comparison, we propose applying set-theoretic ("Boolean") operations to the moving shapes to isolate, per time point, the activities common to both of them, as well as those unique to either one. The advantages of the method are demonstrated with examples from real EEG experiments. New emerging applications of EEG data-driven animation in e-learning, games, entertainment, and medicine are discussed.
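Set-theoretic operations on implicitly defined shapes are commonly realized with min/max (or R-function) combinations of the defining functions; a minimal sketch with hypothetical blob primitives of our own (f(p) >= 0 means p is inside the shape):

```python
import numpy as np

def blob(center, radius):
    """A simple implicit 'blobby' primitive: f(p) >= 0 inside a sphere."""
    c = np.asarray(center, dtype=np.float64)
    return lambda p: radius**2 - np.sum((np.asarray(p) - c)**2, axis=-1)

# Boolean operations on implicit functions via min/max combinations
def union(f, g):        return lambda p: np.maximum(f(p), g(p))
def intersection(f, g): return lambda p: np.minimum(f(p), g(p))
def difference(f, g):   return lambda p: np.minimum(f(p), -g(p))
```

Applied per time point to two animated shapes, `intersection` isolates the activity common to both, and `difference` the activity unique to one of them.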

Paper 177: Facade Structure Parameterization Based on Similarity Detection from Single Image

Author(s): Hong-Ping Yan, Chun Liu, André Gagalowicz, Cedric Guiard

In this paper, we reverse-engineer facade design from a single rectified image of an existing building facade, exploiting the similarity and hierarchy features of man-made objects. The inferred design is encoded into parametric grammar rules, named ArchSys, which provide a compact and semantically meaningful characterization of the building structure and can support the design of other architectures. Combined with a gradient-based mutual information measure, we propose a coarse-to-fine template-based similarity detection method to extract the structural patterns hierarchically, which reduces computation time while increasing the robustness of the whole system. Our approach can be applied to various architectural typologies to detect not only symmetrical features but also similar patterns within one facade image. A feedback loop refines the facade structure analysis and the parameters of the rule sets. Experimental results illustrate that our method is robust and broadly applicable.

Paper 178: Epipolar Angular Factorisation of Essential Matrix for Camera Pose Calibration

Author(s): Władysław Skarbek, Michał Tomaszewski

A novel epipolar angular representation of camera pose is introduced. It leads to a factorisation of the pose rotation matrix into three canonical rotations: around the dual epipole of the second camera, around the z axis, and around the dual epipole of the first camera. If the rotation around the z axis is increased by 90° and followed by the orthogonal projection onto the xy plane, the factorisation of the essential matrix is produced. The proposed five-parameter representation of the essential matrix is minimal. It exhibits fast convergence in the LMM optimization algorithm used for camera pose calibration. In this parametrisation, constraints based on the distance to the epipolar plane proved slightly more accurate than constraints based on the distance to the epipolar line.
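For context, the essential matrix being factorised is E = [t]×R, which indeed has exactly five degrees of freedom (three for the rotation, two for the unit translation direction); a minimal sketch (our own illustration) builds E from a pose and checks its characteristic singular-value structure (two equal singular values and one zero):

```python
import numpy as np

def essential_from_pose(R, t):
    """E = [t]_x R: the essential matrix from the relative rotation R and
    unit translation direction t of a calibrated camera pair."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])  # cross-product (skew-symmetric) matrix
    return tx @ R
```

Any minimal parametrisation of E, such as the epipolar angular one proposed here, must cover exactly these five degrees of freedom.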

Paper 182: Integrated Noise Modeling for Image Sensor Using Bayer Domain Images

Author(s): Yeul-Min Baek, Joong-Geun Kim, Dong-Chan Cho, Jin-Aeon Lee, Whoi-Yul Kim

Most image processing algorithms assume that an image contains additive white Gaussian noise (AWGN). However, since real noise is not AWGN, such algorithms are not effective on real images acquired by digital camera image sensors. In this paper, we present an integrated noise model for image sensors that handles shot noise, dark-current noise and fixed-pattern noise together. In addition, unlike most noise modeling methods, the model's parameters do not need to be re-configured for each input image once the model is built. The proposed noise model is thus well suited to a variety of imaging devices. We introduce two applications of our noise model: edge detection and noise reduction in image sensors. The experimental results show how effective our noise model is for both applications.
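Unlike AWGN, sensor noise of this kind is signal-dependent: the shot-noise variance grows with intensity, while dark-current and read noise add a signal-independent floor. A generic sketch of such a variance model (the parameter values are hypothetical illustrations, not the paper's calibrated model):

```python
import numpy as np

def sensor_noise_std(intensity, gain=0.01, read_var=4.0):
    """Generic signal-dependent noise model:
    variance = gain * intensity + read_var
    (Poisson-like shot noise plus a signal-independent dark/read term)."""
    return np.sqrt(gain * np.asarray(intensity, dtype=np.float64) + read_var)
```

A denoiser or edge detector driven by such a model can adapt its thresholds to the local intensity instead of assuming one global noise level.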

Paper 183: Searching High-Dimensional Neighbours: CPU-based Tailored Data-Structures versus GPU-based Brute-force Method

Author(s): Vincent Garcia, Frank Nielsen

Many image processing algorithms rely on the nearest neighbor (NN) or k nearest neighbor (kNN) search problem. Several methods have been proposed to reduce the computation time, for instance using space partitioning. However, these methods are very slow in high-dimensional spaces. In this paper, we propose a fast implementation of the brute-force algorithm using GPU (Graphics Processing Unit) programming. We show that our implementation is up to 150 times faster than the classical approaches on synthetic data, and up to 75 times faster in real image processing applications (finding similar patches in images and texture synthesis).
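The brute-force kNN computation being accelerated here can be sketched on the CPU with NumPy (our own illustration of the data-parallel structure that maps naturally onto a GPU; the function name is ours):

```python
import numpy as np

def knn_brute_force(queries, refs, k):
    """Brute-force kNN: compute the full pairwise squared-distance matrix,
    then partially sort each row to extract the k closest reference points."""
    # ||q - r||^2 = ||q||^2 - 2 q.r + ||r||^2, for all pairs at once
    d2 = (np.sum(queries**2, axis=1)[:, None]
          - 2.0 * queries @ refs.T
          + np.sum(refs**2, axis=1)[None, :])
    idx = np.argpartition(d2, k - 1, axis=1)[:, :k]   # k smallest, unordered
    order = np.argsort(np.take_along_axis(d2, idx, axis=1), axis=1)
    return np.take_along_axis(idx, order, axis=1)     # ordered by distance
```

Every entry of the distance matrix is independent of the others, which is why this brute-force formulation parallelizes so well on a GPU while space-partitioning structures do not.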



The data on this page is © Mirage 2009. All rights reserved.

The server hosting this website is owned by the department of Telecommunications and Information Processing (TELIN) of Ghent University.


