Acivs 2015

Advanced Concepts for Intelligent Vision Systems


Oct. 26-29, 2015

Museo Diocesano, Catania, Italy

LNCS

Acivs 2015 Abstracts


Invited papers

Paper 201: Solidarity Filter for Noise Reduction of 3D Edges in Depth Images

Author(s): Hani Javan Hemmat, Egor Bondarev, Peter de With

3D applications processing depth images significantly benefit from 3D-edge extraction techniques. Intrinsic sensor noise in depth images is largely inherited by the extracted 3D edges. Conventional denoising algorithms remove some of this noise, but also weaken narrow edges, amplify noisy pixels and introduce false edges. We therefore propose a novel solidarity filter for noise removal in 3D edge images without artefacts such as false edges. The proposed filter identifies neighbouring pixels with similar properties and connects them into larger segments beyond the size of a conventional filter aperture. The experimental results show that the solidarity filter outperforms the median and morphological close filters with 42% and 69% higher PSNR, respectively. In terms of the mean SSIM metric, the solidarity filter provides results that are 11% and 21% closer to the ground truth than the corresponding results obtained by the median and close filters, respectively.
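The idea of connecting neighbouring edge pixels into segments larger than a fixed filter aperture can be sketched with a connected-component pass. The function below is a hypothetical illustration (not the authors' implementation): it keeps only edge segments above a size threshold, so isolated noisy pixels disappear while long, narrow edges survive intact.

```python
import numpy as np
from scipy import ndimage

def solidarity_denoise(edge_map, min_segment=20):
    """Remove isolated noisy pixels from a binary edge map by keeping
    only 8-connected segments of sufficient size (illustrative sketch)."""
    # Label 8-connected components of the binary edge map.
    labels, _ = ndimage.label(edge_map, structure=np.ones((3, 3), dtype=int))
    # Count pixels per segment; label 0 is the background.
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_segment
    keep[0] = False  # never keep the background
    return keep[labels]
```

Unlike a median filter, this pass never thins a long edge: a segment is kept or dropped as a whole, which mirrors the paper's motivation of not weakening narrow edges.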

Paper 231: Quasar - Unlocking the Power of Heterogeneous Hardware

Author(s): Jonas De Vylder, Bart Goossens

Computationally and data-intensive applications, such as video processing algorithms, are traditionally developed in programming languages such as C/C++. In order to cope with ever more demanding requirements (e.g., real-time processing of large datasets), hardware accelerators such as GPUs have emerged to aid multi-core CPUs with computationally intensive tasks. Because these accelerators offer performance improvements for many (but often not all) operations, the programmer needs to decide which parts of the code are best developed for the accelerator and which for the CPU.

Development for heterogeneous devices comes at a cost: 1) the sophisticated programming and debugging techniques lead to a steep learning curve, 2) development and optimization often require substantial effort and time from the programmer, 3) different versions of the code often need to be written for different target platforms, 4) the resulting code may not be future-proof: it is not guaranteed to work optimally on future devices.

In this talk we present a new programming language, Quasar, which mitigates these common drawbacks. Quasar is an easy-to-learn, high-level programming language that is hardware-independent, ideal for both rapid prototyping and full deployment on heterogeneous hardware.

Paper 232: Joint Optical Designing: Enhancing Optical Design by Image Processing Consideration

Author(s): J. Rollin, F. Diaz, MA. Burcklen, E. Mujic, C. Fontaine

The recent surge in European projects (Panorama, Copcams, Exist, Image Capture of the Future, ...) illustrates not only a high level of research activity in this area but also a strong interest from industry. Computer vision and video processing have entered a new era offering large data rates, ever increasing spatial and temporal resolution and multiple imaging modalities.

As a consequence, this also opens new areas for optimal optical solutions. It enables new multispectral imaging systems, as well as enhanced performance through the combined optimization of optical designs (where necessary including wavefront-coding optical components), while maintaining low-power, compact and real-time image processing.

The talk will illustrate how these improved image processing techniques open a new era of joint optical design, optimally combining optical design and real-time image processing.

Paper 233: Computer Vision Applications and Their Industrial Exploitation

Author(s): Alessandro Capra

During the talk, the main business opportunities in the field of computer vision for the semiconductor industry will be outlined. Market analysis, R&D trends, time to market, customer requirements, algorithm complexity and architectural implementations are the key aspects to be analysed for product development. A few potential R&D activities, such as people detection and feature extraction, will be presented, and their evolution towards a product will be outlined, together with a short overview of R&D evolution.

Paper 234: Smart Image Sensor for Advanced Use and New Applications

Author(s): Michael Tchagaspanian


Paper 236: An Adaptive Framework for Imaging Systems

Author(s): Andreas Erik Hindborg, Lars Frydendal Bonnichsen, Nicklas Bo Jensen, Laust Brock-Nannestad, Christian W Probst, and Sven Karlsson

Computer vision and video processing systems handle large amounts of data with varying spatial and temporal resolution and multiple imaging modalities. The current best practice is to design video processing systems with an overcapacity, which avoids underperforming in the general case, but wastes resources. In this work we present an adaptive framework for imaging systems that aims at minimizing waste of resources. Depending on properties of the processed images, the system dynamically adapts both the implementation of the processing system and properties of the underlying hardware.

Paper 237: Binary Code Generation for Multimedia Application on Embedded Platforms

Author(s): Henri-Pierre Charles

Multimedia applications such as video compression, image processing and face recognition now run on embedded platforms. The huge computing power needed is provided by the evolution of transistor density and by the use of specialized accelerators. These accelerators are supported by multimedia instruction sets.

Using these complex instructions can be a nightmare for the engineer because there are many ways to program them, the quality of compiler support can vary depending on the compiler/platform pair and, worse, performance can be data dependent. Using libraries can be an option if such libraries exist and provide sufficient performance.

In this talk I'll illustrate the difficulty of generating binary code for this application domain with practical examples of code generation. Then I'll present deGoal, a tool developed in-house to solve these problems.

Paper 238: Goals and Directions of the Newly Started EXIST Project

Author(s): Piet De Moor, Andy Lambrechts, Jonathan Borremans, and Barun Dutta

A proposal titled EXIST (‘Extended image sensing technologies’) was accepted during the first EC/ECSEL call of 2014. EXIST will investigate and develop innovative new technologies for the image sensors needed in the next-but-one (N+2) generation of several application domains. The image sensor research will focus on enhancing and extending the capabilities of current CMOS imaging devices. The EXIST consortium will develop innovative new technologies for image sensors:

- New designs (architectures) and process technology (e.g. 3D stacking) for better pixels (lower noise, higher dynamic range, higher quantum efficiency, new in-pixel functionality) and more pixels at higher speed (higher spatial and temporal resolution, higher bit depth), time-of-flight pixels, local (on-chip) processing, and embedded CCD-in-CMOS time-delayed integration.

- Extended sensitivity and functionality of the pixels: extension into the infrared, filters for hyperspectral and multispectral imaging, better colour filters for a wider colour gamut, and Fabry-Pérot interference cells.

- Improved optical, analog and digital imaging pipelines to enable high frame rates, better memory management, etc.

Together with sensor-related processing, these image sensor and filter designs will be demonstrated in 9 different demonstrators in the following application domains: Security, Healthcare, Digital Lifestyle and Agriculture.

Paper 239: Image Features for Illuminant Estimation and Correction

Author(s): Raimondo Schettini

Many computer vision applications, for both still images and videos, can use illuminant estimation and correction algorithms as a pre-processing step to ensure that the recorded colour of the objects in the scene does not change under different illumination conditions. It can be shown that illuminant estimation is an ill-posed problem; its solution therefore lacks uniqueness and stability. To cope with this, common solutions usually exploit heuristic assumptions about the statistical properties of the expected illuminants and/or of the reflectance of the objects in the scene. In this keynote I briefly review state-of-the-art methods and illustrate promising research aimed at improving single- and multiple-illuminant estimation by using features automatically extracted from the image.
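One widely used heuristic of the kind the abstract mentions is the grey-world assumption: the average reflectance of a scene is achromatic, so any colour cast in the channel means is attributed to the illuminant. The sketch below applies it as a simple illuminant correction; it is an illustrative baseline, not a method from the keynote.

```python
import numpy as np

def gray_world_correct(img):
    """Estimate the illuminant under the grey-world assumption and
    normalise each channel. img: float array (H, W, 3) in [0, 1]."""
    # The per-channel mean serves as the illuminant estimate.
    illuminant = img.reshape(-1, 3).mean(axis=0)
    # Scale each channel so its mean matches the overall grey level.
    gain = illuminant.mean() / np.maximum(illuminant, 1e-8)
    return np.clip(img * gain, 0.0, 1.0)
```

The ill-posedness shows up immediately: a genuinely reddish scene and a neutral scene under reddish light produce the same channel means, so the heuristic cannot distinguish them.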

Paper 240: Domain Adaptation for Visual Applications

Author(s): Gabriela Csurka

Machine learning applications generally rely on a large number of hand-labelled examples. However, labelling is expensive and time consuming due to the significant amount of human effort involved. Domain adaptation addresses the problem of leveraging labelled data in one or more related domains, often referred to as source domains, when learning a classifier for unseen data in a target domain. Adaptation across domains is a challenging task for many real applications, including NLP tasks, spam filtering, speech recognition and various visual applications. In this talk, after a brief overview of the different types of domain adaptation methods, I will focus mainly on several visual scenarios and give a more detailed view of a few recent methods.
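A concrete example of a simple feature-level adaptation method is correlation alignment (CORAL), which matches the second-order statistics of source features to the target domain before training a classifier. The sketch below is a minimal illustration of that baseline and is not taken from the talk.

```python
import numpy as np

def coral(source, target, eps=1e-6):
    """Align the covariance of source features to the target domain
    (CORAL baseline, illustrative sketch). Arrays are (n_samples, d)."""
    d = source.shape[1]
    # Regularised covariances for numerical stability.
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)
    # Whiten the source features, then re-colour with the target covariance.
    whitener = np.linalg.cholesky(np.linalg.inv(cs))
    recolour = np.linalg.cholesky(ct)
    centred = source - source.mean(axis=0)
    return centred @ whitener @ recolour.T + target.mean(axis=0)
```

After this transform, a classifier trained on the aligned source features sees inputs whose first- and second-order statistics match the target domain, which is often enough to recover part of the accuracy lost to domain shift.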

Paper 241: The ICAF Project: Image CApture of the Future

Author(s): Ljubomir Jovanov, Jochem Herrmann, Klaas Jan Damstra, Maarten Kuijk, Wilfried Philips, Hiep Luong, Bram Devuyst, Willem-Jan Dirks, Jean-Baptiste Lorent, Pieter Jonker, Philippe Bekaert and Roel Aerts

One of the primary objectives of the ICAF project was to achieve major advancements in image capture technology and systems to further increase automation in high-added-value production processes, state-of-the-art security systems, and the traffic and automotive domains. Moreover, it has enhanced quality of life by offering the creative industries higher-resolution image capture technologies and Video over Internet Protocol. ICAF has provided faster and more sensitive image sensors to allow the next generation of equipment to achieve higher accuracy and speed. Next, ICAF has also delivered the integrated circuits for the next generation of the CoaXPress interface standard. Because machine vision applications increasingly make use of 3D, research has been conducted on image sensor and processing architectures for applications such as automated optical inspection in electronics manufacturing. Within ICAF, we have also performed research on 3D algorithms for entertainment and media production. The broadcast market has moved from standard-definition television to 1080-line interlaced high-definition television. The project has developed and demonstrated technology that achieves three times the frame rate of today, with the same picture quality per frame, by using innovative noise reduction algorithms. Another important result in this respect is single-lens 3D image capture at HD resolution, together with algorithms for stereo view interpolation and depth map generation. Increased data rates require adequate data compression algorithms; within ICAF, research has been conducted on mapping new compression codecs such as MVC onto FPGAs for real-time operation in a 3D broadcast environment.


This software generating these pages is © (not Ghent University), 2002-2019. All rights reserved.

The data on this page is © Acivs 2015. All rights reserved.

The server hosting this website is owned by the department of Telecommunications and Information Processing (TELIN) of Ghent University.

