Acivs 2011

Advanced Concepts for Intelligent Vision Systems


Aug. 22-25 2011

Het Pand, Ghent, Belgium


Acivs 2011 Invited Speakers' Biographies

Marc De Mey

UGent and Koninklijke Vlaamse Academie van België

Marc De Mey is currently director of VLAC (Vlaams Academisch Centrum), the IAS (Institute for Advanced Study) of the Royal Flemish Academy of Belgium (KVAB), and has been re-elected to this honorary post for the period 2011-2014. He has a background in psychology and philosophy of science and is the author of The Cognitive Paradigm (3rd ed., Chicago University Press, 1992), also available in Japanese. Until his retirement from Ghent University in 2005 he was professor of cognitive science in the Faculty of Letters. His teaching duties for art historians included a course on historical and current theories of visual perception. He has been a Fulbright scholar at the Harvard Center for Cognitive Studies and Peter Paul Rubens professor at UC Berkeley. His pursuit of the Piagetian approach to the study of perspective led to the exploration of advanced optics in the illuminations and paintings of Jan Van Eyck and the Flemish Primitives.

Presentation: Optical Issues in the Paintings of Jan Van Eyck

The major decade in the artistic life of Jan Van Eyck (c. 1390-1441) was the one in which linear perspective was codified as the backbone of the art of painting by Leon Battista Alberti in De pictura (1435). According to archival research by art historian Hugo van der Velden of Harvard University, the completely assembled Ghent Altarpiece, by far the largest work of the Van Eycks, came about in that very same year (and not in 1432, as currently indicated on the frame). Art historians have valued Jan Van Eyck for his seemingly painstaking realism in depicting materials, but they have deplored his failure to adopt the strict rules of linear perspective. In this talk Marc De Mey will highlight some of the clever tricks and ingenious devices of Jan Van Eyck that indicate an advanced optical understanding complementary, and even superior, to Albertian perspective. The talk will be illustrated with materials and macro photographs in a PowerPoint presentation based on the digitized Dierick collection of high-quality photographic negatives made available to Ghent University by the family of the late father Alfons Dierick.

Peter Meijer

Peter Meijer received his MSc in Physics from Delft University of Technology in 1985 and his PhD in Electronics from Eindhoven University of Technology in 1996. He worked for over twenty years at Philips Research (1985-2006) and for four years at NXP Semiconductors (2006-2010). In parallel with his work in the electronics industry he developed an image-to-sound conversion system known as "The vOICe", aimed at a form of artificial vision for the blind. Since 2011 he has been founder and director of Metamodal BV.

Presentation: Camera-based sensory substitution and augmented reality for the blind

Rapid developments in mobile computing and sensing with smartphones open up new opportunities for augmenting our reality with information and experiences that our senses could not provide directly. One current trend is towards augmented reality applications based on location-based services (LBS) and computer vision. Apart from mass-market uses, new uses also arise in niche markets such as technology for the blind. In my talk I will discuss how, despite its more limited commercial value, this particular niche market is extremely interesting for bringing together research on man-machine interfaces, computer vision, brain plasticity, synesthesia, and even contemporary philosophy. It is also an area where fundamental research (e.g. on brain plasticity) may prove directly socially relevant through applications that are readily made globally available over the web and that run on mass-market devices. Hybrid applications convey, via sound or touch, both the raw visual information from live camera views and semantic information for nearby items of interest, as recognized through computer vision or identified through location databases. Moreover, neuroscience research has in the past decade established that the visual cortex of blind people becomes responsive to sound and touch, adding some biological plausibility to the idea of creating non-invasive sensory bypasses in the form of sensory substitution.

Ben Kröse

University of Amsterdam

Ben Kröse received his Ph.D. from Delft University of Technology. He worked for two years at the California Institute of Technology on models of human vision and algorithms for computer vision. He then moved to the University of Amsterdam to start a group on neuro-informatics and robotics. He is currently professor of Ambient Robotics at the University of Amsterdam. His research focuses on interactive smart devices, which are expected to be widely applied in smart services for health, safety, wellbeing, security and comfort. He is scientific manager of "Create-IT," a research centre for IT and the creative industry at the Amsterdam University of Applied Sciences. In the field of intelligent and autonomous systems he has published 33 papers in scientific journals, edited 5 books and special issues, and authored more than 100 conference papers. He holds a patent on multi-camera surveillance. He is a member of the IEEE, the Dutch Pattern Recognition Association and the Dutch AI Association.

Presentation: Distributed Smart Cameras for Health and Wellbeing

With the growing elderly population there is increasing interest in systems that can monitor the activities of the elderly and use that information for coaching or for raising alarms. Such systems enable people to live independently in their homes for a longer period of time. I will give an overview of the types of activities that are relevant to monitor and of how cameras can be used in these applications. In this context I will present our work on fall detection with different camera systems. Apart from alarm functions, cameras are also used for therapy and gaming, and I will present some of our work in this field. Finally I will present some of our work on the privacy issues related to camera monitoring.

Lambert Spaanenburg

Lund University

Lambert Spaanenburg has a long history of academic and industrial research on vision. At the start of the VLSI era he worked on hardware architectures such as GPUs and image renderers. Later his attention moved to neural vision engineering, notably for assisted driving with Daimler and for tolling with Dacolian. More recently, wireless applications came into focus, where mixed media and augmented range push the frontiers of mobile, real-time services in behavioral modeling and low-light vision, and his research has moved on to distributed intelligence in vision networks. Over this journey Lambert has communicated through around 300 reviewed papers and book chapters. He recently co-authored his vision on safe and secure web-based services in the Springer/Kluwer book "Cloud Connectivity and Embedded Sensory Systems." Currently he can be contacted through RaviteQ, Nocturnal Vision and Base, all in Sweden.

Presentation: The tale of 1000 cameras

More and more cameras are appearing in public areas. Most of these cameras merely collect images for later inspection and provide little more. The presentation discusses what keeps us from exploiting them in more intelligent, collaborative ways. We will look at cameras with multiple vision sensors and at intelligent mobile networks. From there, we will outline the 1000-camera project for quality control on factory lines.

Rainer Stiefelhagen

Karlsruhe Institute of Technology

Dr. Rainer Stiefelhagen is a professor at the Karlsruhe Institute of Technology, where he is directing the research field on "Computer Vision for Human-Computer Interaction." He is also head of the research field "Perceptual User Interfaces" at the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation (IOSB) in Karlsruhe.

His research focuses on the development of novel techniques for the visual perception of humans and their activities, in order to facilitate perceptive multimodal interfaces, humanoid robots and smart environments.

In 2007, Dr. Stiefelhagen was awarded one of the few German Attract projects in the area of computer science funded by the Fraunhofer Foundation. Within this project, his aim is to build an attentive smart control room for crisis applications. Dr. Stiefelhagen has been one of the scientific coordinators of the European Integrated Project CHIL (Computers in the Human Interaction Loop). He is a member of the German government-sponsored collaborative research cluster "Humanoid Robots - Learning and Cooperative Multimodal Robots," where he directs the work on visual perception of humans and their activities. He is also a member of the Franco-German Quaero project, where he works on the retrieval of persons in multimedia content.

His work has appeared in more than one hundred publications in journals and conference proceedings. He was a program chair of the International Conference on Automatic Face and Gesture Recognition 2011 and is a standing committee member of the International Conference on Multimodal Interfaces and the International Workshop on Machine Learning for Multimodal Interaction. He is a member of the editorial board of the Springer Journal on Multimodal Interaction.

In 2008 he co-founded Videmo Intelligente Videoanalyse GmbH & Co. KG, Karlsruhe. Videmo offers solutions for intelligent video analysis, with a focus on applications related to security and customer monitoring. Dr. Stiefelhagen received his Doctoral Degree in Engineering Sciences in 2002 from the Universität Karlsruhe (TH).

Presentation: Computers Seeing Humans --- Vision-based Perception of Humans for Smart Environments and Other Applications

Vision-based perception of humans has a wide range of applications, from building human-friendly technical systems such as human-friendly robots and smart environments to surveillance and image retrieval. In this talk I will present some of our recent efforts towards building such systems. In particular I will talk about an ongoing smart control room project, where our aim is to build an attentive smart room to support crisis control work. In this room, real-time perception of people is used to enable personalized workspaces that follow people around the room, and to allow gesture- and gaze-based interaction with large displays and across devices in the room. I will also talk about some ongoing efforts in person identification and retrieval in multimedia data and camera networks. Finally, I will mention some commercial use cases of such technology, such as video-based customer monitoring, including age and gender recognition.

The data on this page is © Acivs 2011. All rights reserved.

The server hosting this website is owned by the department of Telecommunications and Information Processing (TELIN) of Ghent University.
