Description

Smart cameras combine video sensing, processing, and communication on a single embedded platform. Networks of smart cameras are real-time distributed embedded systems that perform computer vision using multiple cameras. This new approach has emerged from simultaneous advances in four key disciplines: computer vision, image sensors, embedded computing, and sensor networks. Recently, these visual sensor networks have attracted considerable interest in research and industry; applications include surveillance, assisted living and smart environments.

This tutorial focuses on the “embedded aspects” of smart camera networks. Since these networks represent large, resource-constrained, distributed embedded systems, they serve as challenging platforms for innovative embedded systems research. Visual data is processed in real time by distributed sensing and computing nodes. Although distributing sensing and processing introduces several complications, we believe its benefits far outweigh the challenges of designing and building a distributed smart camera network. As in many other applications, distributed systems scale much more effectively, require less network bandwidth and achieve shorter response times than centralized systems do.
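
To make the bandwidth argument concrete, the short back-of-the-envelope calculation below compares a camera that streams raw video to a central server with one that performs vision on board and transmits only extracted object descriptors. All figures (8-bit VGA resolution, 25 fps, 100 descriptors per second of 64 bytes each) are illustrative assumptions, not numbers from the tutorial:

    #include <stdio.h>

    int main(void)
    {
        /* Centralized setup: each camera streams raw 8-bit VGA video. */
        const double raw_Bps = 640.0 * 480.0 * 1.0 * 25.0;  /* ~7.7 MB/s */

        /* Distributed setup: each camera sends only object descriptors
         * (assumed: 100 detections/s, 64 bytes each). */
        const double feat_Bps = 100.0 * 64.0;               /* 6.4 KB/s */

        printf("raw video:   %.1f MB/s per camera\n", raw_Bps / 1e6);
        printf("descriptors: %.1f KB/s per camera\n", feat_Bps / 1e3);
        printf("reduction:   %.0fx\n", raw_Bps / feat_Bps);
        return 0;
    }

Under these assumptions the in-network processing reduces the per-camera traffic by three orders of magnitude, which is what makes larger camera networks feasible on shared wireless links.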

The development of computer vision components (HW and SW) in smart cameras is a particularly challenging task. The nature of embedded devices limits the computational power available to the applications. The limitations of the memory and I/O systems are often more severe, since they hinder the processing of large image data. Implementing and debugging on an embedded device pose several challenges, particularly when computation is offloaded to an FPGA and/or a DSP.
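
As an illustration of working within such memory limits, the sketch below processes an image in streaming fashion with a rolling window of three scan lines, the classical line-buffer scheme also used in FPGA pipelines. It is a minimal, self-contained C example under assumed parameters; the sensor interface read_line and its synthetic test pattern are hypothetical stand-ins:

    #include <stdint.h>
    #include <stdio.h>

    #define WIDTH  64   /* image width in pixels (small for illustration) */
    #define HEIGHT 48   /* image height in pixels */

    /* Hypothetical sensor interface: reads one scan line from the imager.
     * Here it just generates a synthetic test pattern. */
    static void read_line(uint8_t *line, int y)
    {
        for (int x = 0; x < WIDTH; x++)
            line[x] = (uint8_t)((x + y) & 0xFF);
    }

    /* Compute one output row of a 3x3 box filter from three input rows.
     * Only three line buffers are resident at any time, so the working
     * set is O(3 * WIDTH) instead of O(WIDTH * HEIGHT). */
    static void filter_row(const uint8_t *prev, const uint8_t *curr,
                           const uint8_t *next, uint8_t *out)
    {
        for (int x = 1; x < WIDTH - 1; x++) {
            int sum = 0;
            for (int dx = -1; dx <= 1; dx++)
                sum += prev[x + dx] + curr[x + dx] + next[x + dx];
            out[x] = (uint8_t)(sum / 9);
        }
        out[0] = curr[0];                 /* pass border pixels through */
        out[WIDTH - 1] = curr[WIDTH - 1];
    }

    int main(void)
    {
        uint8_t lines[3][WIDTH];          /* rolling window of scan lines */
        uint8_t out[WIDTH];

        read_line(lines[0], 0);
        read_line(lines[1], 1);
        for (int y = 1; y < HEIGHT - 1; y++) {
            read_line(lines[(y + 1) % 3], y + 1);
            filter_row(lines[(y - 1) % 3], lines[y % 3],
                       lines[(y + 1) % 3], out);
            /* On a real smart camera the filtered row would now feed the
             * next vision stage or be streamed over the network. */
        }
        printf("processed %d rows with a %d-byte working set\n",
               HEIGHT - 2, (int)sizeof lines);
        return 0;
    }

On a DSP or FPGA the same rolling-buffer idea maps naturally onto DMA double-buffering or on-chip line buffers, which is why streaming formulations are generally preferred over whole-frame processing on such platforms.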

Aim of the Tutorial

The motivation for this tutorial is to bring together researchers and students working in the various fields related to smart camera networks and to introduce this topic to the embedded systems community. This half-day tutorial provides a unique opportunity to get acquainted with the state of the art and open problems in smart camera networks and to get to know the work and the people conducting research in this field. This tutorial should foster a fruitful exchange of ideas and stimulate future research between the smart camera and embedded systems communities.

Course Material

Please note that this is a preliminary version of the course material. It will be revised over the coming weeks.

1. Introduction

  • Motivation and Overview
Part 1 (PDF)

2. Smart imagers and smart cameras

Part 2 (PDF)

3. Embedded image processing

  • Heterogeneous Platforms (FPGAs, DSPs, ...)
  • Dedicated Processors (GPUs and the Cell)
Part 3 (PDF)

4. Visual Sensor Networks

  • Distributed Sensing and Processing
Part 4 (PDF)

5. Conclusion

  • Research Challenges
  • Applications
Part 5 (PDF)

Speakers’ Biographies

Bernhard Rinner is a full professor and chair of Pervasive Computing at Klagenfurt University (Austria), where he currently serves as Vice Dean of the Faculty of Technical Sciences.

He received his MSc and PhD in Telematics from Graz University of Technology in 1993 and 1996, respectively. Before joining Klagenfurt he was with Graz University of Technology and held research positions at the Department of Computer Sciences at the University of Texas at Austin in 1995 and 1998/99. His current research interests include embedded computing, embedded video and computer vision, sensor networks and pervasive computing. He has authored and co-authored more than 100 papers for journals, conferences and workshops, has led many research projects, and has served as a reviewer, program committee member, program chair and editor-in-chief.

Prof. Rinner has been co-founder and general chair of the ACM/IEEE International Conference on Distributed Smart Cameras and has served as chief editor of a special issue on this topic in the Proceedings of the IEEE. He is a member of the IEEE, IFIP and TIV (Telematik Ingenieurverband).

François Berry received his Master's and doctoral degrees in Electrical Engineering from the University Blaise Pascal in 1994 and 1999, respectively. His PhD, undertaken at LASMEA in Clermont-Ferrand, was on visual servoing and robotics. Since September 1999 he has been “Maître de Conférences” (Associate Professor) at the University Blaise Pascal and a member of the Perception System and Robotics group (within GRAVIR, LASMEA–CNRS). His research focuses on smart cameras, active vision, embedded vision systems and hardware/software co-design.

He is in charge of a Master's programme in Microelectronics and currently teaches VHDL, microelectronics, hardware considerations for signal processing, vision sensors and computer architecture. He has authored and co-authored more than 40 papers for journals, conferences and workshops. He has also led several research projects (Robea, ANR, Euripides) and has served as a reviewer and a program committee member.

Dominique Ginhac received his PhD in electronics and image processing from the University Blaise Pascal, Clermont-Ferrand, France, in 1999.

His PhD work at LASMEA (UMR CNRS 6602) focused on parallel architectures for real-time image processing. In 2000 he became an associate professor at the University of Burgundy, France, and a member of LE2I (UMR CNRS 5158, Laboratory of Electronics, Computing and Imaging Sciences). Since 2009 he has been a full professor at LE2I. His main research topics are image acquisition and embedded image processing on CMOS VLSI chips.

Joel Falcou received his Master's and doctoral degrees in electrical engineering from the University Blaise Pascal, Clermont-Ferrand, in 2003 and 2006, respectively. His PhD, completed at LASMEA in Clermont-Ferrand, was on tools and parallel architectures for real-time image processing and computer vision.

Since September 2008 he has been an assistant professor at the University Paris-Sud and a researcher at the Laboratoire de Recherche en Informatique in Orsay. His work focuses on investigating high-level programming models for parallel architectures (present and future) and on providing efficient implementations of such models using high-performance language features such as generative programming in general and metaprogramming in particular.