In the last decade, consumer imaging devices such as camcorders, digital cameras, smartphones and tablets have become widespread. The increase in their computational performance, combined with higher storage capacity, has made it possible to design and implement advanced imaging systems that automatically process visual data in order to understand the content of the observed scenes.
In the coming years, wearable visual devices that acquire, stream and log video of our daily life will become pervasive. This exciting new imaging domain, in which the scene is observed from a first-person point of view, poses new challenges to the research community and offers the opportunity to build new applications. Many results in image processing and computer vision related to motion analysis, tracking, scene and object recognition, and video summarization have to be re-defined and re-designed for the emerging wearable imaging domain.
In the first part of this course we will review the main algorithms involved in the single-sensor imaging device pipeline and describe some advanced applications. In the second part we will give an overview of recent trends in imaging devices, with a focus on the wearable domain. Challenges and applications will be discussed with reference to the state-of-the-art literature.