.: SIBGRAPI 2013 - Tutorials :.

Citation format

@inproceedings{key,
author = {...},
title = {...},
year = {2013},
month = {august},
booktitle = {SIBGRAPI 2013 (XXVI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T))},
editor = {Carlos Hitoshi Morimoto and César Beltrán Castañón},
address = {Arequipa, Peru},
url = {http://www.ucsp.edu.pe/sibgrapi2013/eproceedings/}
}

Survey Papers

T1 - Getting Started with Videogame Development (handouts)

Junior Rojas
Pontifical Catholic University of Peru

Abstract: "This paper provides information about various aspects involved in game development. It describes some general concepts, with emphasis on game engine architecture, as well as specific technologies related to the described concepts. Covered topics include game graphics rendering, collision detection, physics, artificial intelligence, scripting and an introduction to the Unity game engine."

T2 - Visual immersion issues in Virtual Reality (handouts)

Guillaume Moreau
Ecole Centrale de Nantes, France

Abstract: "Thanks to immersion and interaction, Virtual Reality (VR) offers a shift of paradigm compared to traditional computer graphics or simulation software. Most VR application include a visual rendering part. However, efficient and relevant visual interfacing of a human user (who can be a designer, the final user of a future product, a trainee or the subject of an experiment) raises issues about visual interfaces and depth perception in computer generated images. In this tutorial we propose methodological aspects for the design of the visual part of VR applications. We particularly focus on the commonly used stereoscopic vision by studying its constraints and showing how an efficient stereoscopic application should be designed. Expected outcome for attendees: basic knowledge about the human visual system and its main features with respect to Virtual Reality, knowledge about devices for visual interfaces, know how to choose a visual interface, know about stereo constraints. 3D or VR developers should also be able to program correct stereo applications. Experience of bad stereo conditions and limits might be provided in a practical session if adequate hardware is available."

T3 - Remote Eye Tracking Systems: Technologies and Applications (handouts)

Fabricio Narcizo, Jose Eustaquio de Queiroz, Herman Gomes
Universidade Federal de Campina Grande

Abstract: "Eye tracking is an active multidisciplinary research field, which has shown great progress in the last decades. Eye tracking is the process of monitoring eye movements in order to determine the point of gaze or to analyze motion patterns of an eye relative to the head or the environment. This process has several applications in many research fields such as Medicine, Psychology, Human-Computer Interaction, Marketing, Advertising, Digital Journalism, among others. Moreover, eye tracking provides essential information to other technological processes such as face detection, usability testing, biometric identification, human behavior studies and human-computer interaction tasks. In the field of eye tracking, the remote term is used for classifying an eye tracker in which its components do not have any physical contact with the users body. In general, eye tracking systems that present high accuracy rates in the process for estimating the point of gaze are very expensive. However, nowadays it is possible to develop remote eye tracking systems with good cost-benefit (i.e., through the use of the off-the-shelf hardware components and open-source libraries). Within this context, this tutorial aims to present an overview of the technologies and applications related to eye tracking. We will explain hardware architectures was well as the underlying computer vision processes employed to the analysis of the user's ocular parameters captured by a remote eye tracker. This tutorial addresses researchers, students, professors and professionals who work in the field of Computer Graphics, Digital Image Processing, Computer Vision and Pattern Recognition, who wish to learn the fundamentals of the field of eye tracking. The participants will be presented to state-of-the-art work in eye tracking technologies and applications. Moreover, sufficient practical information for building successful remote eye tracking systems will also be provided in this tutorial. This tutorial is organized in 6 parts, as described next. (A) Contextual introduction (30 minutes). We begin with a brief review on the field of eye tracking research. Then, we briefly discuss the use of eye tracking in the field of Computer Science, and its relations with the fields of Computer Vision, Pattern Recognition and Digital Image Processing. (B) Human visual system (30 minutes). A general presentation of the human visual system (HVS) for identifying how the eye tracking performs the monitoring of the user's visual attention and the user's eye movements through the analysis of his/her ocular parameters. (C) Remote eye tracker (2 hours). Presentation of the features of the physical components of a remote eye tracker and necessary information to the installation of a low-cost and off-the-shelf hardware components. (D) Eye tracking methods (2 hours). Presentation of the main computer-vision-based eye tracking methods and the development process of eye tracking systems. (E) Final considerations (15 minutes) In this part, conclusions of the eye tracking concepts presented in this tutorial are discussed. (F) Practice demonstration (45 minutes). At the end of the tutorial, it will be reserved 45 minutes for demonstrating a low-cost remote eye tracker developed by the presenters."

T4 - Spectral geometry methods in shape analysis (handouts)

Michael Bronstein
University of Lugano (USI), Switzerland

Abstract: "The purpose of this tutorial is to overview the foundations of shape analysis and to formulate state-of-the-art theoretical and computational methods for shape description based on their intrinsic geometric properties. The emerging field of spectral and diffusion geometry provides a generic framework for many methods in the analysis of geometric shapes and objects. The tutorial will present in a new light the problems of shape analysis based on diffusion geometric constructions such as manifold embeddings using the Laplace-Beltrami and heat operator, 3D feature detectors and descriptors, diffusion and commute-time metrics, functional correspondence, and spectral symmetry."

T5 - Tutorial on Shape Matching for 3D Retrieval and Recognition (handouts)

Ivan Sipiran, Benjamin Bustos
Universidad de Chile

Abstract: "Three-dimensional shape matching is a young research field concerning the content-based comparison of 3D models. Several applications have been proposed so far, remarking its potential as support in computer graphics and high-level computer vision tasks. Similarly, a lot of research has been done to tackle the shape matching problem. As a result, there is a large amount of approaches and techniques in the literature. This tutorial gives an overview of three-dimensional shape matching and its applications in retrieval and recognition. In addition, we emphasize the importance of shape matching in computer graphics and computer vision as witnessed in recent researches. Our goal is to cover the relevant aspects of the field, presenting part of the state of the art and showing the existing and potential applications. Also, demos will be used to illustrate concepts and techniques. This will allow readers to easily understand the background of shape matching."

T6 - Tessellation Shaders programming using GLSL (handouts)

Thiago Elias Gomes, Matheus Lessa Rodrigues, Rodrigo de Toledo
Universidade Federal do Rio de Janeiro

Abstract: "The goal of this tutorial is to teach a specific stage of the programmable graphics pipeline based on GLSL. Our purpose is to create a learning environment through Coding Dojos. To focus exclusively in Tessellation Shaders programming, we will use ShaderLabs as our Integrated Development Environment, which was developed by the same authors."

T7 - Particle Based Simulations Using GPUs (handouts)

Yalmar Ponce Atencio, Claudio Esperança
Universidade Federal do Rio de Janeiro

Abstract: "Physically based simulation has been studied intensely over the last decades, especially the dynamics of solids, fluids and other deformable bodies. For the case of rigid solids, the classical methods are based on the application of forces or impulses in response to detected collisions. The simulation of deformable bodies require the use of mathematical models capable of approximating effects like stretching, shearing and bending. For efficient and fast simulations, position-based methods have become popular in the Computer Graphics community. These methods are fast, unconditionally stable and controllable, which makes them well-suited for the use in interactive environments like virtual reality, computer games and special effects in movies. This tutorial try to cover some position-based methods that were developed for the simulation of solids (rigid and deformable) and fluids. We will introduce the concept of position-based dynamics using particles, as well as several applied techniques proposed in recent years, such as simulation based on shape matching and the Smooth Particle Hydrodynamics (SPH) approach for liquids. Furthermore, we will delve in some detail into the implementation of these methods using GPUs, with the analysis and demonstration of actual code."

T8 - Remote Sensing Image Segmentation and Representation Through Multiscale Analysis (handouts)

Jefersson Alex dos Santos, Ricardo da Silva Torres
Universidade de Campinas

Abstract: "Every year, new sensor technologies are being implemented to improve the acquisition of high-resolution remote sensing images (RSIs). With the large amount of data provided by these sensors, novel computational approaches are constantly required to support decision-making process based on RSI analysis. A typical problem is the recognition of target regions for land cover mapping. In this context, the main problems are: (1) classification methods are dependent on the segmentation quality; and (2) the selection of representative samples for training is a costly process. The samples indicated by the user are not always enough to define the best segmentation scale. Furthermore, the indication of samples can be expensive, since it often requires to visit studied places in loco. The segmentation-dependence problem has been addressed in the literature by using multiscale analysis. The training sample selection problem is, in turn, addressed mainly by employing user interaction techniques which are usually combined with pixel-based classification approaches. This tutorial aims to introduce the problems, challenges, and some state-of-the-art approaches for multiscale classification of remote sensing image. The main covered topics are arranged into four sessions: introduction, segmentation, feature extraction, and classification. First, some background concepts are introduced as well as the main challenges of pattern recognition in remote sensing images. The second session of this tutorial will present the main concepts related to the creation of image representations based on multiscale segmentation. The third session will present proposed strategies for hierarchical feature extraction. Finally, the last session will present multiscale classification approaches. The integration between multiscale analysis and interactive classification is also a research venue addressed in this tutorial."