Human-Computer Interaction

We can train a social robot via reinforcement learning to enhance user engagement by generating appropriate smiles, laughs, and nods during conversation.

Sketch-Based Articulated 3D Shape Retrieval

The authors developed an articulated 3D shape retrieval method that uses easy-to-obtain 2D sketches. It does not require 3D example models to initiate queries, yet achieves accuracy comparable to a state-of-the-art example-based 3D shape retrieval method.

Can Systems Having Human-like Perception Be Developed?

T. Metin Sezgin introduces the IUI Lab, which aims to enable natural communication between humans and computers. Its projects generally focus on interaction with robots, optimization of search engines, education, and workplace enhancements.

Audio-Facial Laughter Detection in Naturalistic Dyadic Conversations

Our experiments show that our multimodal approach, supported by bagging, compares favorably to the state of the art in the presence of detrimental factors such as cross-talk, environmental noise, and data imbalance.
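As a rough illustration of the idea (not the paper's actual pipeline), the sketch below fuses two modalities by concatenating their feature vectors and trains a bagging ensemble, whose bootstrap resampling and voting make it more robust to noisy, imbalanced data. The feature dimensions and synthetic data are stand-ins for real audio (e.g., spectral) and facial (e.g., landmark-based) descriptors.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(0)

# Hypothetical per-window features: 20-dim audio, 10-dim facial.
n = 400
audio = rng.normal(size=(n, 20))
facial = rng.normal(size=(n, 10))
labels = rng.integers(0, 2, size=n)  # 1 = laughter, 0 = non-laughter

# Early fusion: concatenate the two modalities into one feature vector.
X = np.hstack([audio, facial])

# Bagging trains many base classifiers (decision trees by default)
# on bootstrap resamples and aggregates their votes.
clf = BaggingClassifier(n_estimators=25, random_state=0)
clf.fit(X, labels)
pred = clf.predict(X)
```

In practice the two modalities can also be fused late, by training one ensemble per modality and combining their scores.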

Learn2Dance

We learn music-to-dance mappings to generate plausible music-driven dance choreographies.

iAutoMotion – an Autonomous Content-based Video Retrieval Engine

This paper introduces iAutoMotion, an autonomous video retrieval system that requires only minimal user input. It is based on the video retrieval engine IMOTION.

Gaze-Based Biometric Authentication: Hand-Eye Coordination Patterns as a Biometric Trait

We propose a biometric authentication system for pointer-based systems including, but not limited to, increasingly prominent pen-based mobile devices.

IMOTION – Searching for Video Sequences Using Multi-Shot Sketch Queries

This paper presents the second version of the IMOTION system, a sketch-based video retrieval engine supporting multiple query paradigms. For the second version, the functionality and usability of the system have been improved.

Semantic Sketch-Based Video Retrieval with Autocompletion

The system indexes collection data with over 30 visual features describing color, edge, motion, and semantic information. The resulting feature data is stored in ADAM, an efficient database system optimized for fast retrieval.
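To illustrate feature-based indexing in general terms (this is a generic sketch, not ADAM's API or IMOTION's feature set), the example below computes one simple visual feature, a color histogram, for each keyframe in a hypothetical collection and retrieves the nearest neighbors of a query frame by distance in feature space.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """One simple visual feature: a normalized per-channel color histogram."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

rng = np.random.default_rng(1)
# Hypothetical collection of 100 RGB keyframes, 32x32 pixels each.
collection = rng.integers(0, 256, size=(100, 32, 32, 3))
index = np.stack([color_histogram(f) for f in collection])

def retrieve(query_frame, k=5):
    """Rank keyframes by Euclidean distance to the query in feature space."""
    q = color_histogram(query_frame)
    dists = np.linalg.norm(index - q, axis=1)
    return np.argsort(dists)[:k]

hits = retrieve(collection[42])  # querying with a frame from the collection
```

A real engine would combine many such features (edge, motion, semantic) and use an indexing structure rather than a linear scan.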

In this paper (1) an existing activity prediction system for pen-based devices is modified for real-time activity prediction and (2) an alternative time-based activity prediction system is introduced. Both systems use eye gaze movements that naturally accompany pen-based user interaction for activity classification.
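A common first step in this kind of gaze-based activity classification is to summarize the raw gaze stream into per-window statistics that a classifier can consume. The sketch below is a generic illustration under assumed window sizes and features, not the paper's actual system.

```python
import numpy as np

def window_features(gaze_xy, win=30, step=15):
    """Slide a window over (x, y) gaze samples and compute simple
    statistics (mean/std of speed, positional spread) per window."""
    feats = []
    for start in range(0, len(gaze_xy) - win + 1, step):
        w = gaze_xy[start:start + win]
        v = np.diff(w, axis=0)                 # sample-to-sample displacement
        speed = np.linalg.norm(v, axis=1)
        feats.append([speed.mean(), speed.std(),
                      w[:, 0].std(), w[:, 1].std()])
    return np.array(feats)

rng = np.random.default_rng(2)
gaze = rng.normal(size=(300, 2))  # hypothetical gaze stream of 300 samples
F = window_features(gaze)         # one feature row per window
```

For real-time prediction, each new window's feature row would be fed to a trained classifier as it becomes available.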

This paper presents a data collection of social interaction dialogs involving humor between a human participant and a robot. The interaction scenarios were designed to study social markers such as laughter.

Video Super-Resolution Project

Video super-resolution aims to produce a good estimate of a high-resolution image from multiple similar low-resolution images.
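A minimal sketch of the multi-frame idea, under simplifying assumptions (frames already aligned, nearest-neighbor upsampling): each low-resolution observation is upsampled and the results are averaged, which suppresses independent noise. Real methods additionally use sub-pixel registration and learned priors.

```python
import numpy as np

def upsample(img, s=2):
    """Nearest-neighbor upsampling by an integer factor s."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def multi_frame_sr(frames, s=2):
    """Fuse several aligned low-resolution frames by upsampling
    each one and averaging, reducing independent per-frame noise."""
    return np.mean([upsample(f, s) for f in frames], axis=0)

rng = np.random.default_rng(3)
clean = rng.random((16, 16))
# Hypothetical observations: the same scene with independent noise.
frames = [clean + rng.normal(scale=0.1, size=clean.shape) for _ in range(8)]
hr = multi_frame_sr(frames)  # 32x32 estimate from 16x16 inputs
```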