• About
    • Mission & History
    • Board of Directors
  • People
    • AI Committee
    • Affiliated Faculty
    • Students
    • Alumni
  • Research
    • Computer Vision
    • Computational Biology and Medicine
    • Human Computer Interaction
    • Machine Learning
    • Multimedia Signal Processing
    • Natural Language Processing
    • Robotics
    • Systems and AI
  • Projects
    • Blog
    • Sponsored Projects
    • Publications
  • Education
    • Courses and Educational Resources
    • Programs
  • Industry
    • Industrial Affiliates
    • How to engage
  • Resources
    • Data
    • Software
    • Hardware
  • Join Us
    • Graduate Admissions
    • Postdoc Positions
    • Faculty Positions
    • FAQ
Latest Human Computer Interaction Publications
  • Publications
    • Computer Vision
    • Computational Biology & Medicine
    • Human Computer Interaction
    • Machine Learning
    • Multimedia Signal Processing
    • Natural Language Processing
    • Robotics
    • Systems and AI
  • Step-change in friction under electrovibration (IEEE Transactions on Haptics)
    I Ozdamar, MR Alipour, BP Delhaye
    2020

    Abstract

    Rendering tactile effects on a touch screen via electrovibration has many potential applications. However, our knowledge on tactile perception of change in friction and the underlying contact mechanics are both very limited. We investigate the tactile perception and the contact mechanics for a step change in friction under electrovibration during a relative sliding between a finger and the surface of a capacitive touch screen. First, we conduct magnitude estimation experiments to investigate the role of normal force and sliding ...

    View details for https://ieeexplore.ieee.org/abstract/document/8960478/

  • Modeling Sliding Friction between Human Finger and Touchscreen Under Electroadhesion (IEEE Transactions on Haptics)
    C Basdogan, MA Sormoli, O Sirin
    2020

    Abstract

    When an alternating voltage is applied to the conductive layer of a capacitive touchscreen, an oscillating electroadhesive force (also known as electrovibration) is generated between the human finger and its surface in the normal direction. This electroadhesive force causes an increase in friction between the sliding finger and the touchscreen. Although the practical implementation of this technology is quite straightforward, the physics behind voltage-induced electroadhesion and the resulting contact interactions between human finger and ...

    View details for https://ieeexplore.ieee.org/abstract/document/9072660/

  • A Review of Surface Haptics: Enabling Tactile Effects on Touch Surfaces (IEEE Transactions on Haptics)
    C Basdogan, F Giraud, V Levesque
    2020

    Abstract

    We review the current technology underlying surface haptics that converts passive touch surfaces to active ones (machine haptics), our perception of tactile stimuli displayed through touch surfaces (human haptics), their potential applications (human-machine interaction), and finally the challenges ahead of us in making them available through commercial systems. This review primarily covers the tactile interactions of human fingers or hands with surface-haptics displays by focusing on the three most popular actuation methods ...

    View details for https://ieeexplore.ieee.org/abstract/document/9079589/

  • Kart-ON: Affordable Early Programming Education with Shared Smartphones and Easy-to-Find Materials (25th International Conference on Intelligent User Interfaces Companion)
    A Sabuncuoğlu, M Sezgin
    2020

    Abstract

    Programming education has become an integral part of the primary school curriculum. However, most programming practices rely heavily on computers and electronics which causes inequalities across contexts with different socioeconomic levels. This demo introduces a new and convenient way of using tangibles for coding in classrooms. Our programming environment, Kart-ON, is designed as an affordable means to increase collaboration among students and decrease dependency on screen-based interfaces. Kart ...

    View details for https://dl.acm.org/doi/abs/10.1145/3379336.3381472

  • Generation of 3D human models and animations using simple sketches
    A Akman, Y Sahillioglu, TM Sezgin
    2020

    Abstract

    Generating 3D models from 2D images or sketches is a widely studied important problem in computer graphics. We describe the first method to generate a 3D human model from a single sketched stick figure. In contrast to the existing human modeling techniques, our method requires neither a statistical body shape model nor a rigged 3D character model. We exploit Variational Autoencoders to develop a novel framework capable of transitioning from a simple 2D stick figure sketch, to a corresponding 3D human model. Our network learns the ...

    View details for https://openreview.net/forum?id=ozFu9KivuQ

  • Data-driven vibrotactile rendering of digital buttons on touchscreens (International Journal of Human-Computer Studies)
    B Sadia, SE Emgin, TM Sezgin, C Basdogan
    2020

    Abstract

    Interaction with physical buttons is an essential part of our daily routine. We use buttons daily to turn lights on, to call an elevator, to ring a doorbell, or even to turn on our mobile devices. Buttons have distinct response characteristics and are easily activated by touch. However, there is limited tactile feedback available for their digital counterparts displayed on touchscreens. Although mobile phones incorporate low-cost vibration motors to enhance touch-based interactions, it is not possible to generate complex tactile effects on ...

    View details for https://www.sciencedirect.com/science/article/pii/S107158191930120X

  • Tactile Perception of Virtual Textures Displayed by Friction Modulation via Ultrasonic Actuation (IEEE Transactions on Haptics)
    MK Saleem, C Yilmaz
    2019

    Abstract

    The study investigates the ability of humans to discriminate two surfaces based on friction, and how humans perceive rising friction (RF) and falling friction (FF), owing to the huge contrast in perception between them. Different tactile effects are created on the touch surface using an ultrasonic actuation technique that modulates vibration ...

    View details for https://ieeexplore.ieee.org/abstract/document/8883061/

  • Stroke-based sketched symbol reconstruction and segmentation (IEEE Computer Graphics and Applications)
    K Kaiyrbekov, M Sezgin
    2019

    Abstract

    Hand-drawn objects usually consist of multiple semantically meaningful parts. For example, a stick figure consists of a head, a torso, and pairs of legs and arms. Efficient and accurate identification of these subparts promises to significantly improve algorithms for stylization, deformation, morphing and animation of 2D drawings. In this paper, we propose a neural network model that segments symbols into stroke-level components. Our segmentation framework has two main elements: a fixed feature extractor and a Multilayer Perceptron ...

    View details for https://arxiv.org/abs/1901.03427

  • Speech Driven Backchannel Generation using Deep Q-Network for Enhancing Engagement in Human-Robot Interaction (Interspeech)
    N Hussain, E Erzin, TM Sezgin, Y Yemez
    2019

    Abstract

    We present a novel method for training a social robot to generate backchannels during human-robot interaction. We address the problem within an off-policy reinforcement learning framework, and show how a robot may learn to produce non-verbal backchannels like laughs, when trained to maximize the engagement and attention of the user. A major contribution of this work is the formulation of the problem as a Markov decision process (MDP) with states defined by the speech activity of the user and rewards generated by ...

    View details for https://arxiv.org/abs/1908.01618

  • Special issue on intelligent interaction design (AI EDAM)
    P Biswas, P Orero, TM Sezgin
    2019


    View details for https://www.cambridge.org/core/journals/ai-edam/article/special-issue-on-intelligent-interaction-design/8B308F7F6951079C6E136BB4A64DBC2A

  • Sketch-based interaction and modeling: where do we stand? (AI EDAM)
    A Bonnici, A Akman, G Calleja, KP Camilleri, P Fehling
    2019

    Abstract

    Sketching is a natural and intuitive communication tool used for expressing concepts or ideas which are difficult to communicate through text or speech alone. Sketching is therefore used for a variety of purposes, from the expression of ideas on two-dimensional (2D) physical media, to object creation, manipulation, or deformation in three-dimensional (3D) immersive environments. This variety in sketching activities brings about a range of technologies which, while having similar scope, namely that of recording and interpreting ...

    View details for https://www.cambridge.org/core/journals/ai-edam/article/sketchbased-interaction-and-modeling-where-do-we-stand/022179E6679E4C53F77B0DA6CBDBACE7

  • Interpretable Machine Learning for Generating Semantically Meaningful Formative Feedback (CVPR Workshops)
    N Alyüz, TM Sezgin
    2019

    Abstract

    We express our emotional state through a range of expressive modalities such as facial expressions, vocal cues, or body gestures. However, children on the Autism Spectrum experience difficulties in expressing and recognizing emotions with the accuracy of their neurotypical peers. Research shows that children on the Autism Spectrum can be trained to recognize and express emotions if they are given supportive and constructive feedback. In particular, providing formative feedback (e.g., feedback given by an expert describing how ...

    View details for http://openaccess.thecvf.com/content_CVPRW_2019/papers/Explainable%20AI/Alyuz_Interpretable_Machine_Learning_for_Generating_Semantically_Meaningful_Formative_Feedback_CVPRW_2019_paper.pdf

  • Fingerpad contact evolution under electrovibration (J. R. Soc. Interface)
    O Sirin, A Barrea, P Lefèvre
    2019

    Abstract

    Displaying tactile feedback through a touchscreen via electrovibration has many potential applications in mobile devices, consumer electronics, home appliances, and the automotive industry, though our knowledge and understanding of the underlying contact mechanics are very limited. An experimental study was conducted to investigate the contact evolution between the human finger and a touch screen under electrovibration using a robotic set-up and an imaging system. The results show that the effect of electrovibration is only present ...

    View details for https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2019.0166

  • Deep Stroke-Based Sketched Symbol Reconstruction and Segmentation (IEEE Computer Graphics and Applications)
    K Kaiyrbekov, M Sezgin
    2019

    Abstract

    Hand-drawn objects usually consist of multiple semantically meaningful parts. In this article, we propose a neural network model that segments sketched symbols into stroke-level components. Our segmentation framework has two main elements: a fixed feature extractor and a multilayer perceptron (MLP) network that identifies a component based on the feature. As the feature extractor we utilize an encoder of a stroke-rnn, which is our newly proposed generative variational auto-encoder (VAE) model that reconstructs symbols on a stroke-by ...

    View details for https://ieeexplore.ieee.org/abstract/document/8854308/

  • Batch Recurrent Q-Learning for Backchannel Generation Towards Engaging Agents (8th International Conference on Affective Computing and Intelligent Interaction (ACII))
    N Hussain, E Erzin, TM Sezgin
    2019

    Abstract

    The ability to generate appropriate verbal and nonverbal backchannels by an agent during human-robot interaction greatly enhances the interaction experience. Backchannels are particularly important in applications like tutoring and counseling, which require constant attention and engagement of the user. We present here a method for training a robot for backchannel generation during a human-robot interaction within the reinforcement learning (RL) framework, with the goal of maintaining high engagement level. Since online learning ...

    View details for https://ieeexplore.ieee.org/abstract/document/8925443/

  • Tactile roughness perception of virtual gratings by electrovibration (IEEE Transactions on Haptics)
    A Isleyen, Y Vardar, C Basdogan
    2019

    Abstract

    Realistic display of tactile textures on touch screens is a big step forward for haptic technology to reach a wide range of consumers utilizing electronic devices on a daily basis. Since the texture topography cannot be rendered explicitly by electrovibration on touch screens, it is important to understand how we perceive the virtual textures displayed by friction modulation via electrovibration. We investigated the roughness perception of real gratings made of plexiglass and virtual gratings displayed by electrovibration through a ...

    View details for https://ieeexplore.ieee.org/abstract/document/8933496/

  • VideoSketcher: Innovative Query Modes for Searching Videos through Sketches, Motion and Sound
    S Dupont, OC Altiok, A Bumin, C Dikmen, I Giangreco
    2018

    Abstract

    Dupont, Stéphane and Altiok, Ozan Can and Bumin, Aysegül and Dikmen, Ceren and Giangreco, Ivan and Heller, Silvan and Külah, Emre and Pironkov, Gueorgui and Rossetto, Luca and Sahillioglu, Yusuf and Schuldt, Heiko and Seddati, Omar and Setinkaya, Yusuf and Sezgin, Metin and Tanase, Claudiu and Toyan, Emre and Wood, Sean and Yeke, Doguhan. (2018) VideoSketcher: Innovative Query Modes for Searching Videos through Sketches, Motion and Sound. In: Proceedings of the eNTERFACE 2015 Workshop on Intelligent Interfaces ...

    View details for https://edoc.unibas.ch/68627/

  • The ASC-Inclusion perceptual serious gaming platform for autistic children (IEEE Transactions on Games)
    E Marchi, B Schuller, A Baird
    2018

    Abstract

    “Serious games” are becoming extremely relevant to individuals who have specific needs, such as children with an autism spectrum condition (ASC). Often, individuals with an ASC have difficulties in interpreting verbal and nonverbal communication cues during social interactions. The ASC-Inclusion EU-FP7 funded project aims to provide children who have an ASC with a platform to learn emotion expression and recognition, through play in the virtual world. In particular, the ASC-Inclusion platform focuses on the expression of emotion ...

    View details for https://ieeexplore.ieee.org/abstract/document/8430561/

  • Sketch misrecognition correction system based on eye gaze monitoring (US Patent 10,133,945)
    TM Sezgin, O Kalay
    2018

    Abstract

    The present disclosure relates to a gaze based error recognition detection system that is intended to predict intention of the user to correct user drawn sketch misrecognitions through a multimodal computer based intelligent user interface. The present disclosure more particularly relates to a gaze based error recognition system comprising at least one computer, an eye tracker to capture natural eye gaze behavior during sketch based interaction, an interaction surface and a sketch based interface providing interpretation of ...

    View details for https://patents.google.com/patent/US10133945B2/en

  • Psychophysical evaluation of change in friction on an ultrasonically-actuated touchscreen (IEEE Transactions on Haptics)
    MK Saleem, C Yilmaz
    2018

    Abstract

    To render tactile cues on a touchscreen by friction modulation, it is important to understand how humans perceive a change in friction. In this study, we investigate the relations between perceived change in friction on an ultrasonically actuated touchscreen and parameters involved in contact between finger and its surface. We first estimate the perceptual thresholds to detect rising and falling friction while a finger is sliding on the touch surface. Then, we conduct intensity scaling experiments and investigate the effect of finger sliding ...

    View details for https://ieeexplore.ieee.org/abstract/document/8352030/

  • Multimodal prediction of head nods in dyadic conversations (26th Signal Processing and Communications Applications Conference)
    BB Türker, MT Sezgin, Y Yemez
    2018

    Abstract

    Non-verbal expressions in human interactions carry important messages. These messages, which constitute a significant part of the information to be transferred, are not used effectively by machines in human-robot/agent interaction. In this study, the purpose is to predict the potential head nod moments for robot/agent and therefore to develop more human-like interfaces. To achieve this, acoustic feature extraction and social signal annotations are carried out on human-human dyadic conversations. A certain history window for each head ...

    View details for https://ieeexplore.ieee.org/abstract/document/8404737/

  • Multifaceted engagement in social interaction with a machine: The JOKER project (13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018))
    L Devillers, S Rosset, GD Duplessis
    2018

    Abstract

    This paper addresses the problem of evaluating engagement of the human participant by combining verbal and nonverbal behaviour along with contextual information. This study will be carried out through four different corpora. Four different systems designed to explore essential and complementary aspects of the JOKER system in terms of paralinguistic/linguistic inputs were used for the data collection. An annotation scheme dedicated to the labeling of verbal and non-verbal behavior has been designed. From our ...

    View details for https://ieeexplore.ieee.org/abstract/document/8373903/

  • HapTable: An interactive tabletop providing online haptic feedback for touch gestures (IEEE Transactions on Haptics)
    SE Emgin, A Aghakhani, TM Sezgin
    2018

    Abstract

    We present HapTable, a multimodal interactive tabletop that allows users to interact with digital images and objects through natural touch gestures, and receive visual and haptic feedback accordingly. In our system, hand pose is registered by an infrared camera and hand gestures are classified using a Support Vector Machine (SVM) classifier. To display a rich set of haptic effects for both static and dynamic gestures, we integrated electromechanical and electrostatic actuation techniques effectively on the tabletop surface of ...

    View details for https://ieeexplore.ieee.org/abstract/document/8409988/

  • Gaze-based predictive user interfaces: Visualizing user intentions in the presence of uncertainty (International Journal of Human-Computer Interaction)
    ÇÇ Karaman, TM Sezgin
    2018

    Abstract

    Human eyes exhibit different characteristic patterns during different virtual interaction tasks such as moving a window, scrolling a piece of text, or maximizing an image. Human-computer studies literature contains examples of intelligent systems that can predict user's task-related intentions and goals based on eye gaze behavior. However, these systems are generally evaluated in terms of prediction accuracy, and on previously collected offline interaction data. Little attention has been paid to creating real-time interactive systems using ...

    View details for https://www.sciencedirect.com/science/article/pii/S1071581917301611

  • Contact mechanics between the human finger and a touchscreen under electroadhesion (Proceedings of the National Academy of Sciences, PNAS)
    M Ayyildiz, M Scaraggi, O Sirin
    2018

    Abstract

    The understanding and control of human skin contact against technological substrates is the key aspect behind the design of several electromechanical devices. Among these, surface haptic displays that modulate the friction between the human finger and touch surface are emerging as user interfaces. One such modulation can be achieved by applying an alternating voltage to the conducting layer of a capacitive touchscreen to control electroadhesion between its surface and the finger pad. However, the nature of the contact ...

    View details for https://www.pnas.org/content/115/50/12668.short

  • Communicative Cues for Reach-to-Grasp Motions: From Humans to Robots (International Conference on Autonomous Agents and Multiagent Systems)
    D Kebüde, C Eteke, TM Sezgin, B Akgün
    2018

    Abstract

    Intent communication is an important challenge in the context of human-robot interaction. The aim of this work is to identify subtle non-verbal cues that make communication among humans fluent and use them to generate intent-expressive robot motion. A human-human reach-to-grasp experiment (n = 14) identified two temporal and two spatial cues: (1) relative time to reach maximum hand aperture (MA), (2) overall motion duration (OT), (3) exaggeration in motion (Exg), and (4) change in grasp modality (GM). Results showed there ...

    View details for http://ifaamas.org/Proceedings/aamas2018/pdfs/p874.pdf

  • Audio-Visual Prediction of Head-Nod and Turn-Taking Events in Dyadic Interactions (Interspeech)
    BB Türker, E Erzin, Y Yemez, TM Sezgin
    2018

    Abstract

    Head-nods and turn-taking both significantly contribute to conversational dynamics in dyadic interactions. Timely prediction and use of these events is quite valuable for dialog management systems in human-robot interaction. In this study, we present an audio-visual prediction framework for the head-nod and turn-taking events that can also be utilized in real-time systems. Prediction systems based on Support Vector Machines (SVM) and Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) are trained on human-human ...

    View details for https://iui.ku.edu.tr/wp-content/uploads/2018/06/is2018_cameraReady.pdf

  • Visualization literacy at elementary school (Conference on Pen and Touch Technology in Education)
    B Alper, NH Riche, F Chevalier, J Boy
    2017

    Abstract

    This work advances our understanding of children's visualization literacy, and aims to improve it through a novel approach for teaching visualization at elementary school. We first contribute an analysis of data graphics and activities employed in grade K to 4 educational materials, and the results of a survey conducted with 16 elementary school teachers. We find that visualization education could benefit from integrating pedagogical strategies for teaching abstract concepts with established interactive visualization techniques. Building on ...

    View details for https://dl.acm.org/doi/abs/10.1145/3025453.3025877

  • Sketch-based articulated 3D shape retrieval (IEEE Computer Graphics and Applications)
    Y Sahillioğlu, M Sezgin
    2017

    Abstract

    Sketch-based queries are a suitable and superior alternative to traditional text-and example-based queries for 3D shape retrieval. The authors developed an articulated 3D shape retrieval method that uses easy-to-obtain 2D sketches. It does not require 3D example models to initiate queries but achieves accuracy comparable to a state-of-the-art example-based 3D shape retrieval method. ...

    View details for https://ieeexplore.ieee.org/abstract/document/8103312/

  • Sketch recognition with few examples (Computers & Graphics)
    KT Yesilbek, TM Sezgin
    2017

    Abstract

    Sketch recognition is the task of converting hand-drawn digital ink into symbolic computer representations. Since the early days of sketch recognition, the bulk of the work in the field focused on building accurate recognition algorithms for specific domains, and well defined data sets. Recognition methods explored so far have been developed and evaluated using standard machine learning pipelines and have consequently been built over many simplifying assumptions. For example, existing frameworks assume the presence of a fixed ...

    View details for https://www.sciencedirect.com/science/article/pii/S0097849317301516

  • Material Design in Augmented Reality with In-Situ Visual Feedback (EGSR (EI&I))
    W Shi, Z Wang, TM Sezgin, J Dorsey, HE Rushmeier
    2017

    Abstract

    Material design is the process by which artists or designers set the appearance properties of a virtual surface to achieve a desired look. This process is often conducted in a virtual synthetic environment; however, advances in computer vision tracking and interactive rendering now make it possible to design materials in augmented reality (AR), rather than purely virtual synthetic, environments. However, how designing in an AR environment affects user behavior is unknown. To evaluate how work in a real environment influences the ...

    View details for https://iui.ku.edu.tr/sezgin_publications/2017/Sezgin-EGSR-2017.pdf

  • Characterizing user behavior for speech and sketch-based video retrieval interfaces (Expressive 2017, Posters, Artworks, and Bridging Papers, The Eurographics Association)
    OC Altıok, TM Sezgin
    2017

    Abstract

    From a user interaction perspective, speech and sketching make a good couple for describing motion. Speech allows easy specification of content, events and relationships, while sketching brings in spatial expressiveness. Yet, we have insufficient knowledge of how sketching and speech can be used for motion-based video retrieval, because there are no existing retrieval systems that support such interaction. In this paper, we describe a Wizard-of-Oz protocol and a set of tools that we have developed to engage users in a sketch-and ...

    View details for https://dl.acm.org/doi/abs/10.1145/3092919.3122801

  • CHER-ish: A Sketch- and Image-based System for 3D Representation and Documentation of Cultural Heritage Sites (GCH)
    V Rudakova, N Lin, N Trayan, TM Sezgin, J Dorsey
    2017

    Abstract

    We present a work-in-progress report on a sketch- and image-based software called “CHER-ish” designed to help make sense of the cultural heritage data associated with sites within 3D space. The software is based on previous work done in the domain of 3D sketching for conceptual architectural design, i.e., a system that allows the user to visualize urban structures by a set of strokes located in virtual planes in 3D space. In order to interpret and infer the structure of a given cultural heritage site, we use a mix of data such as site ...

    View details for https://diglib.eg.org/bitstream/handle/10.2312/gch20171314/195-199.pdf?sequence=1&isAllowed=y

  • Audio-facial laughter detection in naturalistic dyadic conversations IEEE Transactions on Affective Computing
    BB Turker, Y Yemez, TM Sezgin
    2017

    Abstract

    We address the problem of continuous laughter detection over audio-facial input streams obtained from naturalistic dyadic conversations. We first present a meticulous annotation of laughter, cross-talk and environmental noise in an audio-facial database with explicit 3D facial mocap data. Using this annotated database, we rigorously investigate the utility of facial information, head movement and audio features for laughter detection. We identify a set of discriminative features using mutual information-based criteria, and show how they ...

    View details for https://ieeexplore.ieee.org/abstract/document/8046102/

  • Analysis of Engagement and User Experience with a Laughter Responsive Social Robot. Interspeech
    BB Türker, Z Buçinca, E Erzin, Y Yemez, TM Sezgin
    2017

    Abstract

    We explore the effect of laughter perception and response on engagement in human-robot interaction. We designed two distinct experiments in which the robot has two modes: laughter responsive and laughter non-responsive. In responsive mode, the robot detects laughter using a multimodal real-time laughter detection module and invokes laughter as a backchannel to users accordingly. In non-responsive mode, the robot makes no use of laughter detection and thus provides no feedback. In the experimental design, we use a straightforward ...

    View details for https://188.166.204.102/archive/Interspeech_2017/pdfs/1395.PDF

  • iAutoMotion–an Autonomous Content-Based Video Retrieval Engine International Conference on MultiMedia Modeling
    L Rossetto, I Giangreco, C Tănase, H Schuldt
    2016

    Abstract

    This paper introduces iAutoMotion, an autonomous video retrieval system that requires only minimal user input. It is based on the video retrieval engine IMOTION. iAutoMotion uses a camera to capture the input for both visual and textual queries and performs query composition, retrieval, and result submission autonomously. For the visual tasks, it uses various visual features applied to the captured query images; for the textual tasks, it applies OCR and some basic natural language processing, combined with object recognition. As the ...

    View details for https://link.springer.com/chapter/10.1007/978-3-319-27674-8_37

  • What Auto Completion Tells Us About Sketch Recognition Expressive 2016, Posters, Artworks, and Bridging Papers, the Eurographics Association
    OC Altıok, KT Yesilbek, TM Sezgin
    2016

    Abstract

    Auto completion is generally considered a difficult problem in sketch recognition, as it requires a decision to be made from fewer strokes. It is therefore generally assumed that classifying fully completed object sketches should yield higher accuracy rates. In this paper, we report results from a comprehensive study demonstrating that the first few strokes of an object are more important than those drawn last. Once the first few critical strokes of a symbol are observed, recognition accuracies reach a plateau and may even ...

    View details for https://iui.ku.edu.tr/sezgin_publications/2016/SezginAltiok-SBIM-2016.pdf

  • Semantic sketch-based video retrieval with autocompletion Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI 2016)
    C Tanase, I Giangreco, L Rossetto, H Schuldt
    2016

    Abstract

    The IMOTION system is a content-based video search engine that provides fast and intuitive known item search in large video collections. User interaction consists mainly of sketching, which the system recognizes in real-time and makes suggestions based on both visual appearance of the sketch (what does the sketch look like in terms of colors, edge distribution, etc.) and semantic content (what object is the user sketching). The latter is enabled by a predictive sketch-based UI that identifies likely candidates for the sketched ...

    View details for https://dl.acm.org/doi/abs/10.1145/2876456.2879473

  • IMOTION–searching for video sequences using multi-shot sketch queries 22nd International Conference on MultiMedia Modeling
    L Rossetto, I Giangreco, S Heller, C Tănase
    2016

    Abstract

    This paper presents the second version of the IMOTION system, a sketch-based video retrieval engine supporting multiple query paradigms. Since its first version, IMOTION has supported the search for video sequences on the basis of still images, user-provided sketches, or the specification of motion via flow fields. For the second version, the functionality and the usability of the system have been improved. It now supports multiple input images (such as sketches or still frames) per query, as well as the specification of objects to be present within ...

    View details for https://link.springer.com/chapter/10.1007/978-3-319-27674-8_36

  • Building a Gold Standard for Perceptual Sketch Similarity
    S Cakmak, TM Sezgin
    2016

    Abstract

    Similarity is among the most basic concepts studied in psychology. Yet, there is no unique way of assessing similarity of two objects. In the sketch recognition domain, many tasks such as classification, detection or clustering require measuring the level of similarity between sketches. In this paper, we propose a carefully designed experiment setup to construct a gold standard for measuring the similarity of sketches. Our setup is based on table scaling, and allows efficient construction of a measure of similarity for large datasets containing ...

    View details for https://iui.ku.edu.tr/sezgin_publications/2016/SezginCakmak-SBIM-2016.pdf
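The mutual-information-based feature selection mentioned in the laughter detection abstract above can be sketched in a few lines. This is an illustrative toy example, not the authors' implementation: the feature values, labels, and function names below are assumptions, and real audio-facial features would first be discretized before scoring.

```python
# Toy sketch of mutual-information-based feature ranking (assumed setup, not
# the paper's actual pipeline). Features must be discrete/discretized.
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two equal-length discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Synthetic labels (0 = no laughter, 1 = laughter) and two discretized features:
# feature 0 mostly tracks the label, feature 1 is pure noise.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = [
    [0, 0, 0, 1, 1, 1, 1, 1],  # informative
    [0, 1, 0, 1, 0, 1, 0, 1],  # uninformative
]

scores = [mutual_information(f, labels) for f in features]
ranked = sorted(range(len(features)), key=lambda i: scores[i], reverse=True)
print(ranked)  # -> [0, 1]: the informative feature is ranked first
```

Ranking features by their mutual information with the class label, then keeping the top-scoring ones, is the standard way such criteria yield a discriminative feature subset.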

Contact

Rumelifeneri Yolu 34450
Sarıyer, İstanbul / Türkiye

ai-admissions@ku.edu.tr
Phone (central): 0212 338 1000 Fax: +90 212 338 1205
Access to Campuses and
Transportation Services
© 2021 Koç University