SparCity: An Optimization and Co-Design Framework for Sparse Computation

Funded by: European Commission (H2020-JTI-EuroHPC-2019)
Dates: 2021-2023
Principal Investigator: D. Unat

Perfectly aligned with the vision of the EuroHPC Joint Undertaking, the SparCity project aims to create a supercomputing framework that will provide efficient algorithms and coherent tools specifically designed for maximising the performance and energy efficiency of sparse computations on emerging HPC systems, while also opening up new usage areas for sparse computations in data analytics and deep learning. The framework enables comprehensive application characterization and modeling, and performs synergistic node-level and system-level software optimizations. By creating a digital SuperTwin, the framework is also capable of evaluating existing hardware components and addressing what-if scenarios on emerging architectures and systems from a co-design perspective. To demonstrate the effectiveness, societal impact, and usability of the framework, the SparCity project will enhance the computing scale and energy efficiency of four challenging real-life applications from drastically different domains: computational cardiology, social networks, bioinformatics, and autonomous driving. By targeting this collection of challenging applications, SparCity will develop world-class, extreme-scale and energy-efficient HPC technologies, contributing to a sustainable exascale ecosystem and to Europe’s competitiveness.
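
Although the abstract stays high-level, the irregular memory access that makes sparse computation hard to optimize is visible in its core kernel, sparse matrix-vector multiplication (SpMV) over the CSR format. The snippet below is a minimal illustrative sketch, not SparCity code:

    import numpy as np

    def spmv_csr(values, col_idx, row_ptr, x):
        """Multiply a CSR-format sparse matrix by a dense vector x."""
        n_rows = len(row_ptr) - 1
        y = np.zeros(n_rows)
        for i in range(n_rows):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                # x[col_idx[k]] is the irregular, data-dependent access
                y[i] += values[k] * x[col_idx[k]]
        return y

    # 3x3 matrix [[4, 0, 1], [0, 2, 0], [3, 0, 5]] stored in CSR form
    values  = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
    col_idx = np.array([0, 2, 1, 0, 2])
    row_ptr = np.array([0, 2, 3, 5])
    print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))
    # -> [5. 2. 8.]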

Tangible Intelligent Interfaces for Teaching Computational Thinking Skills

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2020-2022
Researchers: A. Sabuncuoglu and T. M. Sezgin (PI)

The aim of the project is to develop an intelligent application that supports programming education and to create protocols that enable effective use of these applications in schools via in-class pilot studies. To achieve this goal, our project has two main objectives:

  1. To develop an accessible, innovative, low-cost and interactive programming application for use in programming education, supporting physical interactions that encourage active and natural engagement.
  2. To evaluate the contribution of our application to programming education in a classroom environment through pilot studies conducted in schools.

Exact dynamics of online and distributed learning algorithms for large-scale nonconvex optimization problems

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2020-2022
Principal Investigator: Z. Doğan

We are experiencing a data-driven revolution, with data being collected at an unprecedented rate. In particular, there is increasing excitement about autonomous systems with learning capabilities. Several data-driven applications have already shown significant practical benefit, revealing the power of access to more data: health care systems, self-driving cars, instant machine translation, and recommendation systems, among others. However, broad acceptance of such systems depends heavily on their stability, tractability and reproducibility, and current applications fall short of providing these features. The scale and complexity of modern datasets often render classical data processing techniques infeasible; therefore, new algorithms are required to address the technical challenges associated with the nature of the data.

This project focuses on developing efficient and tractable solutions for large-scale learning problems encountered in machine learning and signal processing. Beyond its theoretical aspects, the project has specific goals targeting applications in principal subspace estimation, low-rank matrix factorization, tensor decomposition and deep learning for large-scale systems. Specifically, this novel approach brings together several attractive features:

  • The emerging concept of online learning will be adapted to a distributed setting across a decentralized network topology (a minimal sketch follows this list).
  • The exact dynamics of the algorithms will be extracted by a stochastic process analysis method, something current state-of-the-art methods are not able to deliver.
  • By studying the extracted dynamics, the learning capabilities and performance of large-scale systems will be improved to meet the needs and challenges of modern data-driven applications.
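
One concrete instance of the setting above is online principal subspace estimation distributed over a network. The sketch below combines a classical Oja-style local update with neighbor averaging on a ring topology; the step size, topology, and consensus weights are illustrative assumptions, not the project’s algorithm:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_nodes, eta = 10, 4, 0.05
    # Ground-truth leading direction that generates the data streams
    u_true = rng.standard_normal(d); u_true /= np.linalg.norm(u_true)

    # Each node keeps a local estimate of the leading eigenvector
    w = [rng.standard_normal(d) for _ in range(n_nodes)]
    w = [v / np.linalg.norm(v) for v in w]

    for t in range(2000):
        # Local Oja update from one streaming sample per node
        for i in range(n_nodes):
            x = u_true * (3.0 * rng.standard_normal()) + rng.standard_normal(d)
            y = x @ w[i]
            w[i] += eta * y * (x - y * w[i])
            w[i] /= np.linalg.norm(w[i])
        # Consensus step: average with the two neighbors on a ring
        w = [(w[i] + w[(i - 1) % n_nodes] + w[(i + 1) % n_nodes]) / 3.0
             for i in range(n_nodes)]
        w = [v / np.linalg.norm(v) for v in w]

    print([abs(v @ u_true) for v in w])  # each value approaches 1.0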

Analysis of training dynamics on artificial neural networks using methods of non-equilibrium thermodynamics

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2020-2022
Researchers: A. Kabakçıoğlu (PI) and D. Yuret

The interface between physics and machine learning is a fast-growing field of research. While most studies at this frontier use deep learning methods to extract physical knowledge from experimental data or theoretical models, relatively little is known about the nontrivial dynamics of training artificial neural networks. The analytical framework of nonequilibrium physics developed over the last two decades provides versatile tools that find a novel application in this context. The dynamics of machine learning displays some interesting features not found in physical systems, such as a nontrivial noise structure and a resulting non-thermal steady state, as we observed in our preliminary investigations. The proposed study aims to apply the know-how in the nonequilibrium physics literature to this modern problem and to explore the implications for machine learning of various universal laws originally devised for microscopic systems and expressed in the language of statistical physics. We plan to employ well-known machine learning problems, such as MNIST and CIFAR, as well as some toy models, as the test ground for analytical predictions. The research team is composed of Dr. Deniz Yuret (Koç Univ), an expert in machine learning and the developer of the deep learning package Knet for the increasingly popular Julia platform; Dr. Michael Hinczewski (CWRU, USA), who has made important contributions to the literature on nonequilibrium aspects of biological systems; and Dr. Alkan Kabakçıoğlu (Koç Univ, PI), a computational statistical physicist whose recent studies focus on fluctuations and scaling properties of nonequilibrium processes in biomolecules. The proposed research will be conducted in the Department of Physics at Koç University and is expected to last two years.
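
The non-thermal steady state mentioned above can be reproduced in a toy setting: for SGD on a quadratic loss, a thermal (Gibbs) stationary distribution arises only if the gradient noise is isotropic. The anisotropic noise covariance below is an illustrative assumption, not a result from the project:

    import numpy as np

    rng = np.random.default_rng(1)
    # Quadratic loss L(w) = 0.5 * w^T H w with anisotropic curvature
    H = np.diag([1.0, 4.0])
    # Anisotropic gradient-noise covariance (illustrative assumption)
    C = np.diag([2.0, 0.5])
    eta, steps = 0.05, 100_000

    w, samples = np.zeros(2), []
    for t in range(steps):
        noise = rng.multivariate_normal(np.zeros(2), C)
        w -= eta * (H @ w + noise)          # noisy SGD step
        if t > steps // 2:
            samples.append(w.copy())

    cov = np.cov(np.array(samples).T)
    # A thermal steady state would satisfy equipartition: cov @ H ∝ identity.
    # With anisotropic noise the two diagonal entries differ markedly.
    print("effective temperatures per mode:", np.diag(cov @ H))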

Video Understanding for Autonomous Driving

Funded by: European Commission
Dates: 2020-2022
Researchers: F. Güney (PI) and D. Yuret

Dr. Fatma Güney will carry out a fellowship to create the technology needed to understand the content of videos in a detailed, human-like manner, superseding the current limitations of static image understanding methods and enabling more robust perception for autonomous driving agents. The fellowship will be carried out at Koç University under the supervision of Prof. Deniz Yuret. Understanding the surrounding scene with detailed, human-level reliability is essential for addressing complex situations in autonomous driving. State-of-the-art machine vision systems are very good at analysing static images, detecting and segmenting objects in a single image, but relating those objects across time in a video remains a challenge. Our goal in this proposal is to extend the success of static image understanding to the temporal domain by equipping machines with the ability to interpret videos using both appearance and motion cues with human-like ability. Several challenges make this difficult for machines, such as cluttered backgrounds, the variety and complexity of motion, and partial occlusions due to people interacting with each other.
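
As a point of reference for why relating objects across time is hard, the simplest baseline links per-frame detection boxes greedily by intersection-over-union (IoU); it fails exactly where the project aims to be robust (occlusion, fast motion, clutter). A toy sketch, not the fellowship’s method:

    def iou(a, b):
        """IoU of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def link_frames(prev_tracks, detections, thresh=0.3):
        """Greedily extend each track with the best-overlapping detection."""
        links, used = {}, set()
        for tid, box in prev_tracks.items():
            best = max((d for d in range(len(detections)) if d not in used),
                       key=lambda d: iou(box, detections[d]), default=None)
            if best is not None and iou(box, detections[best]) >= thresh:
                links[tid] = detections[best]
                used.add(best)
        return links

    tracks = {0: (10, 10, 50, 50), 1: (100, 10, 140, 50)}
    next_frame = [(12, 11, 52, 51), (160, 10, 200, 50)]  # object 1 moved far
    print(link_frames(tracks, next_frame))  # only track 0 is re-linked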

Tracing the Ruin: Modelling the Collapse Process of Ancient Structures at Sagalassos (Ağlasun, Burdur)

Funded by: Koç University Seed Fund
Dates: 2020-2022
Researchers: Inge Uytterhoeven (PI) and Fatma Güney

As a proof of concept, this project, proposed by Assoc. Prof. Dr. Inge Uytterhoeven (Department of Archaeology and History of Art) in collaboration with Assist. Prof. Dr. Fatma Güney (Department of Computer Science and Engineering), intends to model the collapse of ancient structures caused by earthquakes, combining research approaches from Archaeology, Architecture, Computer Engineering, Archaeoseismology, Conservation, and Cultural Heritage, and taking the archaeological site of Sagalassos (Ağlasun, Burdur) as a test case. The project aims to develop a large number of realistic simulations of the distortion, displacement and toppling of building elements for a set of ancient structures at Sagalassos with different architectural characteristics. In this way, it intends to offer an innovative methodology for learning the physical dynamics that cause collapse in seismic calamities. Moreover, we hope to discriminate between various seismic events that may have followed each other through time, as well as to distinguish earthquake damage from other processes of structural decay that affected ancient structures, including the salvaging of building materials for recycling and gradual natural decay. Furthermore, the simulations aim to contribute to the fields of conservation and anastylosis by giving insights into the position, orientation, and extent of collapsed building elements in relation to the structures they belonged to, and into the impact of future earthquakes on rebuilt structures. Finally, the project aims to contribute to visualising, for the broad public, the effects of seismic activity on ancient urban societies.

Effortless Parallelization of Deep Learning Frameworks

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2019-2021
Principal Investigator: D. Unat

The aim of the project is effortless parallelization of deep neural network training. Hybrid-parallel approaches that blend data and model parallelism, especially the model-parallel approach, will be applied to the model automatically, and various program optimizations will be performed. The project will develop a series of optimization techniques that let the user exploit the devices in the underlying hardware system efficiently, without any code changes, according to the topology and the structure of the deep neural network being trained. The proposed improvements will be implemented on popular deep learning frameworks such as TensorFlow and MXNet, which represent deep neural network models as data-flow graphs.
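
As a rough illustration of the model-parallel idea (not the project’s implementation), a layer’s weight matrix can be partitioned column-wise across devices, each device computing its slice of the output independently; the “devices” are simulated with plain NumPy arrays here:

    import numpy as np

    rng = np.random.default_rng(0)
    batch, d_in, d_out, n_dev = 8, 32, 64, 2

    x = rng.standard_normal((batch, d_in))
    W = rng.standard_normal((d_in, d_out))

    # Model parallelism: split the weight matrix column-wise across devices
    shards = np.split(W, n_dev, axis=1)          # one shard per "device"
    partials = [x @ shard for shard in shards]   # computed independently
    y_model_parallel = np.concatenate(partials, axis=1)

    # Data parallelism, for contrast: split the batch across devices,
    # each device holding a full copy of W
    batches = np.split(x, n_dev, axis=0)
    y_data_parallel = np.concatenate([b @ W for b in batches], axis=0)

    assert np.allclose(y_model_parallel, x @ W)
    assert np.allclose(y_data_parallel, x @ W)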

Image and Video Processing by Deep Learning

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2018-2021
Researchers: G. Bakar, O. Kırmemiş, O. Keleş, M. A. Yılmaz, and A. M. Tekalp (PI)

The advent of deep learning is changing how we do 2D/3D image/video processing, including image/video restoration, interpolation, super-resolution, motion analysis/tracking, compression, and light-field and hologram processing. Various deep neural network (DNN) architectures, such as convolutional neural networks (CNN), auto-encoders, recurrent neural networks (RNN), and generative adversarial networks (GAN), have already been applied to different image/video processing problems. The question then arises whether data-driven deep networks and the associated learning algorithms have become the preferred, dominant solution to all image/video processing problems, in contrast to traditional human-engineered, hand-crafted algorithms built on domain-specific signals-and-systems models. The answer is almost surely affirmative, and deep image/video processing methods are poised to replace a large part of the traditional image/video processing pipeline.

Yet deep signal processing is a very young field: the science of DNNs, and how they produce such impressive image/video processing results, is not sufficiently well understood, and more research is needed for a clear theoretical understanding of which DNN architectures work best for which image/video processing problems and how we can obtain better and more stable results. The current successes of deep learning in image/video processing are experimentally driven, more or less by trial and error. There are several open challenges, e.g., ImageNet large-scale visual recognition, visual object tracking (VOT), large-scale activity recognition (ActivityNet), and single-image super-resolution (NTIRE), and a different network architecture wins each challenge each year. Few formal works exist that explain the mathematics behind this.

This project will explore the potential for breakthroughs in image and video processing using new deep learning algorithms guided by machine-learned signal models. We believe that the relatively less studied areas of residual learning, adversarial learning, and reinforcement learning offer high potential for image and video processing. The project will investigate fundamental questions within a formal framework and pursue further breakthroughs, including problems that have not yet been addressed with DNNs, such as motion-compensated video processing, video compression, and light-field and hologram processing/compression, using deep learning guided by big-data-driven learned signal models. The proposed research is groundbreaking because it brings in new ideas that can revolutionize the way we do image/video processing, rendering some traditional algorithms obsolete.
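
As an example of one of these high-potential directions, residual learning for image restoration means the network predicts only the residual (e.g., the noise or the missing detail) and adds it back to its input. A minimal PyTorch sketch with illustrative layer sizes, not an architecture from this project:

    import torch
    import torch.nn as nn

    class ResidualDenoiser(nn.Module):
        """Predicts a residual and adds it back to the input image."""
        def __init__(self, channels=3, features=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, features, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(features, features, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(features, channels, 3, padding=1),
            )

        def forward(self, x):
            # Skip connection: the network only has to learn the residual
            return x + self.body(x)

    net = ResidualDenoiser()
    noisy = torch.randn(1, 3, 32, 32)     # stand-in for a noisy image
    restored = net(noisy)
    print(restored.shape)                 # torch.Size([1, 3, 32, 32])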

Adaptive Fractional Order Controller Design Using Machine Learning for Physical Human-Robot Interaction

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2018-2021
Principal Investigator: Ç. Başdoğan

In the near future, humans and robots are expected to perform collaborative tasks involving physical interaction in environments as varied as homes, hospitals, and factories. One important research topic in physical Human-Robot Interaction (pHRI) is developing natural haptic communication between the partners. Although there is already a large body of work on human-robot interaction, studies investigating physical interaction between the partners, and haptic communication in particular, are limited, and the interaction in such systems is still artificial compared to natural human-human collaboration. Although collaborative tasks involving physical interaction, such as assembly/disassembly of parts and transportation of an object, can be planned and executed naturally and intuitively by two humans, there are unfortunately no robots on the market that can collaborate with us on the same tasks. In this project, we propose fractional order adaptive control for pHRI systems. The main goal of the project is to adapt the admittance parameters of the robot in real time during the task, based on changes in human and environment impedances, while balancing the trade-off between the stability and the transparency of the coupled system. To the best of our knowledge, there is no earlier study in the literature utilizing a fractional order admittance controller for pHRI. Compared to an integer order controller, a fractional order controller enables the use of fractional order derivatives and integrators, which brings flexibility in modeling and controlling the dynamics of physical interaction between the human operator and the robot. Moreover, no study in the literature has investigated the real-time adaptation of the control parameters of a fractional order admittance controller via machine learning. Machine learning algorithms will enable us to learn from data iteratively, estimate human intention during the task, and then select control parameters accordingly to optimize task performance.
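
For readers unfamiliar with fractional calculus, the Grünwald-Letnikov definition generalizes the finite-difference derivative to non-integer order α: D^α f(t) ≈ h^(-α) Σ_k (-1)^k (α choose k) f(t - kh). A minimal numerical sketch (the test signal and order are arbitrary examples, unrelated to the controller design):

    import numpy as np
    from scipy.special import binom

    def gl_fractional_derivative(f, alpha, h):
        """Grunwald-Letnikov fractional derivative of a sampled signal f."""
        n = len(f)
        # Coefficients (-1)^k * binom(alpha, k), computed once
        coeffs = (-1.0) ** np.arange(n) * binom(alpha, np.arange(n))
        out = np.zeros(n)
        for t in range(n):
            out[t] = coeffs[: t + 1] @ f[t::-1] / h ** alpha
        return out

    h = 0.01
    t = np.arange(0, 2, h)
    f = t ** 2
    # Half-derivative (alpha = 0.5) of t^2; the exact value is
    # Gamma(3) / Gamma(2.5) * t^1.5 ≈ 1.505 * t^1.5
    print(gl_fractional_derivative(f, 0.5, h)[-1], 1.505 * t[-1] ** 1.5)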

Investigation of Friction Between Human Finger and Surface of a Capacitive Touch Screen Actuated by Electrostatic Forces for Haptic Feedback

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2018-2021
Principal Investigator: Ç. Başdoğan

Capacitive touch screens are an indispensable part of smartphones, tablets, kiosks, and laptop computers nowadays. They detect our finger position and enable us to interact with the text, images, and data displayed by these devices. To further improve these interactions, there is growing interest in the research community in displaying active tactile feedback to users through capacitive screens. One approach to this end is to control the friction force between the user’s finger pad and the screen via electrostatic actuation. If an alternating voltage is applied to the conductive layer of a touch screen, an attractive force is generated between the finger and the surface. This force modulates the friction between the surface and the skin of the finger moving on it. Hence, one can generate different haptic effects on a touch screen by controlling the amplitude, frequency and waveform of this input voltage. Such haptic effects could be used to develop new intelligent user interfaces for applications in education, data visualization, and digital games. However, this area of research is new, and we do not yet fully understand the electromechanical interactions between the human finger and a touch screen actuated by electrostatic forces, nor the effect of these interactions on our haptic perception. Hence, the aim of this project is to investigate these electromechanical interactions in depth. In particular, we will investigate the effect of the following factors on the frictional forces between the finger and the screen: a) the amplitude of the voltage applied to the conductive layer of the touch screen, b) the normal force applied by the finger on the touch screen, and c) the speed of the finger. The results of this study will not only help us better understand the physics of the interaction between a human finger and an electrostatically actuated touch screen from a scientific point of view, but will also provide guidelines on how to program a touch screen to generate desired haptic effects for various applications.
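
The underlying physics can be approximated with a parallel-plate model: an applied voltage of amplitude V induces an electrostatic attraction F_e ≈ ε₀ ε_r A V² / (2 d²) across the thin insulating gap, which adds to the finger’s normal force in the Coulomb friction law f = μ (F_n + F_e). All parameter values below are rough illustrative assumptions, not measurements from this project:

    EPS0 = 8.854e-12           # vacuum permittivity, F/m

    def friction_force(v_amp, f_normal, mu=0.8, area=1e-4,
                       gap=50e-6, eps_r=3.0):
        """Coulomb friction with electrostatic augmentation (parallel-plate
        approximation; all parameter values are illustrative)."""
        f_e = EPS0 * eps_r * area * v_amp ** 2 / (2 * gap ** 2)
        return mu * (f_normal + f_e)

    for v in [0.0, 50.0, 100.0, 200.0]:
        print(f"V = {v:5.0f} V  ->  friction = "
              f"{friction_force(v, f_normal=0.5):.4f} N")
    # The electrostatic term grows with the square of the voltage amplitude,
    # which is why modulating V produces tangible friction variation.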

Haptic Feedback for Hand Gestures Detected by an Interactive Touch Table

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2018-2021
Principal Investigator: Ç. Başdoğan

In today’s world, we use touch interfaces frequently: on our tablets, smartphones, and even on our computers. Unfortunately, currently available touch screen interfaces can only recognize a small subset of the hand motions that we perform daily and cannot convey the haptic feedback resulting from these motions. For example, current systems recognize hand motions such as moving fingers towards/away from each other (zooming in/out), sliding one finger (turning a page), and holding one finger at a specific location (clicking or holding a button). In such systems, designers assign actions (button click, hold) to hand motions (holding one finger at a specific location) that are easy to recognize, and this prevents us from having a natural interaction with computers. However, the aim of human-computer interaction is to come as close as possible to human-human and human-object interaction and to produce interfaces that humans can interact with naturally. Therefore, it is crucial to have interfaces that can recognize hand motions and provide the resulting feedback to sensory channels such as the auditory, visual, and haptic channels.

In order to improve human-computer interaction, our project aims to create an interactive table that can detect people’s natural hand motions while they are in contact with the table surface, recognize these motions, and provide appropriate audio, visual, and haptic feedback. The project thus contributes to two main research areas: recognizing natural hand motions, and haptic feedback. The table proposed in this project will have three main functions: a) detection of natural hand motions, b) recognition of natural hand motions, and c) provision of suitable haptic feedback to the user resulting from his/her hand motions.
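
As an example of the recognition component, the zoom gesture mentioned above can be detected from two synchronized touch trajectories: if the distance between the fingers shrinks over a window the user is pinching, and if it grows they are spreading. A toy sketch with an illustrative threshold:

    import math

    def classify_two_finger_gesture(traj_a, traj_b, thresh=20.0):
        """Classify two synchronized touch trajectories [(x, y), ...] as a
        pinch (zoom out), spread (zoom in), or neither. The pixel threshold
        is an illustrative assumption."""
        dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
        change = dist(traj_a[-1], traj_b[-1]) - dist(traj_a[0], traj_b[0])
        if change < -thresh:
            return "pinch"
        if change > thresh:
            return "spread"
        return "none"

    finger1 = [(100, 200), (110, 200), (125, 200)]
    finger2 = [(300, 200), (285, 200), (265, 200)]
    print(classify_two_finger_gesture(finger1, finger2))  # -> pinch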

The New Politics of Welfare: Towards an “Emerging Markets” Welfare Regime

Funded by: European Research Council
Dates: 2017-2021
Researchers: E. Yörük (PI) and D. Yuret

Can we say that emerging market economies are developing a new welfare regime? If so, what has caused this?

This project has two hypotheses:
Hypothesis 1: China, Brazil, India, Indonesia, Mexico, South Africa and Turkey are forming a new welfare regime that differs from liberal, corporatist and social democratic welfare regimes of the global north on the basis of expansive, and decommodifying social assistance programmes for the poor.
Hypothesis 2: This new welfare regime is emerging principally as a response to the growing political power of the poor as a dual source of threat and support for governments.
The project challenges and expands the state-of-the-art in three different literatures by developing novel concepts and approaches:

  1. Welfare regimes (by adding a new type of welfare regime – Hypothesis 1),
  2. Welfare-state development (by re-establishing the centrality of political factors – Hypothesis 2),
  3. Contentious politics (by showing that welfare policy changes are by-products of contemporary contentious politics – Hypothesis 2).

Backchannel Feedback Modeling for Human-Computer Interaction

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2018-2020
Researchers: N. Hussein, B. B. Türker, T. Numanoğlu, E. Kesim, Ö. Z. Bayramoğlu, E. Erzin (PI), Y. Yemez, and T. M. Sezgin

Social robots are expected to understand their users and behave accordingly, just like humans do. Endowing robots with the capability of monitoring user engagement during their interactions with humans is one of the crucial steps towards achieving this goal. In this project, we investigate learning methods to generate head nods and smiles as backchannels, to increase the naturalness of the interaction and to engage humans in human-robot interaction (HRI). In doing so, we also consider user engagement as a part of the learning problem, which was not considered in previous learning-based approaches. User engagement is defined objectively over connection events, such as backchannels, mutual facial gaze and adjacency pairs. We develop systems to track user engagement in HRI scenarios both offline and in real time. We introduce the sequential-random deep Q-network (SRDQN) method to learn a policy for backchannel generation that explicitly maximizes user engagement. When evaluated using off-policy policy evaluation techniques, our SRDQN method outperforms existing vanilla Q-learning methods. Furthermore, we conduct a human-robot experiment with the Furhat robot to verify the effectiveness of SRDQN in a real-time system. The experiment is designed around an interactive social activity with the robot, a story-shaping game. The engagement of the human subjects is computed from audio-visual sensory data. The subjective feedback from participants and the measured engagement values strongly indicate that our framework is a step towards the autonomous learning of socially acceptable backchannel generation behavior.
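
The Q-learning core of such a system can be sketched compactly: a small network maps a window of audio-visual features to one Q-value per backchannel action, and actions are chosen epsilon-greedily. This is a generic DQN skeleton for illustration, not the SRDQN architecture; the feature dimension and action set are assumptions:

    import torch
    import torch.nn as nn

    ACTIONS = ["no_action", "nod", "smile"]  # illustrative backchannel set

    class BackchannelQNet(nn.Module):
        """Maps a window of audio-visual features to a Q-value per action."""
        def __init__(self, feat_dim=64, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, len(ACTIONS)),
            )

        def forward(self, state):
            return self.net(state)

    def select_action(qnet, state, eps=0.1):
        """Epsilon-greedy selection over backchannel actions."""
        if torch.rand(1).item() < eps:
            return ACTIONS[torch.randint(len(ACTIONS), (1,)).item()]
        with torch.no_grad():
            return ACTIONS[qnet(state).argmax().item()]

    qnet = BackchannelQNet()
    state = torch.randn(64)   # stand-in for extracted audio-visual features
    print(select_action(qnet, state))
    # In training, the reward would be the measured user engagement, so the
    # learned policy explicitly maximizes engagement, as in the abstract.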

Transfer Learning of Robotic Skills from Naïve User Demonstrations

Funded by: the Scientific and Technological Research Council of Turkey – TÜBİTAK
Dates: 2017-2020
Principal Investigator: B. Akgün

Robots and related component technologies are getting more capable, affordable and accessible. With the advent of safe collaborative robotic arms, the number of “cage-free” robots is increasing. However, as robots become more ubiquitous, the range of tasks and environments they face grows more complex. Many of these environments, such as households, machine shops, hospitals, and schools, contain people with a wide range of preferences, expectations, assumptions, and levels of technological savviness. Future robot users will want to customize their robots’ behaviors and add new ones, so it is not practical to program robots for all the scenarios they will face when deployed. The field of Learning from Demonstration (LfD) emerged as an answer to this challenge, with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. Most existing LfD approaches learn a new skill from scratch, but there will inevitably be many skills required of the robot, and after a certain point teaching each skill this way becomes tedious. Instead, the robot should transfer knowledge from the skills it has already learned. The aim of this project is to learn robotic skills from users who are not robotics experts and to use previously learned skills to either speed up learning or increase generalization. Towards this end, the project investigates three topics: (1) designing a joint action-goal model to facilitate transfer learning, (2) learning features for skill transfer, and (3) improving existing interactions for LfD or developing new ones for transfer learning.
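
For background, the simplest form of LfD is behavioral cloning: fit a regressor from observed states to demonstrated actions, then roll out the learned policy. The sketch below is a generic illustration on a toy 1-D reaching skill, not the project’s joint action-goal model:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy demonstrations of a 1-D reaching skill: state = (position, goal),
    # demonstrated action = a step of 20% of the remaining distance + noise
    states = rng.uniform(-1, 1, size=(200, 2))
    actions = 0.2 * (states[:, 1] - states[:, 0]) \
        + 0.01 * rng.standard_normal(200)

    # Behavioral cloning with ridge regression: action ≈ states @ w
    lam = 1e-3
    w = np.linalg.solve(states.T @ states + lam * np.eye(2),
                        states.T @ actions)

    # Roll out the learned policy from a new start toward a new goal
    pos, goal = -0.8, 0.9
    for _ in range(30):
        pos += np.array([pos, goal]) @ w
    print(f"final position {pos:.3f}, goal {goal}")  # converges to the goal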