Funded by: Google LLC
Dates: 2020-2021
Principal Investigators: Alper Erdoğan and Deniz Yüret
Funded by: European Research Council
Dates: 2017-2021
Principal Investigators: Erdem Yörük and Deniz Yüret
Can we say that emerging market economies are developing a new welfare regime? If so, what has caused this?
This project has two hypotheses:
Hypothesis 1: China, Brazil, India, Indonesia, Mexico, South Africa and Türkiye are forming a new welfare regime that differs from the liberal, corporatist and social democratic welfare regimes of the global north on the basis of expansive and decommodifying social assistance programmes for the poor.
Hypothesis 2: This new welfare regime is emerging principally as a response to the growing political power of the poor as a dual source of threat and support for governments.
The project challenges and expands the state of the art in three different literatures by developing novel concepts and approaches.
Funded by: Scientific and Technological Research Council of Türkiye (TÜBİTAK)
Dates: 2019-2021
Principal Investigator: Didem Unat
The aim of this project is to achieve effortless parallelization of deep neural networks. Hybrid-parallel approaches that blend data and model parallelism, with particular emphasis on the model-parallel approach, will be applied to the model automatically, together with various program optimizations. The project will develop a series of optimization techniques that let the user exploit the devices in the underlying hardware system efficiently, without any code changes, according to the topology of the system and the structure of the deep neural network being trained. The proposed improvements will be implemented in popular deep learning frameworks such as TensorFlow and MXNet, which represent deep neural network models as data-flow graphs.
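As a minimal sketch of the model-parallel idea (hypothetical layer sizes and device strings, not the project's actual implementation, which performs this placement automatically), layers of a single network can be pinned to different devices, with activations crossing devices at the split point:

```python
import tensorflow as tf

# Hypothetical two-GPU model-parallel split: the first half of the
# network lives on GPU 0, the second half on GPU 1. Activations are
# transferred between devices at the split point during the forward pass.
class SplitModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        with tf.device("/GPU:0"):
            self.block1 = tf.keras.layers.Dense(4096, activation="relu")
        with tf.device("/GPU:1"):
            self.block2 = tf.keras.layers.Dense(10)

    def call(self, x):
        with tf.device("/GPU:0"):
            h = self.block1(x)
        with tf.device("/GPU:1"):
            return self.block2(h)
```

The project's contribution lies in choosing such splits automatically from the network structure and hardware topology, rather than hand-coding them as above.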
Funded by: Scientific and Technological Research Council of Türkiye (TÜBİTAK)
Dates: 2018-2021
Principal Investigator: Çağatay Başdoğan
Capacitive touch screens are nowadays an indispensable part of smartphones, tablets, kiosks, and laptop computers. They detect our finger position and enable us to interact with the text, images, and data displayed by these devices. To further improve these interactions, there is growing interest in the research community in displaying active tactile feedback to users through capacitive screens. One approach followed for this purpose is to control the friction force between the user's finger pad and the screen via electrostatic actuation. If an alternating voltage is applied to the conductive layer of a touch screen, an attraction force is generated between the finger and the surface. This force modulates the friction between the surface and the skin of the finger moving on it. Hence, one can generate different haptic effects on a touch screen by controlling the amplitude, frequency, and waveform of this input voltage. These haptic effects could be used to develop new intelligent user interfaces for applications in education, data visualization, and digital games. However, this area of research is new, and we do not yet fully understand the electromechanical interactions between the human finger and a touch screen actuated by electrostatic forces, or the effect of these interactions on our haptic perception. Hence, the aim of this project is to investigate in depth the electromechanical interactions between the human finger and an electrostatically actuated touch screen. In particular, we will investigate the effect of the following factors on the frictional forces between finger and screen: (a) the amplitude of the voltage applied to the conductive layer of the touch screen, (b) the normal force applied by the finger on the touch screen, and (c) the finger speed. The results of this study will not only enable us to better understand, from a scientific point of view, the physics of the interactions between the human finger and a touch screen actuated by electrostatic forces, but will also provide guidelines on how to program a touch screen to generate desired haptic effects for various applications.
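To make the voltage-friction relationship concrete, a commonly used first-order model from the electrovibration literature (a simplified sketch with assumed constants, not the project's measured model) treats the electrostatic attraction as proportional to the square of the applied voltage and adds it to the normal load in a Coulomb friction law:

```python
import numpy as np

def friction_force(v_amplitude, f_normal, mu=0.6, k=1e-4):
    """Simplified electrovibration friction model (illustrative only).

    v_amplitude : applied AC voltage amplitude (V)
    f_normal    : normal force applied by the finger (N)
    mu          : assumed finger-glass friction coefficient
    k           : lumped electromechanical constant (N/V^2), a free
                  parameter standing in for permittivity, contact area,
                  and dielectric thickness
    """
    f_electrostatic = k * v_amplitude**2   # attraction scales with V^2
    return mu * (f_normal + f_electrostatic)

# Example: with these assumed constants, a 100 V amplitude adds ~1 N
# of electrostatic load, raising the friction force to ~0.9 N.
print(friction_force(100.0, 0.5))
```

The quadratic dependence on voltage is why modulating the amplitude and waveform of the input signal, as described above, directly shapes the perceived friction.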
Funded by: Scientific and Technological Research Council of Türkiye (TÜBİTAK)
Dates: 2018-2021
Principal Investigator: Barış Akgün
Robots and related component technologies are becoming more capable, affordable, and accessible. With the advent of safe collaborative robotic arms, the number of "cage-free" robots is increasing. However, as robots become more ubiquitous, the range of tasks and environments they face grows more complex. Many of these environments, such as households, machine shops, hospitals, and schools, contain people with a wide range of preferences, expectations, assumptions, and levels of technological savviness. Future robot users will want to customize their robots' behaviors and add new ones. Thus, it is not practical to program robots for all the scenarios they will face when deployed. The field of Learning from Demonstration (LfD) emerged as an answer to this challenge, with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. Most existing LfD approaches learn a new skill from scratch, but a robot will inevitably be required to perform many skills, and after a certain point teaching each skill this way becomes tedious. Instead, the robot should transfer knowledge from its already learned skills. The aim of this project is to learn robotic skills from non-robotics experts and to use previously learned skills to either speed up learning or increase generalization. Toward this end, the project investigates three topics: (1) designing a joint action-goal model to facilitate transfer learning, (2) learning features for skill transfer, and (3) improving existing interactions for LfD or developing new ones for transfer learning.
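As an illustrative sketch of this setting (hypothetical data layout and a naive blending rule, not the project's actual action-goal model), a skill can be summarized from several aligned demonstrations, and an already learned skill can warm-start a related one:

```python
import numpy as np

def learn_skill(demonstrations):
    """Fit a simple keyframe-style skill model from demonstrations.

    demonstrations: list of arrays, each (n_keyframes, dof), assumed
    time-aligned. Returns the per-keyframe mean and std, which a robot
    could track with a low-level controller.
    """
    demos = np.stack(demonstrations)        # (n_demos, n_keyframes, dof)
    return demos.mean(axis=0), demos.std(axis=0)

def transfer_skill(source_mean, new_demos, weight=0.5):
    """Warm-start a new skill from a learned one (naive linear blend),
    so fewer demonstrations of the new skill are needed."""
    new_mean, _ = learn_skill(new_demos)
    return weight * source_mean + (1 - weight) * new_mean
```

A real transfer mechanism would, of course, decide what to reuse from the source skill rather than blend blindly; that decision is exactly what topics (1) and (2) above target.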
Funded by: Scientific and Technological Research Council of Türkiye (TÜBİTAK)
Dates: 2018-2021
Principal Investigator: Çağatay Başdoğan
In the near future, humans and robots are expected to perform collaborative tasks involving physical interaction in environments as varied as homes, hospitals, and factories. One important research topic in physical human-robot interaction (pHRI) is developing natural haptic communication between the partners. Although there is already a large body of work on human-robot interaction, studies investigating physical interaction between the partners, and haptic communication in particular, are limited, and the interaction in such systems is still artificial compared to natural human-human collaboration. While collaborative tasks involving physical interaction, such as assembly/disassembly of parts and transportation of an object, can be planned and executed naturally and intuitively by two humans, there are unfortunately no robots on the market that can collaborate with us to perform the same tasks. In this project, we propose fractional order adaptive control for pHRI systems. The main goal of the project is to adapt the admittance parameters of the robot in real time during the task, based on changes in human and environment impedances, while balancing the trade-off between the stability and the transparency of the coupled system. To the best of our knowledge, there is no earlier study in the literature utilizing a fractional order admittance controller for pHRI. Compared to an integer order controller, a fractional order controller allows the use of fractional order derivatives and integrators, which brings flexibility in modeling and controlling the dynamics of the physical interaction between the human operator and the robot. Moreover, there is no study in the literature investigating the real-time adaptation of the control parameters of a fractional order admittance controller via machine learning. Machine learning algorithms will enable us to learn from data iteratively, estimate human intention during the task, and then select control parameters accordingly to optimize task performance.
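To make the fractional order idea concrete: an integer order admittance Y(s) = 1/(ms + b) maps the measured interaction force to a velocity command, while a fractional order admittance replaces the derivative with one of non-integer order α, Y(s) = 1/(m·s^α + b). The sketch below (assumed discretization, not the project's controller) approximates the order-α derivative of a sampled signal with the Grünwald-Letnikov definition:

```python
import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    """Grünwald-Letnikov approximation of the order-alpha derivative
    of a uniformly sampled signal x with sample time dt (illustrative)."""
    n = len(x)
    # Coefficients w_k = (-1)^k * C(alpha, k), computed recursively.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    d = np.empty(n)
    for i in range(n):
        # Weighted sum over the signal history up to sample i.
        d[i] = np.dot(w[: i + 1], x[i::-1]) / dt**alpha
    return d
```

With α = 1 this reduces to the ordinary first difference; intermediate values of α interpolate between damping-like and inertia-like behavior, which is the modeling flexibility referred to above.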
Funded by: Scientific and Technological Research Council of Türkiye (TÜBİTAK)
Dates: 2018-2021
Principal Investigator: Murat Tekalp
The advent of deep learning is changing how we do 2D/3D image/video processing, including image/video restoration, interpolation, super-resolution, motion analysis/tracking, compression, and light-field and hologram processing. Various deep neural network (DNN) architectures, such as convolutional neural networks (CNN), auto-encoders, recurrent neural networks (RNN), and generative adversarial networks (GAN), have already been applied to different image/video processing problems. The question then arises whether data-driven deep networks and the associated learning algorithms have become the dominant solution to all image/video processing problems, in contrast to traditional human-engineered, hand-crafted algorithms built on domain-specific signals-and-systems models. The answer is almost surely affirmative, and deep image/video processing methods are poised to replace a large part of the traditional image/video processing pipeline.
Yet deep signal processing is a very young field: the science of DNNs, and how they produce such striking image/video processing results, is not sufficiently well understood, and more research is needed for a clear theoretical understanding of which DNN architectures work best for which image/video processing problems and how we can obtain better and more stable results. The current successes of deep learning in image/video processing are experimentally driven, more or less by trial and error. There are several open challenges, e.g., the ImageNet large-scale visual recognition challenge, visual object tracking (VOT), large-scale activity recognition (ActivityNet), and single-image super-resolution (NTIRE), and a different network architecture wins each of these competitions every year. Few formal works exist to explain the mathematics behind this.
This project will explore the potential for breakthroughs in image and video processing using new deep learning algorithms guided by machine-learned signal models. We believe that the relatively less studied areas of residual learning, adversarial learning, and reinforcement learning offer high potential for image and video processing. The project will investigate some fundamental questions within a formal framework and explore the potential for further breakthroughs, including problems that have not yet been addressed with DNNs, such as motion-compensated video processing, video compression, and light-field and hologram processing/compression, using deep learning guided by big-data-driven learned signal models. The proposed research is groundbreaking because it brings in new ideas that can revolutionize the way we do image/video processing, rendering some traditional algorithms obsolete.
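As a small illustration of the residual learning idea mentioned above (a hypothetical toy architecture, not a model from this project), a restoration network can be trained to predict only the residual between the degraded input and the clean target, which typically eases optimization:

```python
import tensorflow as tf

def residual_denoiser(channels=3, filters=64, depth=5):
    """Tiny residual restoration net (illustrative): the convolutional
    trunk predicts the residual (e.g. the noise), which is subtracted
    from the input, following the global residual learning idea."""
    x_in = tf.keras.Input(shape=(None, None, channels))
    h = x_in
    for _ in range(depth):
        h = tf.keras.layers.Conv2D(filters, 3, padding="same",
                                   activation="relu")(h)
    residual = tf.keras.layers.Conv2D(channels, 3, padding="same")(h)
    # Output = input minus predicted residual (restored image).
    out = tf.keras.layers.Subtract()([x_in, residual])
    return tf.keras.Model(x_in, out)
```

Because the network only has to model the (typically small) residual signal rather than the full image, gradients stay better conditioned, which is one reason residual learning has worked well in restoration and super-resolution.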
Funded by: Scientific and Technological Research Council of Türkiye (TÜBİTAK)
Dates: 2017-2020
Principal Investigator: Aykut Erdem
Funded by: Scientific and Technological Research Council of Türkiye (TÜBİTAK)
Dates: 2018-2020
Principal Investigator: Engin Erzin
Funded by: Saudi Aramco
Dates: 2017-2020
Principal Investigator: Didem Unat