ENSURE: Enabling Self-Driving in Uncertain Real Environments

Funded by: European Research Council
Dates: 2023-2028
Principal Investigator: F. Güney

In ENSURE, the goal is to understand the dynamics of driving under different types of uncertainty in order to achieve safe self-driving in complex real-world situations. These include uncertainties due to the unknown intentions of other agents, such as in negotiation scenarios at intersections, as well as uncertainties due to modelling errors, for example failing to predict the future correctly because of an unknown object on the road. The 5-year project prioritizes the safety and explainability of self-driving technology, using state-of-the-art deep learning techniques to increase its applicability in the real world.
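The two kinds of uncertainty named above are often separated in practice by training an ensemble of predictors: disagreement between ensemble members reflects model (epistemic) uncertainty, while the noise each member itself predicts reflects observation (aleatoric) uncertainty. The sketch below illustrates this generic decomposition on a toy prediction task; it is an assumed illustration, not the project's actual method, and all names and numbers in it are made up.

```python
import numpy as np

# Toy "ensemble" of K trajectory predictors: each returns a predicted mean
# future position plus its own noise estimate (aleatoric uncertainty).
def predict(member_seed, x):
    r = np.random.default_rng(member_seed)
    mean = x + 1.0 + 0.1 * r.normal()   # members disagree slightly (epistemic)
    var = 0.05 + 0.01 * r.random()      # per-member predicted noise (aleatoric)
    return mean, var

K = 5
means, variances = zip(*(predict(k, x=2.0) for k in range(K)))
means, variances = np.array(means), np.array(variances)

aleatoric = variances.mean()   # average predicted observation noise
epistemic = means.var()        # disagreement between ensemble members
print(f"aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}  total={aleatoric + epistemic:.3f}")
```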

Leveraging Neuromarkers for Next-Generation Immersive Systems

Funded by: European Commission (ERA-Net Program)
Dates: 2023-2026
Principal Investigator: M. Sezgin

Brain-Computer Interfaces (BCIs) allow users' cerebral activity to be leveraged for interacting with computer systems. Originally designed to assist users with motor disabilities, BCIs are now increasingly aimed at a larger audience through passive BCI systems, which transparently provide information about the users' mental states. Virtual Reality (VR) technology could benefit greatly from inputs provided by passive BCIs. VR immerses users in 3D computer-generated environments in a way that makes them feel present in the virtual space and, through complete control of the environment, offers applications ranging from training and education to social networking and entertainment. Given the growing interest of society and the investments of major industrial groups, VR is considered a major revolution in Human-Computer Interaction.
However, to this day, VR has not reached its predicted level of democratization and largely remains an entertaining experiment. This can be explained by the difficulty of characterizing users' mental states during interaction and by the inherent lack of adaptation in how virtual content is presented. Studies have shown that users experience VR in different ways. Approximately 60% of users experience "cybersickness", the set of deleterious symptoms that may occur after prolonged use of virtual reality systems, and users can also suffer from breaks in presence and immersion due to rendering and interaction anomalies, which can lead to a poor sense of embodiment in their virtual avatars. In both cases the user's experience is severely degraded, as VR relies strongly on the concepts of telepresence and immersion.
The aim of this project is to pave the way for a new generation of VR systems that leverage the electrophysiological activity of the brain through a passive BCI to raise the level of immersion in virtual environments. The objective is to give VR systems the means to evaluate users' mental states through the real-time classification of EEG data. This will improve users' immersion in VR by reducing or preventing cybersickness and by increasing levels of embodiment through real-time adaptation of the virtual content to the users' mental states as estimated by the BCI.
In order to reach this objective, the proposed methodology is to i) investigate neurophysiological markers associated with early signs of cybersickness, as well as neuromarkers associated with the occurrence of VR anomalies; ii) build on existing signal processing methods for the real-time classification of these markers, associating them with the corresponding mental states; and iii) provide mechanisms for adapting the virtual content to the estimated mental states.
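To make step (ii) concrete, passive BCI pipelines commonly reduce short EEG windows to frequency-band power features and feed them to a lightweight classifier that can run in real time. The sketch below shows that generic pipeline on synthetic data with scikit-learn; the labels, frequency bands, and classifier are illustrative assumptions, not the project's chosen markers or methods.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
fs = 250                                        # sampling rate (Hz), typical for EEG
n_trials, n_channels, n_samples = 200, 8, fs    # 1-second windows

# Synthetic EEG windows and binary labels (e.g., "comfort" vs "early cybersickness")
X_raw = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)

def bandpower(window, fs, lo, hi):
    """Average spectral power of each channel in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(window.shape[-1], d=1 / fs)
    psd = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    band = (freqs >= lo) & (freqs < hi)
    return psd[..., band].mean(axis=-1)

# Features: alpha (8-13 Hz) and theta (4-8 Hz) power per channel
features = np.concatenate(
    [bandpower(X_raw, fs, 8, 13), bandpower(X_raw, fs, 4, 8)], axis=1)

clf = LinearDiscriminantAnalysis().fit(features[:150], y[:150])
print("held-out accuracy:", clf.score(features[150:], y[150:]))
```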

Extensions of an Information-Theoretic Framework for Self-Supervised Learning

Funded by: Google
Dates: 2022-2023
Principal Investigators: A. Erdoğan and D. Yuret

3D Sonification for Seizure Localization

Funded by: Health Institutes of Turkey – TÜSEB
Dates: 2022-2023
Researchers: S. Karamürsel (PI), M. Sezgin, Y. Yemez

Artificial Intelligence Aided Detergent Formula Design and Performance Optimization

Funded by: Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2022-2024
Principal Investigator: M. Sezgin

With the rise in fast-moving consumption habits, integrating artificial intelligence technology into business processes will bring many innovative solutions.

In our project, a digital data library will be designed from past multi-component formulation inputs and the results of spectrophotometric measurements of washes performed on special fabrics, and this data will be transferred to artificial intelligence technology, which can use it more effectively and innovatively than a human. The AI-supported simulation to be developed will generate various laundry detergent formulations and will reduce the need for laboratory experiments and performance tests to a minimum. The project will also be able to improve its prediction system with machine learning as new data are introduced.

By quickly proposing the formulations closest to the desired target, the project's output will reduce chemical and water consumption, providing both sustainability and economic benefits.
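As a minimal sketch of the data-driven surrogate described above, one could fit a regression model that maps formulation inputs to a measured wash-performance score and query it in place of some laboratory tests. The feature names, the synthetic data, and the choice of a random-forest regressor below are illustrative assumptions, not the project's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Hypothetical formulation inputs: surfactant %, enzyme %, builder %, wash temperature (C)
X = rng.uniform([5, 0.1, 10, 20], [20, 2.0, 40, 60], size=(300, 4))
# Hypothetical spectrophotometric stain-removal score (synthetic ground truth)
y = 0.4 * X[:, 0] + 5.0 * X[:, 1] + 0.1 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0, 1, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out formulations:", round(model.score(X_te, y_te), 3))

# Query the surrogate for a candidate formulation instead of running a wash test
candidate = np.array([[15.0, 1.0, 30.0, 40.0]])
print("predicted performance:", round(float(model.predict(candidate)[0]), 2))
```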

Smart Monitoring of Human Motion and Activities in the Production Environment

Funded by: Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2022-2024
Principal Investigator: Y. Yemez

On the assembly lines of the Tofaş Assembly Production Directorate, labor losses occur during production for reasons such as changes in the vehicle mix, vehicle variety, and process variety. Vehicles have different cycle times at the stations they pass through because of differences in versions and options.

The aim of the project is to use artificial intelligence technologies to detect all of the problems mentioned above in real time from images captured on the line, and to produce analyses that allow the relevant staff to manage these processes more efficiently.

SynergyNet: Energy Internet with Blockchain, Smart Contracts and Federated Learning

Funded by: Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2022-2025
Principal Investigator: Ö. Özkasap

The Internet of Energy is an innovative and effective approach that integrates the concept of smart networks with Internet technology. Unlike traditional centralized energy systems, a distributed Energy Internet system with multiple components and communication requirements needs innovative technologies to achieve reliability and efficiency. Emerging and promising distributed technologies such as blockchain, smart contracts, and federated learning offer new opportunities for decentralized Energy Internet systems. Our objective in the SynergyNet project is to develop effective system models, techniques, and algorithms by applying blockchain, smart contract, and distributed federated learning principles to key research problems within the Energy Internet. The SynergyNet project is funded by a TÜBİTAK 2247-A National Research Leaders program research grant. Fully funded PhD student and postdoctoral researcher positions are available.
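For the federated learning component, the basic building block is federated averaging: each node trains on its own local (e.g., smart-meter) data and only model parameters are shared and aggregated. The sketch below illustrates that generic scheme on a toy linear model; it is an assumed illustration of the principle, not the SynergyNet protocol.

```python
import numpy as np

true_w = np.array([2.0, -1.0])

# Each "prosumer" node holds its own local data, which is never shared
def local_data(seed, n=100):
    r = np.random.default_rng(seed)
    X = r.normal(size=(n, 2))
    y = X @ true_w + 0.1 * r.normal(size=n)
    return X, y

def local_sgd(w, X, y, lr=0.05, epochs=5):
    # A few epochs of local gradient descent on the node's own data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

nodes = [local_data(seed) for seed in range(5)]
w_global = np.zeros(2)

for rnd in range(10):                              # federated rounds
    local_models = [local_sgd(w_global, X, y) for X, y in nodes]
    w_global = np.mean(local_models, axis=0)       # aggregator averages parameters only

print("global model after 10 rounds:", np.round(w_global, 3))
```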

Pioneering a New Path in Parallel Programming Beyond Moore’s Law

Funded by: European Commission
Dates: 2021-2026
Principal Investigator: D. Unat

BEYONDMOORE addresses the timely research challenge of solving the software side of the Post Moore crisis. The techno-economical model in computing, known as Moore's Law, has led to an exceptionally productive era for humanity and numerous scientific discoveries over the past 50+ years. However, due to the fundamental limits in chip manufacturing we are about to mark the end of Moore's Law and enter a new era of computing where continued performance improvement will likely emerge from extreme heterogeneity. The new systems are expected to bring a diverse set of hardware accelerators and memory technologies. Current solutions to program such systems are host-centric, where the host processor orchestrates the entire execution. This poses major scalability issues and severely limits the types of parallelism that can be exploited. Unless there is a fundamental change in our approach to heterogeneous parallel programming, we risk substantially underutilizing upcoming systems. BEYONDMOORE offers a way out of this programming crisis and proposes an autonomous execution model that is more scalable, flexible, and accelerator-centric by design. In this model, accelerators have autonomy; they compute, collaborate, and communicate with each other without the involvement of the host. The execution model is powered by a rich set of programming abstractions that enable a program to be modeled as a task graph. To efficiently execute this task graph, BEYONDMOORE will develop a software framework that performs static and dynamic optimizations, issues accelerator-initiated data transfers, and reasons about parallel execution strategies that exploit both processor and memory heterogeneity. To aid the optimizations, a comprehensive cost model that characterizes both target applications and emerging architectures will be devised. Complete success of BEYONDMOORE will enable continued progress in computing which in turn will power science and technology in the life after Moore's Law.
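To make the task-graph abstraction concrete, the sketch below builds a tiny dependency graph and marks each task ready as soon as all of its inputs have been produced, the way autonomous accelerators would pick up work without a host orchestrating every step. The task names and the simple ready-queue execution are illustrative; this is not the BEYONDMOORE runtime.

```python
from collections import defaultdict, deque

# Task graph: each task lists the tasks it depends on
deps = {
    "load_A": [], "load_B": [],
    "gemm":   ["load_A", "load_B"],
    "reduce": ["gemm"],
    "store":  ["reduce"],
}

# Build reverse edges and in-degree counts (Kahn's algorithm)
children = defaultdict(list)
indegree = {t: len(d) for t, d in deps.items()}
for task, parents in deps.items():
    for parent in parents:
        children[parent].append(task)

ready = deque(t for t, n in indegree.items() if n == 0)
order = []
while ready:
    task = ready.popleft()          # in a real runtime, an idle accelerator picks this up
    order.append(task)
    for child in children[task]:
        indegree[child] -= 1
        if indegree[child] == 0:    # all inputs produced: child becomes ready
            ready.append(child)

print("execution order:", order)
```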

Seeing Through Events: End-to-End Approaches to Event-Based Vision Under Extremely Low-Light Conditions

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2021-2025
Principal Investigator: E. Erdem; Co-Investigators: A. Erdem, F. Güney

Event camera technology, developed and improved over the past decade, represents a paradigm shift in how we acquire visual data. In contrast to standard cameras, event cameras contain bio-inspired vision sensors that asynchronously respond to relative brightness changes at each pixel in the camera array, producing a sequence of "events" generated at a variable rate rather than fixed-rate frames. Hence, they provide very high temporal resolution (on the order of microseconds), high dynamic range, low power consumption, and no motion blur. However, because they adopt a fundamentally different design, processing their outputs and unlocking their full potential also require radically new methods. The goal of our project is to contribute to this newly emerging field of event-based vision.
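Since the sensor output is a stream of per-pixel (x, y, timestamp, polarity) events rather than frames, even basic processing typically starts by aggregating events over short time windows into a frame- or voxel-like grid that a neural network can consume. The sketch below shows this common preprocessing step on synthetic events; it is a generic illustration, not a method proposed in the project.

```python
import numpy as np

rng = np.random.default_rng(3)
H, W = 64, 64
n_events = 10_000

# Synthetic event stream: pixel coordinates, timestamps (s), polarity (+1/-1)
xs = rng.integers(0, W, n_events)
ys = rng.integers(0, H, n_events)
ts = np.sort(rng.uniform(0.0, 0.05, n_events))    # 50 ms of events
ps = rng.choice([-1, 1], n_events)

def events_to_voxel_grid(xs, ys, ts, ps, bins=5):
    """Accumulate signed events into a (bins, H, W) grid by time slice."""
    grid = np.zeros((bins, H, W), dtype=np.float32)
    t_norm = (ts - ts[0]) / (ts[-1] - ts[0] + 1e-9)
    b = np.minimum((t_norm * bins).astype(int), bins - 1)
    np.add.at(grid, (b, ys, xs), ps)              # scatter-add events into time bins
    return grid

voxels = events_to_voxel_grid(xs, ys, ts, ps)
print("voxel grid shape:", voxels.shape, "net signed events:", int(voxels.sum()))
```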

When compared to event cameras, yet another crucial drawback of traditional cameras is their inability to deal with low-light conditions, which is usually addressed by employing a longer exposure time to let more light in. This is, however, problematic if the scene to be captured involves dynamic objects or when the camera is in motion, which results in blurry regions. To this end, our project will explore ways to take advantage of event data to improve standard cameras. More specifically, we will investigate enhancing the quality of dark videos as well as accurately estimating optical flow under extremely low-light conditions with the guidance of complementary event data. Toward these goals, we will explore novel deep architectures for constructing intensity images from events and also collect new synthetic and real video datasets to effectively train our models and better test their capabilities.

Our project will provide novel ways to process event data using deep neural networks and will offer hybrid approaches that bring traditional cameras and event cameras together to solve crucial challenges we face when capturing and processing videos in the dark. The neural architectures that will be explored in this research project can also be applied to other event-based computer vision tasks. Moreover, as commercially available high-resolution event sensors begin to appear, we believe that, beyond its scientific impact, our project also has the potential to be commercialized as part of camera systems for future smartphones, mobile robots, or autonomous vehicles of any kind.

Explainable DL Approaches for Image/Video Repair and Compression

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2021-2024
Principal Investigator: M.A. Tekalp

Quality Assessment of 360-Degree Videos Guided by Audio-Visual Saliency

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2021-2024
Principal Investigator: A. Erdem

VERA: Data Movement Detectives

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2021-2024
Principal Investigator: D. Unat

VERA aims to develop diagnostic tools for data movement, which is the main source of performance and energy inefficiency in parallel software. Technological advances and big data have increased the importance of data, and data has become more critical than computation for both the energy consumption and the performance of software. Therefore, there is a need for performance tools that automatically detect and measure data movement in the memory hierarchy and between cores.

VERA will develop data movement tools that track and analyze data in parallel programs and are much faster, more comprehensive, more scalable, and more accurate than previous efforts.
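A back-of-the-envelope estimate shows why such tools matter: for a dense matrix multiply, the bytes moved through memory can dwarf the arithmetic unless the computation is blocked to reuse data in cache. The sketch below works through that rough estimate; the cache size, blocking rule, and traffic formulas are textbook-style simplifications assumed for illustration, not VERA's actual cost model.

```python
# Rough traffic estimate for C = A @ B with n x n double-precision matrices.
n = 4096
bytes_per_word = 8
flops = 2 * n ** 3                                              # multiply-adds

# Naive triple loop: every element of B is effectively re-read for each row of A.
naive_traffic = (n * n + n ** 3 + n * n) * bytes_per_word       # read A, re-read B, write C

# Cache-blocked version: block size b chosen so three b x b tiles fit in a 1 MiB cache.
cache_bytes = 1 << 20
b = int((cache_bytes / (3 * bytes_per_word)) ** 0.5)
blocked_traffic = (2 * n ** 3 / b + n * n) * bytes_per_word     # classic O(n^3 / b) bound

print(f"block size: {b}")
print(f"naive  : {naive_traffic / 1e9:6.1f} GB moved, {flops / naive_traffic:.2f} flops/byte")
print(f"blocked: {blocked_traffic / 1e9:6.1f} GB moved, {flops / blocked_traffic:.2f} flops/byte")
```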

Cardiovascular Stress Impacts on Neuronal Function: Intracellular Pathways to Cognitive Impairment

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2021-2024
Principal Investigator: A. Gürsoy

Vascular Cognitive Impairment (vCI) is known to be tightly linked to cardiovascular disease (CVD). The main purpose of the CardioStressCI project is to identify and validate causative mechanisms connecting both conditions. We will use an interdisciplinary approach that combines in vitro research with bioinformatics, systems biology modeling and clinical database analysis. We will use network-based disease gene prioritization algorithms to rank the relevance of genes in CI and CVD, and correlate the results to establish interaction networks, which will be modeled using systems biology approaches. Predictions will be validated experimentally with human samples and cell and animal models, to investigate and confirm how individual components of these networks may influence the responses to the different CVD pathological stresses that lead to CI. The main aims of CardioStressCI are: i) to identify proteins linked to CI and CVD; ii) to establish the contribution of nitro-oxidative stress to CVD and CI; iii) to study how CVD induces neuronal dysfunction; iv) to elucidate the pattern of inflammasome activation in CI and CVD; v) to determine vCI biomarkers. Our studies will consider gender aspects, age, and socio-economic and lifestyle factors as potential modulators of CI pathophysiology. We aim to increase knowledge of the molecular mechanisms that contribute to CI in the presence of CVD. Such knowledge can inform new directions for improving diagnosis and prevention and for identifying new therapeutic targets against CI, and even against CVD, preventing its consequences for brain function.
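As an illustration of the network-based prioritization step, a common baseline is a random walk with restart (equivalently, personalized PageRank) over a protein interaction network seeded with known disease genes, after which candidate genes are ranked by their visiting probability. The toy network below uses networkx; the genes and edges are chosen only for illustration and do not represent the project's data.

```python
import networkx as nx

# Toy protein-protein interaction network (edges are illustrative, not real data)
G = nx.Graph()
G.add_edges_from([
    ("NOS3", "AKT1"), ("AKT1", "APP"), ("APP", "MAPT"),
    ("NOS3", "IL1B"), ("IL1B", "NLRP3"), ("NLRP3", "CASP1"),
    ("AKT1", "GSK3B"), ("GSK3B", "MAPT"),
])

# Seeds: genes assumed here to be already associated with CVD and with cognitive impairment
seeds = {"NOS3": 1.0, "MAPT": 1.0}

# Random walk with restart == personalized PageRank on the seed distribution
scores = nx.pagerank(G, alpha=0.85, personalization=seeds)

# Rank candidate genes by their steady-state visiting probability
for gene, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{gene:6s} {score:.3f}")
```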

Deep Learning-Enabled Crowd Density Estimation for Cell Analysis in Digital Pathology and Characterization of Homologous Recombination Deficiency in High-Grade Ovarian Serous Carcinoma

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2021-2024
Principal Investigator: Ç. Gündüz Demir

Shape-Preserving Deep Neural Networks for Instance Segmentation in Medical Images

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2021-2023
Principal Investigator: Ç. Gündüz Demir

Fully Convolutional Networks for Semantic Segmentation Using 3D Fractal and Poincare Maps

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2021-2023
Principal Investigator: Ç. Gündüz Demir

Perception Based Sketch Processing

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2021-2023
Researchers: M. Sezgin (PI), E. Dede

Sketching is a natural and intuitive means of communication for expressing a concept or an idea. Sketches have various application areas such as problem solving, design, and art. The increase in the use of touch- and pen-based devices has enabled sketches to be used in human-computer interaction and made sketch recognition an active research area. Our team believes that perception-based models can contribute to the sketch recognition task. To integrate perception into our model, we approach the sketch recognition problem from an interdisciplinary perspective.


Diagnostic Tools for Communication Pathologies in Parallel Architectures

Funded by: Newton Fund
Dates: 2021-2023
Principal Investigator: D. Unat

Video Understanding for Autonomous Driving

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2020-2023
Principal Investigator: F. Güney

The Road Less Travelled: One-Shot Learning of Rare Events in Autonomous Driving

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2020-2023
Principal Investigator: F. Güney

SparCity: An Optimization and Co-Design Framework for Sparse Computation

Funded by: European Commission (H2020-JTI-EuroHPC-2019)
Dates: 2021-2023
Principal Investigator: D. Unat

Perfectly aligned with the vision of the EuroHPC Joint Undertaking, the SparCity project aims at creating a supercomputing framework that will provide efficient algorithms and coherent tools specifically designed for maximising the performance and energy efficiency of sparse computations on emerging HPC systems, while also opening up new usage areas for sparse computations in data analytics and deep learning.  The framework enables comprehensive application characterization and modeling, performing synergistic node-level and system-level software optimizations. By creating a digital SuperTwin, the framework is also capable of evaluating existing hardware components and addressing what-if scenarios on emerging architectures and systems in a co-design perspective. To demonstrate the effectiveness, societal impact, and usability of the framework, the SparCity project will enhance the computing scale and energy efficiency of four challenging real-life applications that come from drastically different domains, namely, computational cardiology, social networks, bioinformatics and autonomous driving. By targeting this collection of challenging applications, SparCity will develop world-class, extreme scale and energy-efficient HPC technologies, and contribute to building a sustainable exascale ecosystem and increasing Europe’s competitiveness.
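The canonical sparse kernel behind several of these applications is sparse matrix-vector multiplication (SpMV) over a compressed sparse row (CSR) matrix, whose irregular, indirect memory accesses are exactly what makes data movement rather than arithmetic the bottleneck. The reference CSR SpMV below is a plain illustration of the data structure and access pattern, not SparCity's optimized implementation.

```python
import numpy as np

# CSR representation of a small sparse matrix:
# [[4, 0, 9, 0],
#  [0, 7, 0, 0],
#  [0, 0, 0, 5],
#  [2, 0, 0, 3]]
values  = np.array([4.0, 9.0, 7.0, 5.0, 2.0, 3.0])
col_idx = np.array([0, 2, 1, 3, 0, 3])
row_ptr = np.array([0, 2, 3, 4, 6])    # row i occupies values[row_ptr[i]:row_ptr[i+1]]

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a CSR matrix A: one indirect load of x per stored nonzero."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]   # gather through col_idx (irregular access)
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
print(spmv_csr(values, col_idx, row_ptr, x))    # expect [31., 14., 20., 14.]
```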

Exact Dynamics of Online and Distributed Learning Algorithms for Large-Scale Nonconvex Optimization Problems

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2020-2023
Principal Investigator: Z. Doğan

We are experiencing a data-driven revolution at the moment, with data being collected at an unprecedented rate. In particular, there is increasing excitement about autonomous systems with learning capabilities. Several data-driven applications have already shown significant practical benefit, revealing the power of having access to more data, e.g., health care systems, self-driving cars, instant machine translation, and recommendation systems. However, wide acceptance of such systems heavily depends on their stability, tractability, and reproducibility, and current applications fall short of providing such features. The scale and complexity of modern datasets often render classical data processing techniques infeasible, and therefore several new algorithms are required to address the technical challenges associated with the nature of the data.

This project focuses on developing efficient and tractable solutions for large-scale learning problems encountered in machine learning and signal processing. Apart from theoretical aspects, the project has specific goals targeting applications in principal subspace estimation, low-rank matrix factorization, tensor decomposition, and deep learning for large-scale systems. Specifically, this novel approach brings together several attractive features:

  • The emerging concept of online-learning will be adapted to a distributed setting across a decentralized network topology.
  • The exact dynamics of the algorithms will be extracted with a stochastic-process analysis method, something that current state-of-the-art methods cannot deliver.
  • By studying the extracted dynamics, the learning capabilities and performance of large-scale systems will be improved to match the current needs and challenges of modern data-driven applications (a minimal sketch of online principal subspace estimation is given after this list).
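As a minimal instance of the online subspace estimation problem referenced above, Oja's rule updates a principal-component estimate one sample at a time without ever forming a covariance matrix. It is a standard textbook baseline, shown here purely for illustration rather than as the project's proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data stream whose leading principal direction is (1, 1)/sqrt(2)
u_true = np.array([1.0, 1.0]) / np.sqrt(2)
def next_sample():
    return 3.0 * rng.normal() * u_true + 0.3 * rng.normal(size=2)

# Oja's rule: online estimation of the top principal component,
# one sample at a time, no covariance matrix ever formed.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
lr = 0.01
for t in range(5000):
    x = next_sample()
    y = w @ x
    w += lr * y * (x - y * w)      # Hebbian update with self-normalizing term
    w /= np.linalg.norm(w)         # keep the estimate on the unit sphere

print("alignment with true component:", abs(w @ u_true))   # close to 1.0
```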

Analysis of Training Dynamics on Artificial Neural Networks Using Methods of Non-Equilibrium Thermodynamics

Funded by: the Scientific and Technological Research Council of Türkiye – TÜBİTAK
Dates: 2020-2022
Researchers: A. Kabakçıoğlu (PI) and D. Yuret

The interface between physics and machine learning is a fast-growing field of research. While most studies in this frontier involve using deep learning methods to extract physical knowledge from experimental data or theoretical models, relatively little is known about the nontrivial dynamics of training on artificial neural networks. The analytical framework of nonequilibrium physics developed in the last two decades provides versatile tools that find a novel application in this context. The dynamics of machine learning displays some interesting features not found in physical systems, such as a nontrivial noise structure and a resulting non-thermal steady state, as we observed in our preliminary investigations. The proposed study aims to apply the know-how existing in the nonequilibrium physics literature to this modern problem and explore the implications of various universal laws (originally devised for microscopic systems and expressed in the language of statistical physics) for machine learning. We plan to employ well-known machine learning problems, such as MNIST or CIFAR, as well as some toy models, as the test ground for analytical predictions. The research team is composed of Dr. Deniz Yuret (Koç Univ), who is an expert in machine learning and the developer of the deep learning package (Knet) for the increasingly popular Julia platform, Dr. Michael Hinczewski (CWRU, USA), who has made important contributions to the literature on nonequilibrium aspects of biological systems, and Dr. Alkan Kabakçıoğlu (Koç Univ, PI), a computational statistical physicist whose recent studies focus on fluctuations and scaling properties of nonequilibrium processes in biomolecules. The proposed research will be conducted in the Department of Physics at Koç University and is expected to last two years.

Video Understanding for Autonomous Driving

Funded by: European Research Commission
Dates: 2020-2022
Researchers: F. Güney (PI) and D. Yuret

The researcher, Dr. Fatma Güney, will carry out a fellowship to create the technology needed to understand the content of videos in a detailed, human-like manner, superseding the current limitations of static image understanding methods and enabling more robust perception for autonomous driving agents. The fellowship will be carried out at Koç University under the supervision of Prof. Deniz Yuret. Understanding the surrounding scene with detailed, human-level reliability is essential for addressing complex situations in autonomous driving. State-of-the-art machine vision systems are very good at analysing static images by detecting and segmenting objects in a single image, but relating them across time in a video remains a challenge. Our goal in this proposal is to extend the success of static image understanding to the temporal domain by equipping machines with the ability to interpret videos using both appearance and motion cues with human-like ability. Several challenges make this difficult for machines, such as cluttered backgrounds, the variety and complexity of motion, and partial occlusions due to people interacting with each other.