Development and analysis of new generation ML algorithms:
Current ML approaches based on deep learning (DL) have achieved superhuman performance, especially in pattern-recognition tasks. However, current deep neural networks (DNNs) require substantial amounts of labeled data for proper training, existing ML platforms are bulky and energy-inefficient compared to natural solutions, and theory typically lags behind practice, leaving several successful approaches without rigorous justification. In our research, we are developing novel ML frameworks to address these concerns:
Development of software platforms and resources for ML:
One of the key enablers of the current AI revolution is the use of appropriate software tools and platforms for the efficient implementation of ML algorithms on advanced hardware. Building ML software that enables fast, low-complexity implementations for handling large-scale data is one of the main focuses of our research. Knet, the Koç University Deep Learning Platform developed at our university, is the only ML platform developed in Türkiye that can compete with popular alternatives such as TensorFlow and PyTorch. We believe that having full command of every layer of the software stack is critical for further innovation and applications in the ML field.
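To illustrate what sits at the core of such platforms, the sketch below implements a minimal scalar reverse-mode automatic differentiation engine in Python. It is a toy illustration of the mechanism that frameworks like Knet, TensorFlow, and PyTorch build on; the Var class, its methods, and the example function are hypothetical and do not reflect Knet's actual API.

```python
# Minimal scalar reverse-mode autodiff sketch (illustrative only; names are
# hypothetical and do not reflect Knet's actual API).

class Var:
    def __init__(self, value, parents=(), grad_fn=None):
        self.value = value        # scalar value of this node
        self.parents = parents    # upstream nodes in the computation graph
        self.grad_fn = grad_fn    # maps upstream grad to local parent grads
        self.grad = 0.0           # accumulated gradient dL/d(this node)

    def __add__(self, other):
        return Var(self.value + other.value, (self, other),
                   lambda g: (g, g))

    def __mul__(self, other):
        return Var(self.value * other.value, (self, other),
                   lambda g: (g * other.value, g * self.value))


def backward(output):
    """Topologically order the graph, then push gradients from output to inputs."""
    order, seen = [], set()

    def visit(node):
        if id(node) in seen:
            return
        seen.add(id(node))
        for p in node.parents:
            visit(p)
        order.append(node)

    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        if node.grad_fn is None:
            continue
        for parent, local in zip(node.parents, node.grad_fn(node.grad)):
            parent.grad += local


# Example: f(x, w) = w * x + w, so df/dw = x + 1 and df/dx = w.
x, w = Var(3.0), Var(2.0)
y = w * x + w
backward(y)
print(y.value, w.grad, x.grad)   # 8.0, 4.0, 2.0
```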
Computational neuroscience and biologically inspired learning algorithms:
Although it is debatable whether the human brain is an example of general intelligence, it is undoubtedly the best-known intelligent device with diverse cognitive abilities, the product of hundreds of millions of years of natural optimization. We believe that joining the global effort to model the inner workings of the brain will also be fruitful for inspiring novel algorithms with better performance and lower implementation requirements. Although a global theory of the brain is still missing and it remains largely a black box, it is clear that even rough modeling of biological neural networks led to today's powerful DNN architectures. It is therefore both exciting and informative to investigate different neuron models, signaling and network structures, different time scales of learning, and the relevant physical, statistical, and optimization frameworks within the scope of biological intelligence. In the other direction, we aim to apply the algorithmic tools developed in ML to the analysis of neural imaging data such as EEG, MEG, fMRI, EMG, and calcium imaging. Such efforts would also be useful for diseases of neural origin such as Alzheimer's disease and epilepsy.
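As a concrete example of the kind of neuron-level learning dynamics mentioned above, the sketch below implements Oja's rule, a classic biologically inspired Hebbian learning rule under which a single linear neuron's weights converge to the leading principal component of its inputs. The data, learning rate, and number of steps are arbitrary toy choices; this is an illustration of the general idea, not a model used in our research.

```python
import numpy as np

# Oja's rule: a Hebbian update with a decay term, dw = eta * y * (x - y * w).
# A single linear neuron trained this way converges to the first principal
# component of its input distribution. Illustrative sketch only.

rng = np.random.default_rng(0)

# Synthetic 2-D inputs whose variance is largest along the (1, 1) direction.
cov = np.array([[3.0, 2.0],
                [2.0, 3.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01

for x in X:
    y = w @ x                      # neuron output (linear activation)
    w += eta * y * (x - y * w)     # Hebbian growth term + Oja's decay term

# Compare with the leading eigenvector of the sample covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pc1 = eigvecs[:, np.argmax(eigvals)]
print("learned w:   ", w / np.linalg.norm(w))
print("leading PC:  ", pc1)       # should match up to sign
```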
Rich and explainable DL:
Explainability of DL algorithms is essential for increasing their reliability and thus enabling the deployment of ML, especially in safety-critical applications. Understanding and explaining the learning behavior, providing mathematically sound uncertainty bounds, and increasing robustness, e.g., against adversarial examples, are major milestones. On the application front, there is growing excitement toward autonomous systems with learning capabilities. Several data-driven applications, e.g., healthcare systems and self-driving cars, have shown significant practical benefit, revealing the power of access to more data. However, broad acceptance of such systems still depends on their stability, tractability, and reproducibility, features that current applications fall short of providing. The scale and complexity of modern datasets often render classical techniques infeasible, and therefore new algorithms are required to address the technical challenges posed by the nature of the data.
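To make the adversarial-example concern concrete, the sketch below applies the fast gradient sign method (FGSM) to a toy logistic-regression classifier in NumPy: a small, sign-of-gradient perturbation is enough to flip a confident prediction. The model weights, input, and perturbation budget are arbitrary illustrative choices, not a method or result from our work.

```python
import numpy as np

# Fast gradient sign method (FGSM) against a toy logistic-regression model.
# Illustrative sketch only: weights, input, and epsilon are arbitrary.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" linear classifier: p(y = 1 | x) = sigmoid(w @ x + b).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Perturb x by eps in the direction that increases the cross-entropy loss."""
    p = predict(x)
    grad_x = (p - y) * w            # d(cross-entropy)/dx for this linear model
    return x + eps * np.sign(grad_x)

x = np.array([0.2, -0.4, 1.0])      # clean input, true label y = 1
y = 1.0

x_adv = fgsm(x, y, eps=0.5)
print("clean prediction:      ", predict(x))      # ~0.85, confident class 1
print("adversarial prediction:", predict(x_adv))  # ~0.43, flips toward class 0
```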
Tractable learning methods for non-convex optimization:
Despite the remarkable empirical success of non-convex optimization frameworks in data-driven fields, mainly thanks to popular DL models, their theoretical understanding is very limited. Although the so-called 'black-box' approach has helped draw significant interest to this emerging field, a rigorous treatment is still missing, and it is now understood that a black-box view limits the continued development of these systems. Our research focuses on developing tractable DL solutions for large-scale systems. Beyond the theoretical aspects, we target specific applications in subspace estimation, matrix factorization, tensor decomposition, and DL, where this approach brings together several attractive features.
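As a minimal illustration of such a non-convex problem, the sketch below factorizes a low-rank matrix by alternating least squares in NumPy: the joint objective in the two factors is non-convex, yet each alternating subproblem is a convex least-squares problem with a closed-form solution. The matrix size, rank, and noise level are toy choices for illustration, not part of our actual experiments.

```python
import numpy as np

# Low-rank matrix factorization M ~ U @ V.T by alternating least squares.
# The joint problem in (U, V) is non-convex, but each subproblem is a
# linear least-squares problem solved exactly. Toy sketch only.

rng = np.random.default_rng(0)
m, n, r = 50, 40, 3

# Ground-truth rank-r matrix plus a little noise.
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n)) \
    + 0.01 * rng.normal(size=(m, n))

U = rng.normal(size=(m, r))
V = rng.normal(size=(n, r))

for it in range(50):
    # Fix V, solve min_U ||M - U V^T||_F^2 (least squares in U, row by row).
    U = np.linalg.lstsq(V, M.T, rcond=None)[0].T
    # Fix U, solve min_V ||M - U V^T||_F^2 (least squares in V, column by column).
    V = np.linalg.lstsq(U, M, rcond=None)[0].T
    err = np.linalg.norm(M - U @ V.T) / np.linalg.norm(M)
    if it % 10 == 0:
        print(f"iter {it:2d}  relative error {err:.4f}")
```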