Osman Batur İnce, PhD student in Computer Science and Engineering

Can you briefly introduce yourself and share your background in AI research?

Hi, I am Osman Batur İnce. I am a KUIS AI Fellow and a second-year PhD student in the COMP department at Koç University. I graduated with a BSc in Computer Science from Bilkent University in 2022. I am lucky to work with Professors Aykut Erdem and Erkut Erdem on NLP and multimodal learning.

 

What initially sparked your interest in the field of AI? Was there a particular moment or experience that inspired you to pursue this area of study?

I had a vague interest when I was a junior student at Bilkent, but I wanted to explore mobile development first, so I built several apps during my first two years there. After talking with a childhood friend, we ran a small-scale fintech startup where I focused on mobile development and web design. Having already experienced some full-stack development, I thought that trying research was crucial before settling on a career path. Since I had enjoyed NLP in Bilkent's machine learning course, I approached a professor to do NLP research. I had more fun than I thought I would, so here we are :).

 

Could you tell us about your current research or thesis topic in AI? What motivated you to choose this specific area?

My research has multiple facets, but my main topic is compositional generalization. While models like ChatGPT show fascinating performance on a wide range of tasks, they mainly rely on superficial phrase-level statistics rather than algorithmic comprehension. For example, they do not show human-level compositional generalization, the ability to understand and produce novel compositions by combining known parts. I think generalization, multimodal learning, and interpretability are some of the most critical desiderata for current language models. Thus, my research focuses on compositional generalization, multimodal learning, and their intersection.
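To make the idea concrete, here is a toy illustration of my own (not taken from the interview, though it is in the spirit of benchmarks such as SCAN): every primitive and modifier appears in training, but the test command combines them in a way never seen before.

```python
# Toy compositional generalization split (illustrative example, in the spirit
# of benchmarks like SCAN). "walk", "jump", and "twice" all appear in training,
# but "jump twice" is only seen at test time.
train_pairs = {
    "walk": "WALK",
    "jump": "JUMP",
    "walk twice": "WALK WALK",
}
test_pairs = {
    "jump twice": "JUMP JUMP",  # novel composition of known parts
}

def compose(command: str) -> str:
    """The compositional rule a human applies effortlessly."""
    words = command.split()
    action = words[0].upper()
    repeat = 2 if "twice" in words[1:] else 1
    return " ".join([action] * repeat)

# The same rule explains both splits; the research question is whether a model
# trained only on `train_pairs` recovers it for `test_pairs`.
for command, target in {**train_pairs, **test_pairs}.items():
    assert compose(command) == target
```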

 

What are some of the key challenges you’ve encountered during your research? How have you been able to overcome them?

Doing research is a different paradigm from taking courses and getting good grades. You need to read papers to stay up to date, fix both obvious and not-so-obvious bugs, ponder research questions, and formulate solutions to them. I try to overcome these challenges by reading advice from influential researchers and listening to their talks. For me, the best way through is to dive into the problem and attack each subproblem one by one. Often, things work out. When they do not, resting a bit and staying resilient helps. Then, rinse and repeat :).

 

What excites you the most about the potential applications of AI in the real world? Are there any specific domains or industries where you believe AI can make a significant impact?

I think AI will contribute to almost every domain and industry in the world; a few exceptions might be hands-on trades like construction or hairdressing. For example, AI will be our first point of medical consultation, rather than the family doctor. Similar analogies apply to other domains such as law, finance, insurance, education, and psychology.

 

What are some of the recent advancements or breakthroughs in AI that you find particularly fascinating or promising? How do you think these advancements can contribute to the overall progress of the field?

I think parameter-efficient fine-tuning of large language models (LLMs) is one of the most important breakthroughs. While the idea can be traced back to 2019 or earlier, it took on new significance with the release of open-source LLMs. Now anyone can adapt a language model to whatever task they want and run it on standard consumer hardware. Platforms like HuggingFace make using these models and contributing to the field extremely simple. I think the open-source movement in AI is vital, and we should not let big companies create monopolies over this transformative technology.
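As a minimal sketch of what this looks like in practice (the base model and hyperparameters below are my own illustrative choices, not something discussed in the interview), LoRA-style parameter-efficient fine-tuning with the open-source transformers and peft libraries updates only a small set of injected low-rank weights:

```python
# Minimal sketch of LoRA-style parameter-efficient fine-tuning with the
# open-source `transformers` and `peft` libraries. The base model and the
# hyperparameters are illustrative assumptions, chosen so the example fits
# on standard consumer hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "gpt2"  # assumed small open model for illustration
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA freezes the original weights and injects small trainable low-rank
# matrices into the attention projections, so only a tiny fraction of
# parameters is updated during fine-tuning.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Training then proceeds with a standard training loop; only the LoRA parameters receive gradient updates, which is what makes fine-tuning feasible on consumer GPUs.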

 

What are your future career aspirations in the field of AI? Do you have any specific goals or areas of focus that you would like to pursue?

Honestly, I could become a research scientist in industry or an academic, depending on what life brings :). Since I love NLP broadly, I do not have an extremely specific, pinpoint interest. Currently, I would like to investigate parameter-efficient fine-tuning techniques for compositional generalization, but in the long run I would like to research generalization, multimodal learning, and interpretability.

 

Can you share a memorable experience or achievement from your AI research journey so far? What did you learn from that experience?

My first paper was accepted to Findings of EMNLP 2023 just a few months ago. We did not have strong results before the open house in May 2023, yet we submitted the paper only 1.5 months later. I learned a lot of technical details about training models and about the literature, but the most important lesson was resilience. Sometimes, we just need to keep pushing on.

How do you balance your academic workload, personal life, and the demands of your AI research? Do you have any strategies or tips that have helped you maintain a healthy work-life balance?

In my first year, I certainly could not balance anything. While the balance is slightly better this year, I am still far from where I want to be. The minimum I try to keep is going to the gym more than twice a week and staying in touch with my friends and family. Unfortunately, I am not in a position to give anybody advice ':).