August 14, 2023

Maks Ovsjanikov, École Polytechnique

Efficient and Robust Learning on Non-Rigid Surfaces and Graphs

In this talk, I will describe several approaches for learning on curved surfaces undergoing non-rigid deformations. First, I will give a brief overview of intrinsic convolution methods, and then present a different approach based on learned diffusion. The key properties of this approach are its robustness to changes in discretization and efficiency in enabling long-range communication, both in terms of memory and time. I will then showcase several applications, ranging from RNA surface segmentation to non-rigid shape correspondence, and present a recent extension for learning on graphs.
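
As a concrete illustration of the diffusion-based approach, here is a minimal sketch of a learned-diffusion layer: per-vertex features are diffused for a learned, per-channel time using a precomputed spectral basis of the mesh Laplacian. All names, shapes, and hyperparameters are illustrative assumptions, not the speaker's actual implementation.

```python
import torch
import torch.nn as nn

class LearnedDiffusion(nn.Module):
    """Diffuse per-vertex features for a learned diffusion time per channel.

    Diffusion is computed in a precomputed spectral basis (eigenpairs of the
    mesh Laplacian), so each channel is just rescaled by a heat-kernel decay.
    """

    def __init__(self, n_channels: int):
        super().__init__()
        # One learnable diffusion time per feature channel (kept positive).
        self.log_t = nn.Parameter(torch.zeros(n_channels))

    def forward(self, x, evals, evecs, mass):
        # x: (V, C) vertex features; evals: (K,) Laplacian eigenvalues;
        # evecs: (V, K) eigenvectors; mass: (V,) lumped vertex areas.
        t = self.log_t.exp()                              # (C,)
        x_spec = evecs.T @ (mass[:, None] * x)            # to spectral basis: (K, C)
        decay = torch.exp(-evals[:, None] * t[None, :])   # heat-kernel decay: (K, C)
        return evecs @ (decay * x_spec)                   # back to vertices: (V, C)
```

Because the layer only sees the Laplacian spectrum, it is largely insensitive to how the surface is meshed, and large learned times give a global receptive field at the cost of a few small matrix products, which is one way to read the robustness and efficiency claims above.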

July 18, 2023

Hwee Kuan Lee, A*STAR Singapore

AI Driven Molecular Simulations

Accurate simulations of molecules play an important role in material design, drug discovery and industrial chemical processing. In the field of condensed matter physics, molecular simulations enable us to understand critical phenomena. However, simulating molecules at large scale with conventional differential-equation integrators is limited by long simulation times: for protein simulations, we are currently off by about two orders of magnitude in time scale. In this seminar, we will discuss several techniques for using deep neural networks to learn the dynamics of molecules and hence circumvent the need for integrating the equations of motion. Deep learning techniques can generally speed up simulations by 10x to 100x while maintaining accuracy.
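
As a hedged sketch of the general idea (not the speaker's specific method), one can train a network to act as a coarse-time propagator: it maps the system state at time t directly to the state at t + Δt, where Δt spans many steps of a conventional integrator. The data format, architecture, and loss below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Propagator(nn.Module):
    """Predicts the state at t + Δt directly from the state at t."""

    def __init__(self, n_dof: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_dof, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * n_dof),
        )

    def forward(self, state):
        # state: (batch, 2 * n_dof) = concatenated positions and momenta.
        # Predict a residual, so the identity map is the easy default.
        return state + self.net(state)

def train_step(model, opt, state_t, state_t_plus_dt):
    # Training pairs come from trajectories produced by a conventional
    # fine-step integrator; Δt here spans many of those small steps.
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(state_t), state_t_plus_dt)
    loss.backward()
    opt.step()
    return loss.item()
```

At inference time the model is applied autoregressively; each network call replaces many fine integrator steps, which is where speedups of the quoted order come from.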

May 4, 2023

Yi-Zhe Song, University of Surrey

“Sketching” into the future of AI

Humans have always sketched, from prehistoric drawings on cave walls to present-day scribbles on phones and tablets. As Artificial Intelligence (AI) learns to see and perceive the world around us (aka computer vision), understanding how humans sketch plays an important and fundamental role in casting insights into the human visual system, and in turn informing AI model designs. This talk is all about sketches, summarising over a decade of research from the SketchX Research Lab at the University of Surrey. By the end, it hopes to convey how sketch research can inform the future of AI, both in terms of fundamental theory and applications that could revolutionise the status quo.

May 2, 2023

Jacob Andreas, MIT

Language Models as World Models

The extent to which language modeling induces representations of the world outside text—and the broader question of whether it is possible to learn about meaning from text alone—has remained a subject of ongoing debate across NLP and the cognitive sciences. I’ll present two studies from my lab showing that transformer language models encode structured and manipulable models of situations in their hidden representations. I’ll begin by presenting evidence from *semantic probing* indicating that LM representations of entity mentions encode information about entities’ dynamic state, and that these state representations are causally implicated in downstream language generation. Despite this, even today’s largest LMs are prone to glaring semantic errors: they hallucinate facts, contradict the input text, or even contradict their own previous outputs. Building on our understanding of how LMs build models of entities and events, I’ll present a *representation editing* model called REMEDI that can correct these errors directly in an LM’s representation space, in some cases making it possible to generate output that cannot be produced with a corresponding textual prompt, and to detect incorrect or incoherent output before it is generated.
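
The probing side of this can be illustrated with a minimal sketch: extract the LM's hidden representation at an entity mention and train a linear probe to read off the entity's state. The model choice, layer index, and mention-matching heuristic are illustrative assumptions, not the lab's actual setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

def mention_rep(text: str, mention: str, layer: int = 8) -> torch.Tensor:
    """Hidden state at the last token of `mention` inside `text`."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = lm(**ids).hidden_states[layer][0]   # (seq_len, dim)
    # Locate the mention's last token (simple subsequence heuristic).
    m_ids = tok(mention, add_special_tokens=False)["input_ids"]
    seq = ids["input_ids"][0].tolist()
    for i in range(len(seq) - len(m_ids), -1, -1):
        if seq[i:i + len(m_ids)] == m_ids:
            return hidden[i + len(m_ids) - 1]
    raise ValueError("mention not found")

# The probe itself: logistic regression from representation to a state
# label (e.g., open/closed). Train it with cross-entropy on
# (mention_rep(...), label) pairs; causal tests then *edit* the
# representation and check how downstream generation changes.
probe = torch.nn.Linear(lm.config.hidden_size, 2)
```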

April 4, 2023

Anna Ivanova (MIT) & Kyle Mahowald (UT Austin)

The Difference between Language and Thought

Today’s large language models (LLMs) routinely generate coherent, grammatical and seemingly meaningful paragraphs of text. This achievement has led to speculation that these models are—or will soon become—“thinking machines”, capable of performing tasks that require abstract knowledge and reasoning. In this talk, I will argue that, when evaluating LLMs, we should distinguish between their formal linguistic competence—knowledge of linguistic rules and patterns—and functional linguistic competence—understanding and using language in the world. This distinction stems from modern neuroscience research, which shows that these skills recruit different mechanisms in the human brain. I will show that, although LLMs are close to mastering formal linguistic competence, they still fail at many functional competence tasks, which require drawing on various non-linguistic cognitive skills. Finally, I will discuss why we humans are so tempted to mistake fluent speech for fluent thought.

March 21, 2023

Peter Dueben, European Centre for Medium-Range Weather Forecasts (ECMWF)

Machine Learning for Weather and Climate Prediction

This talk will provide an overview of the state of the art in machine learning for Earth system science. It will outline how conventional weather and climate models and machine-learned models will co-exist in the future, and the challenges that need to be addressed when building the best machine learning forecast systems.

March 14, 2023

Nazım Kemal Üre, ITU

Reinforcement Learning for Solving High Complexity Decision-Making Problems

Reinforcement learning (RL) has attracted significant interest in both academia and industry in recent years. The main premise of RL is the ability to control a system efficiently, without requiring any prior knowledge of the system's dynamics. That being said, using RL as an out-of-the-box approach only works for relatively simple problems with well-defined episodic structures, a small number of actions and dense reward signals. On the other hand, many real-world problems possess extremely delayed reward signals, gigantic action spaces and non-episodic dynamics. In this talk, we will show that such high-complexity decision-making problems can be solved by wrapping RL algorithms with other powerful machine learning techniques, such as curriculum learning, hierarchical decompositions and imitation learning. We will demonstrate the potential of these methods across three use cases: (i) autonomous driving in urban environments, (ii) playing real-time strategy games, and (iii) cloning fighter pilot behavior in air combat.
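
As an illustration of one such wrapper, here is a minimal sketch of imitation-learning warm-starting: a policy is first trained by behavior cloning on expert demonstrations, then handed to an RL algorithm for fine-tuning. The environment interface, data format, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # action logits

def pretrain_with_imitation(policy, demos, epochs=10, lr=1e-3):
    """Behavior cloning: supervised learning on (obs, expert_action) pairs.

    demos: iterable of batched tensors (obs: float, expert_action: long)
    collected from expert or human play.
    """
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, action in demos:
            loss = nn.functional.cross_entropy(policy(obs), action)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy  # hand off to an RL algorithm (e.g., PPO) for fine-tuning
```

Warm-starting this way sidesteps the sparse-reward exploration problem early in training: the agent begins near sensible expert behavior instead of acting randomly.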

Feb 28, 2023

Iryna Gurevych, TU Darmstadt

InterText: Modelling Text as a Living Object in Cross-Document Context

Digital texts are cheap to produce, fast to update, easy to interlink, and there are a lot of them. The ability to aggregate and critically assess information from connected, evolving texts is at the core of most intellectual work – from education to business and policy-making. Yet humans are not very good at handling large amounts of text. And while modern language models do a good job at finding documents, extracting information from them and generating natural-sounding language, progress in helping humans read, connect, and make sense of interrelated texts has been very limited.

Funded by the European Research Council, the InterText project advances natural language processing (NLP) by developing a general framework for modelling and analysing fine-grained relationships between texts – intertextual relationships. This crucial milestone for AI would allow tracing the origin and evolution of texts and ideas, and would enable a new generation of AI applications for text work and critical reading. Using scientific peer review as a prototypical model of collaborative knowledge construction anchored in text, this talk will present the foundations of our intertextual approach to NLP, from data modelling and representation learning to task design, practical applications and the intricacies of data collection. We will discuss the limitations of the state of the art, report on our latest findings and outline the open challenges on the path towards general-purpose AI for fine-grained cross-document analysis of texts.
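
One way to make the task concrete is a hedged sketch of a fine-grained cross-document classifier: given a peer-review sentence and a paper sentence, predict whether the former comments on the latter. The model choice and label set below are illustrative assumptions, not the InterText project's actual system.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = unrelated, 1 = review targets sentence
)

def score_link(review_sentence: str, paper_sentence: str) -> float:
    """Probability that the review sentence comments on the paper sentence."""
    inputs = tok(review_sentence, paper_sentence,
                 truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(-1)[0, 1].item()

# Scored over all (review sentence, paper sentence) pairs, this yields a
# bipartite graph of intertextual links that can be aggregated and analysed.
```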