Tutorials

T1: Developments in Fair Resource Allocation

December 5, 2022

9AM - 12PM

This tutorial will introduce both classic and recent results on fair resource allocation. To start, we will outline prominent fairness notions for allocating divisible resources (a setting commonly known as “cake cutting”) and indivisible resources, together with their properties in the respective settings. Moving on, we will focus on recent developments in the line of work that combines divisible and indivisible resources. First, we will discuss fair division with money, in which money can be viewed as a homogeneous divisible good. We will next touch on the axiomatic study of fairness when allocating a mix of divisible and indivisible resources. The last part of this tutorial will focus on matching algorithms and their properties. Specifically, we will examine important solution concepts and axiomatic properties such as individual rationality, Pareto efficiency, core stability, and strategyproofness. We will also explore fundamental matching algorithms, including Kuhn’s algorithm, Gale’s Top Trading Cycles (TTC) algorithm, and the Deferred Acceptance algorithm.
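To make the last part concrete, below is a minimal Python sketch of the Deferred Acceptance (Gale-Shapley) algorithm, one of the matching algorithms listed above. The preference data is purely illustrative, and the sketch assumes equally many proposers and receivers with complete preference lists; the tutorial's own treatment may differ in presentation.

```python
# Minimal sketch of Deferred Acceptance (Gale-Shapley).
# Assumes equal numbers of proposers and receivers, each with a
# complete preference list. Preferences below are illustrative only.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Compute a stable one-to-one matching (proposer-optimal)."""
    # Rank lookup so each receiver can compare proposals in O(1).
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)          # proposers not yet held by anyone
    next_choice = {p: 0 for p in free}   # next receiver each proposer will try
    match = {}                           # receiver -> proposer currently held

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                 # r tentatively accepts p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])        # r trades up; old partner is free again
            match[r] = p
        else:
            free.append(p)               # r rejects p; p will propose elsewhere

    return {p: r for r, p in match.items()}

# Illustrative (hypothetical) preferences:
proposers = {"a": ["x", "y"], "b": ["y", "x"]}
receivers = {"x": ["b", "a"], "y": ["a", "b"]}
print(deferred_acceptance(proposers, receivers))  # each proposer gets their top choice here
```

The proposer-proposing variant shown here is known to yield the stable matching that is best for every proposer, one of the axiomatic properties the tutorial examines.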

T2: A Practical Guide to Knowledge Graph Construction from Technical Short Text

December 5, 2022

9AM - 12PM

Have you ever wondered how to harness the significant volume of knowledge buried within unstructured text? Approximately 80% of all data in organisations is unstructured, a large portion of which exists in the form of technical language such as doctors’ notes, maintenance work orders, and traffic reports. Natural Language Processing (NLP) provides the means to construct knowledge graphs from unstructured short text, making it possible to query the knowledge held within that text. Knowledge graphs are employed by a wide range of top companies – eBay, Walmart and Volvo to name a few. But what exactly is a knowledge graph? Why are leading companies actively building knowledge graphs, and how is one created?

This tutorial provides a practical guide to knowledge graphs. We will begin by providing an overview of graph databases, highlighting their unique advantages when compared to structured data models such as relational tables. We will then detail the underlying natural language processing techniques involved in knowledge graph construction from text, namely named entity recognition (NER) and relation extraction (RE). In the second half of the tutorial, we will motivate the need for knowledge graphs via a simple, practical example in the maintenance domain. This Python notebook-based example will demonstrate how noisy, unstructured text such as maintenance work orders can be transformed into a knowledge graph, allowing unstructured data to be visualised and queried and helping domain experts make informed business decisions.
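As a taste of that pipeline, here is a minimal Python sketch of NER followed by naive relation extraction into a graph, assuming the spaCy and networkx libraries are installed. The sample work orders and the rule that simply links entities found in the same order are illustrative stand-ins, not the tutorial's actual models.

```python
# Minimal sketch: unstructured short text -> entities -> graph.
# Assumes spaCy (with the en_core_web_sm model downloaded) and networkx.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")  # generic model; technical text benefits from a domain-tuned one

# Hypothetical maintenance work orders:
work_orders = [
    "Hydraulic pump inspected by Smith on Monday.",
    "Smith replaced the seal at Plant 3 on Tuesday.",
]

graph = nx.Graph()
for text in work_orders:
    doc = nlp(text)
    ents = [ent.text for ent in doc.ents]  # named entity recognition (NER)
    # Naive relation extraction: link consecutive entities in the same order.
    for e1, e2 in zip(ents, ents[1:]):
        graph.add_edge(e1, e2, source=text)

print(graph.nodes())
print(graph.edges(data=True))
```

A real system would replace the co-occurrence rule with a trained RE model, but the store-as-nodes-and-edges structure that makes the text queryable is the same.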

T3: Memory-based Reinforcement Learning

December 5, 2022

1.30PM - 4.30PM

Reinforcement learning (RL) is a branch of artificial intelligence in which autonomous agents learn to maximise predefined rewards from the environment. Despite immense successes, such as breaking human records in games, the current training of RL agents is prohibitively expensive in terms of time, computing resources, and samples. For example, it requires trillions of playing sessions to reach human-level performance on simple video games. The problem of sample inefficiency is exacerbated in stochastic, partially observable, noisy, or long-horizon real-world environments, whereas humans can show excellent performance in such circumstances without much training. This shortcoming of RL agents can be attributed to the lack of efficient, human-like memory mechanisms that hasten learning by smartly utilising past observations. This tutorial presents recent advances in memory-based reinforcement learning, where emerging memory systems enable sample-efficient, adaptive, and human-like RL agents. The first part of the tutorial covers the basics of RL and raises the sample-inefficiency issue. The second part presents a taxonomy of the memory mechanisms that recent sample-efficient RL methods employ to reduce the number of training samples and to resemble human memory. The next three parts study the benefits that memory can provide to RL agents, categorised as (1) quick access to critical experiences; (2) a better representation of observation contexts; and (3) intrinsic motivation to explore. Finally, the tutorial concludes with a discussion of open challenges and promising future research on memory-based RL.
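For readers new to the area, the sketch below shows the simplest memory mechanism in category (1): a uniform experience-replay buffer. The class name and sizes are illustrative; the prioritised and episodic memories discussed in the tutorial build on the same store-and-sample idea.

```python
# Minimal sketch of an experience-replay memory, the simplest
# "quick access to past experiences" mechanism in RL.
import random
from collections import deque

class ReplayMemory:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling; prioritised replay would instead weight
        # transitions by how surprising (e.g. high TD error) they are.
        return random.sample(self.buffer, batch_size)

# Illustrative usage with dummy transitions:
memory = ReplayMemory()
for t in range(100):
    memory.store(state=t, action=0, reward=1.0, next_state=t + 1, done=False)
batch = memory.sample(32)  # reused for many gradient updates, saving samples
```

Reusing each stored transition across many updates is precisely how such memories reduce the number of environment interactions the agent needs.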

T4: Spoken Language Understanding: Recent Advances and Future Directions

December 5, 2022

1.30PM - 4.30PM

When a human speaks to a machine, how does the machine extract meaning from the communication? This is an important AI task, as it enables the machine to construct a sensible answer or perform a useful action for the human. Meaning is represented at the sentence level, whose identification is known as intent detection, and at the word level, a labelling task called slot filling. This dual-level joint task requires innovative thinking about natural language and deep learning network design, and as a result many approaches have been tried. In this tutorial we will discuss how the joint task is set up and introduce the basics of Natural Language Processing (NLP) and Deep Learning. We will cover the datasets, experiments, and metrics used in the field. We will describe how the machine uses the latest NLP and Deep Learning techniques to address the task, including recurrent and non-recurrent (attention-based Transformer) networks and pre-trained models (e.g. BERT). We will then look in detail at a network that allows the two levels of the task to interact explicitly to boost performance. We will do a code walk-through of a Python notebook for this model, and attendees will have an opportunity to do some light coding tasks on the model to further their understanding.
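As background for the code walk-through, here is a minimal PyTorch sketch of a joint intent-detection and slot-filling model of the kind described, assuming the Hugging Face transformers library. The two linear heads over a shared BERT encoder illustrate the dual-level setup; the interactive network covered in the session is more elaborate, and the label counts here are placeholders.

```python
# Minimal sketch of a joint intent/slot model on a shared BERT encoder.
# Assumes torch and the Hugging Face transformers library are installed.
import torch.nn as nn
from transformers import BertModel

class JointIntentSlotModel(nn.Module):
    def __init__(self, num_intents, num_slot_labels, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)    # sentence-level label
        self.slot_head = nn.Linear(hidden, num_slot_labels)  # one label per token

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)    # [CLS] sentence summary
        slot_logits = self.slot_head(out.last_hidden_state)    # per-token states
        return intent_logits, slot_logits

# Training typically sums a sentence-level and a token-level
# cross-entropy loss, so both tasks shape the shared encoder.
```

Sharing one encoder lets the two tasks inform each other implicitly; the network examined in the tutorial goes further by making the interaction between the levels explicit.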
