Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
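For reference, the setting lives in the site's Jekyll configuration file. A minimal sketch of the relevant _config.yml fragment (all other keys omitted):

```yaml
# _config.yml — Jekyll site configuration
# When false, posts with a date in the future are excluded from the build.
future: false
```

Rebuilding the site (e.g. with jekyll build) after this change drops any future-dated posts from the generated pages until their publish date arrives.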

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

portfolio

publications

Learn What Is Possible, Then Choose What Is Best: Disentangling One-To-Many Relations in Language Through Text-based Games

Published in FINDINGS-EMNLP, 2022

Pre-training language models on large self-supervised corpora, followed by task-specific fine-tuning, has become the dominant paradigm in NLP. These pre-training datasets often have a one-to-many structure—e.g. in dialogue there are many valid responses for a given context. However, only some of these responses will be desirable in our downstream task. This raises the question of how we should train the model such that it can emulate the desirable behaviours, but not the undesirable ones. Current approaches train in a one-to-one setup—only a single target response is given for a single dialogue context—leading to models only learning to predict the average response, while ignoring the full range of possible responses. Using text-based games as a testbed, our approach, PASA, uses discrete latent variables to capture the range of different behaviours represented in our larger pre-training dataset. We then use knowledge distillation to distil the posterior probability distribution into a student model. This probability distribution is far richer than learning from only the hard targets of the dataset, and thus allows the student model to benefit from the richer range of actions the teacher model has learned. Results show up to 49% empirical improvement over the previous state-of-the-art model on the Jericho Walkthroughs dataset.
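The distillation step described in the abstract can be illustrated in miniature: the student is trained against the teacher's full (soft) posterior over actions rather than a single hard target, which preserves the one-to-many structure. The toy distributions and function names below are hypothetical, purely for illustration; the paper's actual models are neural networks.

```python
import math

def soft_cross_entropy(teacher_probs, student_probs):
    """Cross-entropy of the student's predictions against the teacher's
    full (soft) distribution, as used in knowledge distillation."""
    return -sum(t * math.log(s)
                for t, s in zip(teacher_probs, student_probs) if t > 0)

def hard_cross_entropy(target_index, student_probs):
    """Standard one-hot training signal: a single target action only."""
    return -math.log(student_probs[target_index])

# Toy teacher posterior over three valid actions for one context.
teacher = [0.6, 0.3, 0.1]

# A student that has learned the full range of behaviours...
good_student = [0.55, 0.35, 0.10]
# ...versus one that collapsed onto the single most frequent action.
collapsed_student = [0.98, 0.01, 0.01]

# Matching the whole distribution is rewarded; collapsing is penalised.
assert soft_cross_entropy(teacher, good_student) < \
       soft_cross_entropy(teacher, collapsed_student)
```

Under a one-to-one (hard-target) loss, both students could look equally good on the single labelled action; the soft target is what distinguishes them.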

Recommended citation: Learn What Is Possible, Then Choose What Is Best: Disentangling One-To-Many Relations in Language Through Text-based Games (Towle & Zhou, Findings 2022). http://academicpages.github.io/files/2022-findings-emnlp.pdf

Model-Based Simulation for Optimising Smart Reply

Published in ACL, 2023

Smart Reply (SR) systems present a user with a set of replies, of which one can be selected in place of having to type out a response. To perform well at this task, a system should be able to effectively present the user with a diverse set of options, to maximise the chance that at least one of them conveys the user’s desired response. This is a significant challenge, due to the lack of datasets containing sets of responses to learn from. As a result, previous work has focused largely on post-hoc diversification, rather than explicitly learning to predict sets of responses. Motivated by this problem, we present a novel method, SimSR, that employs model-based simulation to discover high-value response sets, by simulating possible user responses with a learned world model. Unlike previous approaches, this allows our method to directly optimise the end-goal of SR: maximising the relevance of at least one of the predicted replies. Empirically, on two public datasets, when compared to SoTA baselines, our method achieves up to 21% and 18% improvement in ROUGE score and Self-ROUGE score respectively.
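The objective described above, maximising the relevance of at least one reply in the set, can be sketched as a tiny search over candidate sets, scoring each by its expected best-reply relevance over simulated user responses. Everything here is a hypothetical toy (word-overlap relevance, hand-written "simulated" responses, exhaustive subset search); the paper uses a learned world model and a more efficient procedure.

```python
from itertools import combinations

def expected_max_relevance(reply_set, simulated_responses, relevance):
    """Expected relevance of the *best* reply in the set, averaged over
    responses drawn from a (here: hand-written toy) world model."""
    return sum(max(relevance(r, sim) for r in reply_set)
               for sim in simulated_responses) / len(simulated_responses)

def select_reply_set(candidates, k, simulated_responses, relevance):
    """Exhaustively search k-sized subsets for the highest-value set."""
    return max(combinations(candidates, k),
               key=lambda s: expected_max_relevance(
                   s, simulated_responses, relevance))

# Toy relevance: shared-word count between a reply and a user response.
def overlap(a, b):
    return len(set(a.split()) & set(b.split()))

candidates = ["yes sounds good", "no sorry", "see you at noon"]
simulated = ["yes that sounds good", "no i cannot"]

# A diverse set covers both simulated user intents (accept and decline).
best = select_reply_set(candidates, 2, simulated, overlap)
assert set(best) == {"yes sounds good", "no sorry"}
```

Note how the max inside the objective rewards diversity directly: a set of two near-duplicate "yes" replies would cover only one of the simulated intents, so it scores lower than a set spanning both.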

Recommended citation: Model-Based Simulation for Optimising Smart Reply (Towle & Zhou, ACL 2023). http://academicpages.github.io/files/2023-acl.pdf

End-to-End Autoregressive Retrieval via Bootstrapping for Smart Reply Systems

Published in FINDINGS-EMNLP, 2023

Reply suggestion systems represent a staple component of many instant messaging and email systems. However, the requirement to produce sets of replies, rather than individual replies, makes the task poorly suited for out-of-the-box retrieval architectures, which only consider individual message-reply similarity. As a result, these systems often rely on additional post-processing modules to diversify the outputs. However, these approaches are ultimately bottlenecked by the performance of the initial retriever, which in practice struggles to present a sufficiently diverse range of options to the downstream diversification module, leading to the suggestions being less relevant to the user. In this paper, we consider a novel approach that radically simplifies this pipeline through an autoregressive text-to-text retrieval model that learns the smart reply task end-to-end from a dataset of (message, reply set) pairs obtained via bootstrapping. Empirical results show this method consistently outperforms a range of state-of-the-art baselines across three datasets, corresponding to a 5.1%-17.9% improvement in relevance, and a 0.5%-63.1% improvement in diversity compared to the best baseline approach. We make our code publicly available.

Recommended citation: End-to-End Autoregressive Retrieval via Bootstrapping for Smart Reply Systems (Towle & Zhou, FINDINGS-EMNLP 2023). http://academicpages.github.io/files/2023-findings-emnlp.pdf

talks

teaching

Programming and Algorithms (COMP1005)

Undergraduate course, University of Nottingham, School of Computer Science, 2023

This module covers basic programming principles using the C programming language. Topics covered include: types, variables, expressions, control structures, functions and data structures. Students also learn the fundamentals of software development, including documentation, testing, debugging and version control.

Systems and Architecture (COMP1006)

Undergraduate course, University of Nottingham, School of Computer Science, 2023

This module considers how computers operate and can be programmed at the lowest level (i.e. through assembly language).

Computer Fundamentals (COMP1007)

Undergraduate course, University of Nottingham, School of Computer Science, 2023

This module considers how computers work and can be built from scratch using digital logic.

Professional Ethics in Computing (COMP3020)

Undergraduate course, University of Nottingham, School of Computer Science, 2023

This module considers the ethical dimension of various computer science verticals such as privacy, security, AI and the environment. The module is delivered through a combination of lectures and workshops.

Human-AI Interaction (COMP3074)

Undergraduate course, University of Nottingham, School of Computer Science, 2023

This module considers how to build interactive AI systems in the context of natural language and speech processing.