Luke Dickens - Academia.edu
Papers by Luke Dickens
Expert Review of Medical Devices, Sep 1, 2013
Blood glucose control, e.g. in diabetes mellitus or severe illness, requires strict adherence to a protocol of food, insulin administration and exercise personalized to each patient. An artificial pancreas for automated treatment could improve the quality of glucose control and patients' independence. The components required for an artificial pancreas are (1) continuous glucose monitoring (CGM), (2) smart controllers and (3) insulin pumps delivering the optimal amount of insulin. In recent years, medical devices for CGM and insulin administration have undergone rapid progress and are now commercially available. Yet, clinically available devices still require regular patient or caregiver attention, as they operate in open-loop control with frequent user intervention. Dosage-calculating algorithms are currently being studied in intensive care patients, for short overnight control to supplement conventional insulin delivery, and for short periods where patients rest and follow a prescribed food regime. Fully automated algorithms that can respond to the varying activity levels seen in outpatients, with unpredictable and unreported food intake, and that provide the necessary personalized control for individuals are currently beyond the state of the art. Here we review and discuss reinforcement learning algorithms that control insulin in a closed loop to provide individual insulin dosing regimens that are reactive to the immediate needs of the patient. Maintaining normoglycemia is one of the major challenges in the treatment of patients with diabetes mellitus. As a key performance indicator, an average plasma glucose concentration corresponding to glycosylated hemoglobin (HbA1C) values < 7% was recommended by the American Diabetes Association and has been shown to reduce the development and progression of microvascular and cardiovascular complications by 76%. Conversely, treatment of hyperglycemia with insulin may lead to hypoglycemia that, in turn, may contribute to clinically relevant complications. This implies that calculation of precise insulin dosages is critical, must be individually adapted, and should be reactive to the patient's glucose level. Worldwide, more than 371 million people have to manage their diabetes with both constant glucose monitoring and insulin dosing, which affect their quality of life. This has led to intensive research on the development of an artificial pancreas since the 1970s. Such a system is composed of three components: 1. continuous glucose monitoring (CGM) using an implanted sensor, 2. an insulin pump delivering insulin, and 3. an algorithm calculating the correct dose of insulin to be applied. Since the development of the first artificial pancreas system, major improvements have been made, but the system still needs development before it can be routinely used in clinical practice. To date, one of the major limitations on the successful use of automated dosing in clinical as well as outpatient settings is the demand for a flexible algorithm that adapts the artificial pancreas to the special needs of each single patient. In this article, we briefly describe the major challenges for development of an artificial pancreas system and discuss the application of machine learning algorithms as a potential approach to increase the flexibility of the system.
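To make the closed-loop idea above concrete, the following sketch pairs a toy, non-physiological glucose model with a tabular Q-learning agent that picks an insulin dose from CGM-style readings. The simulator, state buckets, dose set and reward shape are all illustrative assumptions, not any of the controllers reviewed in the article.

```python
import random

# Hypothetical one-compartment glucose response; not a validated physiological model.
def glucose_step(glucose, insulin_dose, meal_carbs):
    drift = 0.05 * (140.0 - glucose)                      # slow drift towards ~140 mg/dL
    return glucose + drift + 2.0 * meal_carbs - 15.0 * insulin_dose + random.gauss(0, 2)

def bucket(glucose):                                      # coarse CGM reading buckets (state)
    return max(0, min(int(glucose // 20), 15))

DOSES = [0.0, 0.5, 1.0, 2.0]                              # insulin units per step (illustrative)
Q = {}                                                    # tabular action values

def reward(glucose):                                      # penalise deviation from normoglycemia
    return -abs(glucose - 100.0) - (100.0 if glucose < 60.0 else 0.0)

alpha, gamma, epsilon = 0.1, 0.95, 0.1
for episode in range(500):
    g = 120.0
    for t in range(200):
        s = bucket(g)
        values = [Q.get((s, a), 0.0) for a in range(len(DOSES))]
        a = random.randrange(len(DOSES)) if random.random() < epsilon else values.index(max(values))
        g_next = glucose_step(g, DOSES[a], meal_carbs=random.choice([0.0, 0.0, 10.0]))
        target = reward(g_next) + gamma * max(Q.get((bucket(g_next), b), 0.0) for b in range(len(DOSES)))
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
        g = g_next
```

A practical controller would of course need a validated patient model, safety constraints on dosing, and continuous rather than tabular state handling; the sketch only shows the reinforcement-learning loop structure.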
Annual Information Security Symposium, Mar 30, 2010
arXiv (Cornell University), Jun 15, 2023
Whilst cooking is a very important human activity, little consideration has been given to how we can formalize recipes for use in a reasoning framework. We address this need by proposing a graphical formalization that captures the comestibles (ingredients, intermediate food items, and final products) and the actions on comestibles in the form of a labelled bipartite graph. We then propose formal definitions for comparing recipes, for composing recipes from subrecipes, and for deconstructing recipes into subrecipes. We also introduce and compare two formal definitions for substitution into recipes, which is required when ingredients are missing, when some actions are not possible, or when there is a need to change the final product somehow.
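As a rough illustration of the labelled bipartite view (comestible nodes connected to action nodes), the sketch below encodes a two-step bread recipe and a naive ingredient substitution. The data structures and helper functions are assumptions for this example, not the paper's formal definitions.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Comestible:          # ingredient, intermediate food item, or final product
    name: str

@dataclass
class Action:              # an action node, with edges to the comestibles it consumes/produces
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

@dataclass
class Recipe:              # a recipe as a bipartite graph: comestible nodes and action nodes
    actions: list

    def ingredients(self):
        produced = {c for a in self.actions for c in a.outputs}
        consumed = {c for a in self.actions for c in a.inputs}
        return consumed - produced                 # comestibles that no action produces

    def substitute(self, old, new):
        """Naive substitution: replace one comestible everywhere it occurs."""
        for a in self.actions:
            a.inputs = [new if c == old else c for c in a.inputs]
            a.outputs = [new if c == old else c for c in a.outputs]

flour, water, dough, bread = (Comestible(n) for n in ("flour", "water", "dough", "bread"))
loaf = Recipe([Action("mix", [flour, water], [dough]), Action("bake", [dough], [bread])])
print(loaf.ingredients())                          # {flour, water}
loaf.substitute(flour, Comestible("rye flour"))    # toy stand-in for the paper's substitution notions
```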
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Sep 9, 2019
Early detection of cognitive decline is important for timely intervention and treatment strategies to prevent further deterioration or the development of more severe cognitive impairment, as well as to identify at-risk individuals for research. In this paper, we explore the feasibility of using data collected from built-in mobile phone sensors and gameplay performance in mobile-game-based cognitive assessments. Twenty-two healthy participants took part in a two-session experiment in which they were asked to take a series of standard cognitive assessments followed by playing three popular mobile games, during which user-game interaction data were passively collected. The results from bivariate analysis reveal correlations between our proposed features and scores obtained from paper-based cognitive assessments. Our results show that touch gestural interaction and device motion patterns can be used as supplementary features in mobile-game-based cognitive measurement. This study provides initial evidence that game-related metrics on existing off-the-shelf games have the potential to be used as proxies for conventional cognitive measures, specifically for visuospatial function, visual search capability, mental flexibility, memory and attention. CCS Concepts: • Human-centered computing → Touch screens; Haptic devices; Ubiquitous and mobile computing design and evaluation methods; • Applied computing → Health informatics.
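The bivariate analysis step can be illustrated with a short sketch that correlates synthetic gameplay features with a synthetic paper-based assessment score. The feature names, units and score are invented placeholders; only the Pearson/Spearman correlation step mirrors the analysis described.

```python
import numpy as np
from scipy import stats

# Synthetic placeholders for gameplay features and a paper-based assessment score.
rng = np.random.default_rng(0)
n_participants = 22
features = {
    "mean_swipe_speed":  rng.normal(300, 40, n_participants),    # px/s, hypothetical
    "device_motion_rms": rng.normal(0.15, 0.03, n_participants), # g, hypothetical
}
assessment_score = rng.normal(35, 8, n_participants)              # hypothetical paper-based score

for name, values in features.items():
    r, p = stats.pearsonr(values, assessment_score)
    rho, p_s = stats.spearmanr(values, assessment_score)
    print(f"{name}: Pearson r={r:.2f} (p={p:.3f}), Spearman rho={rho:.2f} (p={p_s:.3f})")
```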
Behavior Research Methods, Feb 11, 2021
The various group and category memberships that we hold are at the heart of who we are. They have been shown to affect our thoughts, emotions, behavior, and social relations in a variety of social contexts, and have more recently been linked to our mental and physical well-being. Questions remain, however, over the dynamics between different group memberships and the ways in which we cognitively and emotionally acquire these. In particular, assessment methods that can be applied to naturally occurring data, such as online interactions, are currently missing, making it difficult to understand the dynamics and impact of group memberships in naturalistic settings. To provide researchers with a method for assessing specific group memberships of interest, we have developed ASIA (Automated Social Identity Assessment), an analytical protocol that uses linguistic style indicators in text to infer which group membership is salient in a given moment, accompanied by an in-depth open-source Jupyter Notebook tutorial (https://github.com/Identity-lab/Tutorial-on-salient-social-Identity-detection-model). Here, we first discuss the challenges in the study of salient group memberships and how ASIA can address some of these. We then demonstrate how our analytical protocol can be used to create a method for assessing which of two specific group memberships (parents and feminists) is salient using online forum data, and how the quality (validity) of the measurement and its interpretation can be tested using two further corpora as well as an experimental study. We conclude by discussing future developments in the field.
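ASIA's full protocol is documented in the linked tutorial; purely as a hedged illustration of the general recipe (text features in, salient identity label out), the sketch below trains a toy classifier on invented forum posts, with a generic TF-IDF model standing in for ASIA's linguistic style indicators.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented forum posts labelled with the salient identity (not ASIA's data or features).
posts = [
    "My toddler finally slept through the night, any tips for weaning?",
    "School pickup runs are chaos but the kids loved the bake sale.",
    "Equal pay is still a structural issue and the data backs that up.",
    "Representation on the panel mattered; the policy debate needs women's voices.",
]
labels = ["parent", "parent", "feminist", "feminist"]

# A plain word/bigram model stands in here for ASIA's linguistic *style* indicators.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(posts, labels)
print(model.predict(["Nap schedules are ruining my week"]))   # toy model; output not guaranteed
```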
International Journal of Approximate Reasoning, Nov 1, 2016
Probabilistic programming is an area of research that aims to develop general inference algorithms for probabilistic models expressed as probabilistic programs whose execution corresponds to inferring the parameters of those models. In this paper, we introduce a probabilistic programming language (PPL) based on abductive logic programming for performing inference in probabilistic models involving categorical distributions with Dirichlet priors. We encode these models as abductive logic programs enriched with probabilistic definitions and queries, and show how to execute and compile them to Boolean formulas. Using the latter, we perform generalized inference using one of two proposed Markov chain Monte Carlo (MCMC) sampling algorithms: an adaptation of uncollapsed Gibbs sampling from related work and a novel collapsed Gibbs sampling (CGS). We show that CGS converges faster than the uncollapsed version on a latent Dirichlet allocation (LDA) task using synthetic data. On similar data, we compare our PPL with LDA-specific algorithms and other PPLs. We find that all methods except one perform similarly, and that the more expressive the PPL, the slower it is. We illustrate applications of our PPL on real data in two variants of LDA models (Seed and Cluster LDA), and in the repeated insertion model (RIM). In the latter, our PPL yields similar conclusions to inference with EM for Mallows models.
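Collapsed Gibbs sampling for LDA integrates out the document-topic and topic-word parameters and resamples each topic indicator from count statistics. The stand-alone NumPy sketch below shows that counting scheme on a tiny synthetic corpus; it is not the paper's abductive-logic-programming pipeline.

```python
import numpy as np

def collapsed_gibbs_lda(docs, n_topics, n_vocab, alpha=0.1, beta=0.01, n_iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA: topic indicators are resampled with the
    theta/phi parameters integrated out (illustrative stand-alone version)."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))        # document-topic counts
    nkw = np.zeros((n_topics, n_vocab))          # topic-word counts
    nk = np.zeros(n_topics)                      # topic totals
    z = [rng.integers(n_topics, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Collapsed conditional: p(z=k | rest) ∝ (ndk+alpha) * (nkw+beta)/(nk+V*beta)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw

# Tiny synthetic corpus: each document is a list of word ids from a 6-word vocabulary.
docs = [[0, 1, 0, 2], [3, 4, 5, 4], [0, 2, 1], [5, 3, 4]]
ndk, nkw = collapsed_gibbs_lda(docs, n_topics=2, n_vocab=6)
print(np.argmax(ndk, axis=1))    # dominant topic per document
```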
Artificial Intelligence, Oct 1, 2020
We propose the action language EPEC (Epistemic Probabilistic Event Calculus), which supports probabilistic, epistemic reasoning about narratives of action occurrences and environmentally triggered events, and in particular facilitates reasoning about future belief-conditioned actions and their consequences in domains that include both perfect and imperfect sensing actions. To provide a declarative semantics for sensing and belief-conditioned actions in a probabilistic, narrative setting, we introduce the novel concept of an epistemic reduct. We then formally compare our language with two established frameworks for probabilistic reasoning about action: the action language PAL by Baral et al., and the extension of the situation calculus to reason about noisy sensors and effectors by Bacchus et al. In both cases we prove a correspondence with EPEC for a class of domains representable in both frameworks.
Lecture Notes in Computer Science, 2017
We present PEC, an Event Calculus (EC) style action language for reasoning about probabilistic causal and narrative information. It has an action language style syntax similar to that of the EC variant Modular-E. Its semantics is given in terms of possible worlds, which constitute possible evolutions of the domain, and builds on that of Epistemic Functional EC (EFEC). We also describe an ASP implementation of PEC and show the sense in which it is sound and complete.
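PEC's semantics is declarative and its implementation is in ASP; purely to illustrate the possible-worlds reading, the sketch below enumerates the worlds of an invented two-step probabilistic narrative (a coin with an unreliable flip action) and sums their probabilities. It is an assumption-laden toy, not PEC itself.

```python
# Toy probabilistic narrative: a coin that may start heads-up, and a flaky "flip" action.
initial_dist = {"heads": 0.5, "tails": 0.5}          # distribution over initial states

def flip_outcomes(state):
    # Flipping succeeds with probability 0.8, otherwise the coin is unchanged.
    other = "tails" if state == "heads" else "heads"
    return {other: 0.8, state: 0.2}

def possible_worlds(n_flips):
    """Enumerate (world, probability) pairs, where a world is a sequence of states."""
    worlds = [((s,), p) for s, p in initial_dist.items()]
    for _ in range(n_flips):
        worlds = [(hist + (s2,), p * q)
                  for hist, p in worlds
                  for s2, q in flip_outcomes(hist[-1]).items()]
    return worlds

# Probability that the coin shows heads after two flip occurrences.
prob_heads = sum(p for hist, p in possible_worlds(2) if hist[-1] == "heads")
print(round(prob_heads, 3))                           # 0.5 by symmetry
```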
Human-defined concepts are inherently transferable, but it is not clear under what conditions they can be modelled effectively by non-symbolic artificial learners. This paper argues that for a transferable concept to be learned, the system of relations that define it must be coherent across domains and properties. That is, they should be consistent with respect to relational constraints, and this consistency must extend beyond the representations encountered in the source domain. Further, where relations are modelled by differentiable functions, their gradients must conform: the functions must at times move together to preserve consistency. We propose a Partial Relation Transfer (PRT) task which exposes how well relation decoders model these properties, and exemplify this with an ordinality prediction transfer task, including a new data set for the transfer domain. We evaluate this on existing relation-decoder models, as well as a novel model designed around the principles of consistency and gradient conformity. Results show that consistency across broad regions of input space indicates good transfer performance, and that good gradient conformity facilitates consistency.
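One way to probe gradient conformity is to compare the input gradients of two relation decoders at sampled points. The PyTorch sketch below does this with two small, untrained stand-in decoders, so the architecture and the cosine-similarity measure are illustrative assumptions rather than the paper's model.

```python
import torch
import torch.nn.functional as F

# Two hypothetical relation decoders over a shared 8-dimensional embedding space
# (illustrative stand-ins for the relation functions discussed above).
decoder_a = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
decoder_b = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

def gradient_conformity(x):
    """Per-example cosine similarity between the decoders' input gradients:
    a rough proxy for whether the two relation functions 'move together' at x."""
    x = x.clone().requires_grad_(True)
    grad_a = torch.autograd.grad(decoder_a(x).sum(), x)[0]
    grad_b = torch.autograd.grad(decoder_b(x).sum(), x)[0]
    return F.cosine_similarity(grad_a, grad_b, dim=-1)

points = torch.randn(32, 8)            # sample points across the input space
print(gradient_conformity(points).mean())
```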
arXiv (Cornell University), Nov 13, 2020
We propose a new model for relational VAE semi-supervision capable of balancing disentanglement and low-complexity modelling of relations with different symbolic properties. We compare the relative benefits of relation-decoder complexity and latent space structure for both inductive and transductive transfer learning. Our results depict a complex picture in which enforcing structure on semi-supervised representations can greatly improve zero-shot transductive transfer, but may be less favourable for, or even negatively impact, the capacity for inductive transfer.
arXiv (Cornell University), Mar 7, 2019
Nonlinear optimal control problems are often solved with numerical methods that require knowledge of the system's dynamics, which may be difficult to infer, and that carry a large computational cost associated with iterative calculations. We present a novel neurobiologically inspired hierarchical learning framework, Reinforcement Learning Optimal Control, which operates on two levels of abstraction and utilises a reduced number of controllers to solve nonlinear systems with unknown dynamics in continuous state and action spaces. Our approach is inspired by research at two levels of abstraction: first, at the level of limb coordination, human behaviour is explained by linear optimal feedback control theory; second, in cognitive tasks involving learning symbolic-level action selection, humans learn such problems using model-free and model-based reinforcement learning algorithms. We propose that combining these two levels of abstraction leads to a fast global solution of nonlinear control problems using a reduced number of controllers. Our framework learns the local task dynamics from naive experience and forms locally optimal infinite-horizon Linear Quadratic Regulators which produce continuous low-level control. A top-level reinforcement learner uses the controllers as actions and learns how best to combine them in state space while maximising a long-term reward. A single optimal control objective function drives high-level symbolic learning by providing training signals on the desirability of each selected controller. We show that a small number of locally optimal linear controllers are able to solve global nonlinear control problems with unknown dynamics when combined with a reinforcement learner in this hierarchical framework. Our algorithm competes in terms of computational cost and solution quality with sophisticated control algorithms, and we illustrate this with solutions to benchmark problems.
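A minimal sketch of the two-level structure, assuming hypothetical locally linearised models: infinite-horizon LQR gains provide low-level control, and a coarse tabular learner treats "use controller k" as its action. The numbers, state regions and update rule are illustrative, not the paper's benchmark setup.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Infinite-horizon discrete LQR gain K for x' = A x + B u (standard construction)."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Hypothetical locally linearised models around two operating points (illustrative numbers).
models = [
    (np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.0], [0.1]])),
    (np.array([[1.0, 0.1], [-0.3, 0.9]]), np.array([[0.0], [0.2]])),
]
Q_cost, R_cost = np.diag([10.0, 1.0]), np.array([[0.1]])
controllers = [lqr_gain(A, B, Q_cost, R_cost) for A, B in models]

def low_level_control(k_idx, x):
    return -(controllers[k_idx] @ x)              # continuous control from the chosen LQR

def region(x):                                    # coarse state-space regions (assumption)
    return (int(x[0] > 0), int(x[1] > 0))

q_table, alpha, gamma = {}, 0.2, 0.95

def select_controller(x, epsilon=0.1):            # top-level learner picks which LQR to run
    s = region(x)
    if np.random.rand() < epsilon:
        return np.random.randint(len(controllers))
    return int(np.argmax([q_table.get((s, k), 0.0) for k in range(len(controllers))]))

def update(x, k, reward_value, x_next):           # Q-learning over "which controller" actions
    s, s2 = region(x), region(x_next)
    best_next = max(q_table.get((s2, j), 0.0) for j in range(len(controllers)))
    q_table[(s, k)] = q_table.get((s, k), 0.0) + alpha * (
        reward_value + gamma * best_next - q_table.get((s, k), 0.0))
```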
Research Square, Oct 4, 2022
In zooarchaeology, animal bones are normally identified using comparative macromorphological methods, which involve visual comparison of the bone with reference materials. However, recent work has questioned the reliability of these methods. Although previous studies applying macromorphological methods to identify sheep and ...
arXiv (Cornell University), Jun 21, 2022
Recent efforts in interpretable deep learning have shown that concept-based explanation methods achieve competitive accuracy with standard end-to-end models and enable reasoning about, and intervention on, extracted high-level visual concepts from images, e.g., identifying the wing color and beak length for bird-species classification. However, these concept bottleneck models rely on a necessary and sufficient set of predefined concepts, which is intractable for complex tasks such as video classification. For complex tasks, the labels and the relationships between visual elements span many frames, e.g., identifying a bird flying or catching prey, necessitating concepts with various levels of abstraction. To this end, we present CoDEx, an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification. CoDEx identifies a rich set of complex concept abstractions from natural language explanations of videos, obviating the need to predefine the amorphous set of concepts. To demonstrate our method's viability, we construct two new public datasets that combine existing complex video classification datasets with short, crowd-sourced natural language explanations for their labels. Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
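CoDEx's concept discovery is considerably more involved than this, but as a hedged stand-in the sketch below mines frequent unigrams and bigrams from invented crowd-style explanations to propose candidate concept phrases.

```python
import re
from collections import Counter

# Crowd-style explanations for video labels (invented examples, not the paper's datasets).
explanations = [
    "the bird spreads its wings and glides over the water",
    "a bird dives quickly and catches a fish near the surface",
    "the bird flaps its wings rapidly before landing on a branch",
]

STOPWORDS = {"the", "a", "an", "and", "its", "over", "on", "near", "before", "of"}

def candidate_concepts(texts, max_ngram=2):
    """Naive concept mining: frequent unigrams/bigrams after stopword removal."""
    counts = Counter()
    for text in texts:
        tokens = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOPWORDS]
        for n in range(1, max_ngram + 1):
            counts.update(" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts.most_common(8)

print(candidate_concepts(explanations))
```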
This work investigates the implicit discounting that humans use to compare rewards that may occur at different points in the future. We show that the way discounting is applied is not constant, but changes depending on context and, in particular, can be influenced by the apparent complexity of the environment. To investigate this, we conduct a series of neurophysics experiments in which participants perform discrete-time, sequential, two-alternative choice (2AC) tasks with non-episodic characteristics and varying reward structure. The varying rewards in our games cause participants' behaviour to change, giving a characteristic signal of their future reward discounting. Model-free, model-based and hybrid reinforcement learning models are fit to participant data, as well as a lighter-weight model which does not assume a learning mechanism. Results show that the complexity of the task affects the geometric discount factor, relating to the length of time that participants may wait for reward. This in turn indicates that participants may be optimising some hidden objective function that is not dependent on the discount factor.
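The discount-factor fitting can be illustrated in isolation: assuming synthetic immediate-versus-delayed choices and a softmax choice rule over geometrically discounted values, the sketch below recovers a discount factor by maximum likelihood. The trial data and temperature are invented; the paper's actual models are the reinforcement learning variants described above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic choices between a small immediate reward and a larger delayed one.
# Each trial: (immediate_reward, delayed_reward, delay_in_steps, chose_delayed).
trials = [(5.0, 10.0, 3, 1), (5.0, 10.0, 10, 0), (4.0, 9.0, 2, 1),
          (6.0, 8.0, 6, 0), (3.0, 10.0, 4, 1), (5.0, 7.0, 8, 0)]

def neg_log_likelihood(gamma, temperature=1.0):
    """Softmax choice model over geometrically discounted values: V_delayed = gamma**d * r."""
    nll = 0.0
    for r_now, r_later, delay, chose_delayed in trials:
        v_now, v_later = r_now, (gamma ** delay) * r_later
        p_delayed = 1.0 / (1.0 + np.exp(-(v_later - v_now) / temperature))
        p = p_delayed if chose_delayed else 1.0 - p_delayed
        nll -= np.log(max(p, 1e-12))
    return nll

result = minimize_scalar(neg_log_likelihood, bounds=(0.01, 0.999), method="bounded")
print(f"fitted geometric discount factor: {result.x:.3f}")
```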
2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), 2015
With modern smartphones in their pockets, over a billion people have continuous access to sensing, computation, and connectivity, making it possible to harness the power of the crowd to collect and share data about their surroundings and experiences on a massive scale. Crowdsensing/crowdsourcing is a novel data collection paradigm that leverages this vast mobile sensor network, making it possible to expand the scope of research endeavors and address civic issues without requiring the purchase of specialized sensors or the installation and maintenance of network infrastructure. This new data collection paradigm introduces several research challenges, including the need for usable and transparent privacy controls, incentive mechanisms to encourage data contributions, and automated and human-in-the-loop approaches that transform large-scale user-generated data sets into meaningful information that can help us to build an understanding of the physical, computational, and socio-technical environment. In response to the growing interest in this area of research, we are pleased to introduce the Second International Workshop on Crowd Assisted Sensing, Pervasive Systems and Communications (CASPer), held in conjunction with PerCom 2015. CASPer 2015 is the successor of the CROWDSENSING 2014 workshop at PerCom 2014. The objective of CASPer is to provide a forum for discussion, debate, and collaboration focused on emerging ideas, trends, and recent advances in crowdsensing/crowdsourcing. This year, the workshop includes a keynote, a panel, and 7 papers on a diverse collection of topics, including trustworthiness, incentives, and applications. This workshop would not be possible without the contributions of many colleagues. We would like to extend our thanks to the members of the Technical Program Committee, who shared their expertise and insight, providing thoughtful and timely paper reviews. We would also like to thank the authors who selected this workshop as the venue for sharing their ideas, and the attendees for their contributions to the discussion of important topics in crowdsensing/crowdsourcing.
ArXiv, 2021
The human ability to repurpose objects and processes is universal, but it is not a well-understood aspect of human intelligence. Repurposing arises in everyday situations such as finding substitutes for missing ingredients when cooking, or for unavailable tools when doing DIY. It also arises in critical, unprecedented situations needing crisis management. After natural disasters and during wartime, people must repurpose the materials and processes available to make shelter, distribute food, etc. Repurposing is equally important in professional life (e.g. clinicians often repurpose medicines off-license) and in addressing societal challenges (e.g. finding new roles for waste products). Despite the importance of repurposing, the topic has received little academic attention. By considering examples from a variety of domains such as everyday activities, drug repurposing and natural disasters, we identify some principal characteristics of the process and describe some technical challeng...