Jamie A Ward - Academia.edu
Papers by Jamie A Ward
Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct
Part of an actor's job is being able to cold read: to take words directly from the page and to read them as if they were his or her own, often without the chance to read the lines beforehand. This is particularly difficult when two or more actors need to perform a dialogue cold. The need to hold a paper script in hand hinders the actor's ability to move freely. It also introduces a visual distraction between actors trying to engage with one another in a scene. This preliminary study uses Google Glass-displayed cue cards as an alternative to traditional scripts, and compares the two approaches through a series of two-person, cold-read performances. Each performance was judged by a panel of theatre experts. The study finds that Glass has the potential to aid performance by freeing actors to better engage with one another. However, it also finds that, by limiting the display to one line of script at a time, the Glass application used here makes it difficult for some actors to grasp the text. In a further study, when asked to later perform the text from memory, actors who had used Glass recalled only slightly fewer lines than when they had learned using paper.
Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct
This paper describes the initial stages of new work on recognising collaborative activities involving two or more people. In the experiment described, a physically demanding construction task is completed by a team of four volunteers. The task, to build a large video wall, requires communication, coordination, and physical collaboration between group members. Minimal outside assistance is provided to better reflect the ad-hoc and loosely structured nature of real-world construction tasks. On-body inertial measurement units (IMUs) record each subject's head and arm movements; a wearable eye-tracker records gaze and egocentric video; and audio is recorded from each person's head and dominant arm. A first look at the data reveals promising correlations between, for example, the movement patterns of two people carrying a heavy object. Also revealed are clues on how complementary information from different sensor types, such as sound and vision, might further aid collaboration recognition.
Augmented Humans Conference 2021, 2021
The relationship between audience and performers is crucial to what makes live events so special. The aim of this work is to develop a new approach to amplifying the link between audiences and performers. Specifically, we explore the use of wearable sensors in gathering real-time audience data to augment the visuals of a live dance performance. We used the J!NS MEME, smart glasses with integrated electrodes enabling eye movement analysis (e.g. blink detection) and inertial motion sensing of the head (e.g. nodding recognition). This data is streamed from the audience and visualised live on stage during a performance; we also collected heart rate and eye gaze from selected audience members. In this paper we present the recorded dataset, including accelerometer, electrooculography (EOG), and gyroscope data from 23 audience members.
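The abstract does not describe how blinks are detected from the J!NS MEME's EOG electrodes, but a blink typically appears as a short, high-amplitude spike in the vertical EOG channel. The following is only a minimal, illustrative sketch of such a detector; the sampling rate, threshold, and minimum-gap values are assumptions, not values from the paper.

```python
import numpy as np

def detect_blinks(v_eog, fs=100, threshold=150.0, min_gap_s=0.2):
    """Very simple blink detector for a vertical EOG channel (sketch only).

    v_eog     : 1-D array of vertical EOG samples (arbitrary ADC units)
    fs        : sampling rate in Hz (assumed value)
    threshold : spike height above the local baseline (assumed value)
    min_gap_s : minimum separation between two blinks, in seconds
    """
    # Remove slow drift with a 1-second moving-average baseline.
    win = int(fs)
    baseline = np.convolve(v_eog, np.ones(win) / win, mode="same")
    spikes = (np.asarray(v_eog) - baseline) > threshold

    # Collapse runs of supra-threshold samples into single blink events.
    blink_times = []
    last = -np.inf
    for i in np.flatnonzero(spikes):
        t = i / fs
        if t - last >= min_gap_s:
            blink_times.append(t)
        last = t
    return blink_times

# Example on synthetic data: 10 s of noise with two injected "blinks".
sig = np.random.default_rng(0).normal(0, 10, size=1000)
sig[300:310] += 400
sig[700:710] += 400
print(detect_blinks(sig))   # -> roughly [3.0, 7.0]
```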
This paper explores the use of wearable eye-tracking to detect physical activities and location information during assembly and construction tasks involving small groups of up to four people. Large physical activities, like carrying heavy items and walking, are analysed alongside more precise, hand-tool activities, like using a drill or a screwdriver. In a first analysis, gaze-invariant features from the eye-tracker are classified (using Naive Bayes) alongside features obtained from wrist-worn accelerometers and microphones. An evaluation is presented using data from an 8-person dataset containing over 600 physical activity events, performed under real-world (noisy) conditions. Despite the challenges of working with complex, and sometimes unreliable, data we show that event-based precision and recall of 0.66 and 0.81 respectively can be achieved by combining all three sensing modalities (using experiment-independent training and temporal smoothing). In a further analysis, we apply ...
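The abstract names Naive Bayes as the classifier and three sensing modalities, but not the exact features or the fusion scheme. The sketch below simply concatenates per-window feature vectors from the eye-tracker, accelerometers, and microphones (early fusion) before fitting scikit-learn's GaussianNB; all feature names, dimensions, and the dummy data are hypothetical.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical per-window features, one row per analysis window:
#   eye_feat   - e.g. blink rate, fixation count, saccade rate   (n, 3)
#   accel_feat - e.g. mean/variance of each wrist-accel axis     (n, 6)
#   audio_feat - e.g. log-energy, spectral centroid, ZCR, flux   (n, 4)
rng = np.random.default_rng(0)
n_windows = 200
eye_feat = rng.normal(size=(n_windows, 3))
accel_feat = rng.normal(size=(n_windows, 6))
audio_feat = rng.normal(size=(n_windows, 4))
labels = rng.integers(0, 3, size=n_windows)   # e.g. carry / drill / walk

# Early fusion: concatenate all three modalities per window, then classify.
X = np.hstack([eye_feat, accel_feat, audio_feat])
clf = GaussianNB().fit(X, labels)
print(clf.predict(X[:5]))
```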
Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, 2019
In this demo, we present a smart eyewear toolchain consisting of smart glasses prototypes and a software platform for cognitive and social interaction assessments in the wild, with several application cases and a demonstration of real-time activity recognition. The platform is designed to work with the Jins MEME, smart EOG-enabled glasses. The user software is capable of data logging, posture tracking, and recognition of several activities, such as talking, reading, and blinking. During the demonstration we will walk through several applications and studies that the platform has been used for.
2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 2017
This paper presents a method of using wearable accelerometers and microphones to detect instances of ad-hoc physical collaborations between members of a group. Four people are instructed to construct a large video wall and must cooperate to complete the task. The task is loosely structured with minimal outside assistance to better reflect the ad-hoc nature of many real-world construction scenarios. Audio data, recorded from chest-worn microphones, is used to reveal information on collocation, i.e. whether or not participants are near one another. Movement data, recorded using 3-axis accelerometers worn on each person's head and wrists, is used to provide information on correlated movements, such as when participants help one another to lift a heavy object. Collocation and correlated movement information is then combined to determine who is working together at any given time. The work shows how data from commonly available sensors can be combined across multiple people using a simple, low-power algorithm to detect a range of physical collaborations.
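The paper combines audio-based collocation with accelerometer-based movement correlation, but the abstract does not give the exact algorithm. The following is a rough sketch of that idea under assumed sampling rates and thresholds: two people are flagged as collaborating in a window when both their chest-microphone envelopes and their wrist-acceleration magnitudes are correlated.

```python
import numpy as np

def windowed_corr(a, b, fs, win_s=5.0):
    """Pearson correlation of two equally sampled signals in fixed windows."""
    win = int(win_s * fs)
    n = min(len(a), len(b)) // win
    return np.array([np.corrcoef(a[k * win:(k + 1) * win],
                                 b[k * win:(k + 1) * win])[0, 1]
                     for k in range(n)])

def collaborating(audio_i, audio_j, acc_i, acc_j,
                  fs_audio=8000, fs_acc=50, rho_audio=0.5, rho_acc=0.4):
    """Flag windows in which two people are likely collaborating (sketch).

    Collocation : their chest-microphone envelopes are similar, suggesting
                  they share the same sound field.
    Co-movement : their wrist-acceleration magnitudes are correlated, e.g.
                  when lifting an object together.
    Sampling rates and the rho_* thresholds are assumptions, not values
    from the paper.
    """
    colloc = windowed_corr(np.abs(audio_i), np.abs(audio_j),
                           fs_audio) > rho_audio
    comove = windowed_corr(np.linalg.norm(acc_i, axis=1),
                           np.linalg.norm(acc_j, axis=1), fs_acc) > rho_acc
    n = min(len(colloc), len(comove))
    return colloc[:n] & comove[:n]
```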
Abstract. Wearable computers promise the ability to access information and computing resources directly from miniature devices embedded in our clothing. The problem lies in how to access the most relevant information without disrupting whatever task it is we are doing. Most existing interfaces, such as keyboards and touch pads, require direct interaction. This is both a physical and cognitive distraction. The problem is particularly acute for the mobile maintenance worker who must access information, such as on-line manuals or schematics, quickly and with minimal distraction. One solution is a wearable computer that monitors the user's 'context' - information such as activity, location and environment. Being 'context aware', the wearable would be better placed to offer relevant information to the user as and when it is needed. In this work we focus on recognising one of the most important parts of context: user activity. The contributions of the thesis are twofold. First, we present...
PLOS ONE, 2021
When people interact, they fall into synchrony. This synchrony has been demonstrated in a range of contexts, from walking or playing music together to holding a conversation, and has been linked to prosocial outcomes such as development of rapport and efficiency of cooperation. While the basis of synchrony remains unclear, several studies have found synchrony to increase when an interaction is made challenging, potentially providing a means of facilitating interaction. Here we focus on head movement during free conversation. As verbal information is obscured when conversing over background noise, we investigate whether synchrony is greater in high vs. low levels of noise, as well as addressing the effect of background noise complexity. Participants held a series of conversations with unfamiliar interlocutors while seated in a lab, and the background noise level changed every 15-30 s between 54, 60, 66, 72, and 78 dB. We report measures of head movement synchrony recorded via high-reso...
Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, 2019
Computing devices worn on the human body have a long history in academic and industrial research, most importantly in wearable computing, mobile eye tracking, and mobile mixed and augmented reality. As humans receive most of their sensory input via the head, it is a very interesting body location for simultaneous sensing and interaction as well as cognitive assistance. Eyewear computing devices have recently emerged as commercial products and can provide a research platform for a range of fields, including human-computer interaction, ubiquitous computing, pervasive sensing, psychology and social sciences. The proposed workshop will bring together researchers from a wide range of disciplines, such as mobile and ubiquitous computing, eye tracking, optics, computer vision, human vision and perception, usability, as well as systems research. This year it will also bring in researchers from psychology, with a focus on the social and interpersonal aspects of eyewear technology. The workshop is a continuation from 2016/2018 and will focus on discussing application scenarios as well as on eyewear sensing and supporting social interactions.
Proceedings of the 2018 ACM International Symposium on Wearable Computers, 2018
This paper introduces the idea of using wearable, multimodal body and brain sensing, in a theatrical setting, for neuroscientific research. Wearable motion capture suits are used to track the body movements of two actors while they enact a sequence of scenes together. One actor additionally wears a functional near-infrared spectroscopy (fNIRS)-based headgear to record the activation patterns of his prefrontal cortex. Repetitions in the movement data are then used to automatically segment the fNIRS data for further analysis. This exploration reveals that the semi-structured and repeatable nature of theatre can provide a useful laboratory for neuroscience, and that wearable sensing is a promising method to achieve this. This is important because it points to a new way of researching the brain in a more natural, and social, environment than traditional lab-based methods.
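The abstract says repetitions in the motion-capture data are used to segment the fNIRS stream, without detailing the matching method. Below is a minimal sketch of one way this could be done, using normalised cross-correlation against a movement template; the template, threshold, and epoch length are assumptions, not taken from the paper.

```python
import numpy as np

def find_repetitions(motion, template, fs, threshold=0.8):
    """Find repetitions of a movement template in a 1-D motion signal.

    Normalised cross-correlation is a simple stand-in for whatever matching
    the authors used; `threshold` is an assumed value.  Returns onset times
    (in seconds) of detected repetitions.
    """
    t = (template - template.mean()) / (template.std() + 1e-9)
    m = len(t)
    onsets, i = [], 0
    while i + m <= len(motion):
        w = motion[i:i + m]
        w = (w - w.mean()) / (w.std() + 1e-9)
        if float(np.dot(w, t)) / m > threshold:
            onsets.append(i / fs)
            i += m          # skip past the matched repetition
        else:
            i += 1
    return onsets

def segment_fnirs(fnirs, fs_fnirs, onsets, seg_s=10.0):
    """Cut fixed-length fNIRS epochs starting at each detected repetition."""
    n = int(seg_s * fs_fnirs)
    return [fnirs[int(t * fs_fnirs): int(t * fs_fnirs) + n] for t in onsets]
```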
Proceedings of the 2018 ACM International Symposium on Wearable Computers, 2018
We introduce a method of using wrist-worn accelerometers to measure non-verbal social coordination within a group that includes autistic children. Our goal was to record and chart the children's social engagement, measured using interpersonal movement synchrony, as they took part in a theatrical workshop that was specifically designed to enhance their social skills. Interpersonal synchrony, an important factor of social engagement that is known to be impaired in autism, is calculated using a cross-wavelet similarity comparison between participants' movement data. We evaluate the feasibility of the approach over three live performances, each lasting two hours, using six actors and a total of ten autistic children. We show that by visualising each child's engagement over the course of a performance, it is possible to highlight subtle moments of social coordination that might otherwise be lost when reviewing video footage alone. This is important because it points the way to a new method for people who work with autistic children to monitor the development of those in their care, and to adapt their therapeutic activities accordingly.
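The synchrony measure here is a cross-wavelet similarity between two participants' wrist-acceleration signals; the abstract does not give the implementation. The sketch below uses a hand-rolled complex Morlet continuous wavelet transform and averages the cross-wavelet magnitude over a band of frequencies; the frequency band, sampling rate, and wavelet width are assumptions, and this is not the authors' code.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet (sketch).

    Assumes the signal is longer than the longest wavelet kernel.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for k, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                      # envelope width (s)
        t = np.arange(-4 * s, 4 * s, 1 / fs)
        wav = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * s ** 2))
        wav /= np.sqrt(np.sum(np.abs(wav) ** 2))      # unit energy
        out[k] = np.convolve(x, np.conj(wav)[::-1], mode="same")
    return out

def synchrony(acc_a, acc_b, fs=50, freqs=np.linspace(0.5, 5.0, 10)):
    """Cross-wavelet synchrony between two wrist-acceleration magnitudes.

    A simplified stand-in for the paper's cross-wavelet similarity: the
    product of one spectrum with the conjugate of the other, averaged over
    frequencies, gives a time series of shared oscillatory power.
    """
    n = min(len(acc_a), len(acc_b))
    Wa = morlet_cwt(acc_a[:n], fs, freqs)
    Wb = morlet_cwt(acc_b[:n], fs, freqs)
    return np.abs(Wa * np.conj(Wb)).mean(axis=0)
```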
Autism, 2020
Communication with others relies on coordinated exchanges of social signals, such as eye gaze and facial displays. However, this can only happen when partners are able to see each other. Although previous studies report that autistic individuals have difficulties in planning eye gaze and making facial displays during conversation, evidence from real-life dyadic tasks is scarce and mixed. Across two studies, here we investigate how eye gaze and facial displays of typical and high-functioning autistic individuals are modulated by the belief in being seen and the potential to show true gaze direction. Participants were recorded with an eye-tracking and video-camera system while they completed a structured Q&A task with a confederate under three social contexts: pre-recorded video, video call, and face-to-face. Typical participants gazed less at the confederate and produced more facial displays when they were being watched and when they were speaking. Contrary to our hypotheses, eye gaze and...
IEEE Pervasive Computing, 2020
The International Symposium on Wearable Computers (ISWC) has been the leading research venue for wearable technology research since 1997. In 2019, the 23rd ISWC was held in London, U.K., from September 9th to 13th. Following on from the last eight years of successful collaboration, ISWC was co-located with the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp). ISWC by the numbers: the program committee reviewed 125 complete submissions. The papers had authors from 25 countries, with a balanced representation from the Americas, Asia/Oceania, and Europe. The International Symposium on Wearable Computers (ISWC) accepts papers in three
Conversation between two people involves subtle non-verbal coordination, but the parameters and timing of this coordination remain unclear, which limits our models of social coordination mechanisms. We implemented high-resolution motion capture of human head motion during structured conversations. Using pre-registered analyses, we quantify cross-participant wavelet coherence of head motion as a measure of non-verbal coordination, and report two novel results. First, head pitch (nodding) at 2.6-6.5 Hz shows below-chance coherence between people. This is driven by fast-nodding behaviour from the person listening, and is a newly defined non-verbal behaviour which may act as an important social signal. Second, head pitch movements at 0.2-1.1 Hz show above-chance coherence with a constant lag of around 600 ms between a leader and follower. This is consistent with reactive (rather than predictive) models of mimicry behaviour. These results provide a step towards the quantification of rea...
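The roughly 600 ms leader-follower lag is reported from a wavelet-coherence phase analysis. As a simpler illustration of how such a lag could be estimated from two head-pitch signals, the sketch below scans a lagged Pearson correlation and returns the best-scoring delay; the sampling rate and lag range are assumed parameters, and this is not the paper's actual analysis.

```python
import numpy as np

def estimate_lag(pitch_leader, pitch_follower, fs, max_lag_s=1.5):
    """Estimate how far the follower's head pitch lags the leader's.

    A lagged Pearson correlation scan stands in for the wavelet-coherence
    phase analysis used in the paper; `fs` and `max_lag_s` are assumed
    parameters.  A positive return value (in seconds) means the follower
    moves after the leader.
    """
    a = np.asarray(pitch_leader, float) - np.mean(pitch_leader)
    b = np.asarray(pitch_follower, float) - np.mean(pitch_follower)
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    scores = []
    for lag in lags:
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]     # compare a[t] with b[t+lag]
        else:
            x, y = a[-lag:], b[:lag]
        n = min(len(x), len(y))
        scores.append(np.corrcoef(x[:n], y[:n])[0, 1])
    return lags[int(np.argmax(scores))] / fs
```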
2005 IEEE International Conference on Multimedia and Expo
We describe our initial efforts to learn high-level human behaviors from low-level gestures observed using on-body sensors. Such an activity discovery system could be used to automatically index captured journals of a person's life. In a medical context, an annotated journal could assist therapists in helping to describe and treat symptoms characteristic of behavioral syndromes such as autism. We review our current work on user-independent activity recognition from continuous data, where we identify "interesting" user gestures through a combination of acceleration and audio sensors placed on the user's wrists and elbows. We examine an algorithm that can take advantage of such a sensor framework to automatically discover and label recurring behaviors, and we suggest future work where correlations of these low-level gestures may indicate higher-level activities.
Lecture Notes in Computer Science, 2006
Evaluating the performance of a continuous activity recognition system can be a challenging problem. To date there is no widely accepted standard for dealing with this, and in general methods and measures are adapted from related fields such as speech and vision. Much of the problem stems from the often imprecise and ambiguous nature of the real-world events that an activity recognition system has to deal with. A recognised event might have variable duration, or be shifted in time from the corresponding real-world event. Equally, it might be broken up into smaller pieces, or joined together to form larger events. Most evaluation attempts tend to smooth over these issues, using "fuzzy" boundaries, or some other parameter-based error decision, so as to make possible the use of standard performance measures (such as insertions and deletions). However, we argue that reducing the various facets of an activity recognition system into limited error categories, which were originally intended for different problem domains, can be overly restrictive. In this paper we attempt to identify and characterise the errors typical of continuous activity recognition, and develop a method for quantifying them in an unambiguous manner. By way of an initial investigation, we apply the method to an example taken from previous work, and discuss the advantages that this provides over two of the most commonly used methods.
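The paper develops a full taxonomy of errors for continuous activity recognition, covering, for example, fragmented and merged events as well as plain insertions and deletions. The snippet below is a heavily simplified sketch of that idea, scoring a list of predicted events against ground truth by interval overlap; it is not the paper's method and omits timing errors such as overfill and underfill.

```python
def overlaps(a, b):
    """True if half-open intervals a=(start, end) and b=(start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def event_errors(truth, pred):
    """Very simplified event-level error breakdown (sketch, not the paper's
    full taxonomy).  `truth` and `pred` are lists of (start, end) tuples.

    deletion      : ground-truth event matched by no prediction
    fragmentation : ground-truth event split across several predictions
    insertion     : prediction matching no ground-truth event
    merge         : prediction spanning several ground-truth events
    """
    errors = {"deletion": 0, "fragmentation": 0, "insertion": 0, "merge": 0}
    for t in truth:
        hits = [p for p in pred if overlaps(t, p)]
        if not hits:
            errors["deletion"] += 1
        elif len(hits) > 1:
            errors["fragmentation"] += 1
    for p in pred:
        hits = [t for t in truth if overlaps(p, t)]
        if not hits:
            errors["insertion"] += 1
        elif len(hits) > 1:
            errors["merge"] += 1
    return errors

# Example: one fragmented event and one insertion.
print(event_errors(truth=[(0, 10), (20, 30)],
                   pred=[(0, 4), (6, 10), (40, 45), (20, 29)]))
```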
Lecture Notes in Computer Science, 2008
In this work we analyse the eye movements of people in transit in an everyday environment using a wearable electrooculographic (EOG) system. We compare three approaches for continuous recognition of reading activities: a string matching algorithm which exploits typical characteristics of reading signals, such as saccades and fixations; and two variants of Hidden Markov Models (HMMs), mixed Gaussian and discrete. The recognition algorithms are evaluated in an experiment performed with eight subjects reading freely chosen text without pictures while sitting at a desk, standing, walking indoors and outdoors, and riding a tram. A total dataset of roughly 6 hours was collected, with reading activity accounting for about half of the time. We were able to detect reading activities over all subjects with a top recognition rate of 80.2% (71.0% recall, 11.6% false positives) using string matching. We show that EOG is a potentially robust technique for reading recognition across a number of typical daily situations.
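The string matching approach exploits the characteristic EOG signature of reading: runs of small rightward saccades followed by a large leftward saccade at each line break. The sketch below encodes a horizontal EOG signal into such a symbol string and searches for that pattern with a regular expression; the amplitude thresholds and the pattern itself are illustrative assumptions, not the algorithm from the paper.

```python
import re
import numpy as np

def saccade_string(h_eog, small=50, large=200):
    """Encode a horizontal EOG signal as a string of saccade symbols (sketch).

    'r' = small rightward saccade, 'L' = large leftward saccade (line break),
    '.' = everything else.  Sample-to-sample differences stand in for proper
    saccade detection, and the amplitude thresholds are illustrative only.
    """
    symbols = []
    for v in np.diff(h_eog):
        if small < v < large:
            symbols.append("r")
        elif v < -large:
            symbols.append("L")
        else:
            symbols.append(".")
    return "".join(symbols)

def looks_like_reading(symbols, min_forward=4):
    """Reading heuristic: a run of small rightward saccades, then a line break."""
    pattern = re.compile(r"(?:r\.{0,3}){%d,}L" % min_forward)
    return bool(pattern.search(symbols))
```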
Lecture Notes in Computer Science, 2004
Most gesture recognition systems analyze gestures intended for communication (e.g. sign language) or for command (e.g. navigation in a virtual world). We attempt instead to recognize gestures made in the course of performing everyday work activities. Specifically, we examine activities in a wood shop, both in isolation and in the context of a simulated assembly task. We apply linear discriminant analysis (LDA) and hidden Markov model (HMM) techniques to features derived from body-worn accelerometers and microphones. The resulting system can successfully segment and identify most shop activities with zero false positives and 83.5% accuracy.
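The abstract states that LDA and HMMs are applied to features from body-worn accelerometers and microphones. The sketch below shows a plausible per-window feature extraction and an LDA fit using scikit-learn, with dummy data standing in for the real recordings; the feature choices are assumptions, and the HMM stage used in the paper is omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def frame_features(acc, audio, fs_acc=100, fs_audio=8000, win_s=1.0):
    """Per-window features from a 3-axis accelerometer and a microphone.

    The feature choices (mean, variance, log-energy, zero-crossing rate)
    are illustrative, not the exact features used in the paper.
    """
    n = int(min(len(acc) / fs_acc, len(audio) / fs_audio) / win_s)
    feats = []
    for k in range(n):
        a = acc[int(k * win_s * fs_acc): int((k + 1) * win_s * fs_acc)]
        s = audio[int(k * win_s * fs_audio): int((k + 1) * win_s * fs_audio)]
        feats.append(np.concatenate([
            a.mean(axis=0), a.var(axis=0),                  # accelerometer
            [np.log(np.sum(s ** 2) + 1e-9),                 # audio log-energy
             np.mean(np.abs(np.diff(np.sign(s))) > 0)],     # zero-crossing rate
        ]))
    return np.array(feats)

# Dummy 10-second recordings stand in for the real wood-shop data.
rng = np.random.default_rng(1)
acc = rng.normal(size=(100 * 10, 3))
audio = rng.normal(size=(8000 * 10,))
X = frame_features(acc, audio)                  # -> (10, 8) feature matrix
labels = np.tile([0, 1], 5)                     # dummy activity labels

# LDA projects the fused features onto class-discriminative axes; the HMM
# stage that follows in the paper is omitted from this sketch.
lda = LinearDiscriminantAnalysis().fit(X, labels)
print(lda.predict(X[:3]))
```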
Proceedings of the 2005 joint conference on Smart objects and ambient intelligence: innovative context-aware services: usages and technologies, 2005
We perform continuous activity recognition using only two wrist-worn sensors: a 3-axis accelerometer and a microphone. We build on the intuitive notion that two very different sensors are unlikely to agree in classification of a false activity. By comparing imperfect, sliding-window classifications from each of these sensors, we are able to discern activities of interest from null or uninteresting activities. Where one sensor alone is unable to perform such partitioning, using this comparison we are able to report good overall system performance of up to 70% accuracy. In presenting these results, we attempt to give a more in-depth visualization of the errors than can be gathered from confusion matrices alone.
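The core idea is a simple fusion rule: a window keeps an activity label only when the accelerometer-based and microphone-based classifiers agree; otherwise it is assigned to the null class. Below is a minimal sketch of that comparison step with made-up labels; it is not the exact partitioning logic from the paper.

```python
NULL = "null"

def agreement_fusion(pred_acc, pred_mic):
    """Keep a window's label only when both sensor classifiers agree (sketch).

    pred_acc, pred_mic : per-window label sequences from the accelerometer
    and microphone classifiers.  Disagreement maps to the null class,
    reflecting the intuition that two very different sensors rarely agree
    on a false activity.
    """
    return [a if a == m else NULL for a, m in zip(pred_acc, pred_mic)]

# Example with hypothetical labels from two imperfect classifiers.
acc_labels = ["saw", "saw", "drill", "null", "drill"]
mic_labels = ["saw", "drill", "drill", "drill", "drill"]
print(agreement_fusion(acc_labels, mic_labels))
# -> ['saw', 'null', 'drill', 'null', 'drill']
```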