Ruairi O Reilly | Cork Institute of Technology
Papers by Ruairi O Reilly
This research paper investigates the running of object detection algorithms on low-end devices to detect individuals in images while leveraging cloud-based services to provide facial verification of the individuals detected. The performance of three computer vision object detection algorithms that utilize Convolutional Neural Networks (CNN) is compared: SSD MobileNet, Inception v2 and Tiny YOLO, along with three cloud-based facial verification services: Kairos, Amazon Web Service Rekognition (AWS) and Microsoft Azure Vision API. The results characterise the limitations of running CNN-based algorithms to solve vision tasks on low-end devices and highlight the limitations of using such devices and models in an application domain such as a home-security solution.
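As a hedged illustration of the kind of edge-plus-cloud pipeline evaluated, the sketch below runs a lightweight person detector locally with OpenCV and forwards detected crops to AWS Rekognition for face comparison. The model files, class index and thresholds are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch (not the paper's implementation): a low-end device runs a
# lightweight person detector locally and only forwards detected person crops
# to a cloud facial-verification service. Model paths, the person class index
# and the Rekognition usage are assumptions for illustration.
import cv2
import boto3

PERSON_CLASS_ID = 15          # assumed class index for "person" in the model used
CONF_THRESHOLD = 0.5

net = cv2.dnn.readNetFromTensorflow("ssd_mobilenet.pb", "ssd_mobilenet.pbtxt")
rekognition = boto3.client("rekognition")

def detect_and_verify(frame, reference_jpeg_bytes):
    """Detect people locally; verify each detected crop against a reference face in the cloud."""
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
    net.setInput(blob)
    detections = net.forward()                     # shape: [1, 1, N, 7]
    h, w = frame.shape[:2]
    results = []
    for det in detections[0, 0]:
        class_id, confidence = int(det[1]), float(det[2])
        if class_id != PERSON_CLASS_ID or confidence < CONF_THRESHOLD:
            continue
        x1, y1, x2, y2 = (det[3:7] * [w, h, w, h]).astype(int)
        crop = frame[max(y1, 0):y2, max(x1, 0):x2]
        ok, jpeg = cv2.imencode(".jpg", crop)
        if not ok:
            continue
        # Cloud-side verification; assumes both images contain a detectable face.
        response = rekognition.compare_faces(
            SourceImage={"Bytes": reference_jpeg_bytes},
            TargetImage={"Bytes": jpeg.tobytes()},
            SimilarityThreshold=80,
        )
        results.append((confidence, bool(response.get("FaceMatches"))))
    return results
```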
The quantity and quality of data available to an organisation plays an increasingly important role in its operation. This data can relate to a variety of subjects, from internal logistics to consumer sentiment towards a product in a specific market. Analysis of this data enables increasingly optimal behaviour, e.g. improved decision making. As more advanced, application-specific machine learning models are developed, the organisation with the largest share of data will gain an advantage over its competitors. It is postulated that smaller entities with minority shares in data within a domain possess only a fragmented view of a market. This fragmented view puts the smaller entity at a disadvantage and enables larger entities to reap unfair competitive advantages. This unequal dynamic should be rectified. To that end, in this work, a model is proposed that enables the consensual sharing of data between multiple parties using blockchain.
Cervical cancer is a severe concern for women’s health. Every year in the Republic of Ireland, approximately 300 women are diagnosed with cervical cancer, for 30% of whom the diagnosis will prove fatal. It is the second most common cause of death due to cancer in women aged 25 to 39 years [14]. Recently there has been a series of controversies concerning the mishandling of results from cervical screening tests, delays in processing said tests and the recalling of individuals to retake tests [12]. The serious nature of the prognosis highlights the importance of, and need for, the timely processing and analysis of data related to screenings. This work presents a comparative analysis of several classification techniques used for the automated analysis of known risk factors and screening tests with the aim of predicting cervical cancer outcomes via a biopsy result. These techniques encompass tree-based, cluster-based, linear and ensemble methods, and where applicable use parame...
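A minimal sketch of the sort of classifier comparison described above, assuming a CSV of risk factors with a Biopsy target column; the file name, column layout and specific estimators are assumptions, and the paper's parameter tuning is not reproduced.

```python
# Compares representative tree-based, linear, neighbour-style and ensemble
# classifiers on a cervical-cancer risk-factor dataset using cross-validated
# accuracy. Data layout and estimator choices are assumptions for illustration.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("risk_factors_cervical_cancer.csv")        # assumed file name
X = df.drop(columns=["Biopsy"]).fillna(df.median(numeric_only=True))
y = df["Biopsy"]                                             # assumed target column

models = {
    "tree": DecisionTreeClassifier(random_state=0),
    "linear": LogisticRegression(max_iter=1000),
    "neighbour": KNeighborsClassifier(),
    "ensemble": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    pipeline = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
    print(f"{name:9s} accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```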
2012 25th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), 2012
A system, constructed as a “proof of concept”, for providing an Irish tele-neurophysiology service is presented. It is based on a distributed architecture and capable of handling synchronous data streams from multiple Irish Neonatal Intensive Care Units. It provides Ireland with an infrastructure for overcoming factors affecting the diagnosis of neurological disorders in neonates. The system supports collaborative efforts by neurophysiologists, removes geographical constraints on expert knowledge and allows for the creation of national data stores while simultaneously supporting the trans-Atlantic processing of EEG. Technical obstacles affecting its successful implementation are outlined and solutions proposed. The implementation of such a system could significantly improve the quality of care provided to neonates.
2011 24th International Symposium on Computer-Based Medical Systems (CBMS), 2011
The processing of a patient in a medical facility encompasses data acquisition, data analysis, diagnosis and medical reporting. The result is a program of treatment for the patient. It is postulated that the latter two stages can be facilitated by allowing acquisition locations to avail of data interpretation by multiple off-site analysts who interactively annotate patient data in a
Seizures are both the most common neurological emergency afflicting neonates and the most difficult to detect clinically. Currently, the monitoring of a multi-channel electroencephalogram (EEG) is the gold standard for seizure detection. The accurate analysis of this physiological data requires a neurophysiologist with expertise in neonatal EEG. The provisioning of this expertise on a continuous basis can be challenging for medical facilities. In this paper, we describe a cloud-based platform capable of supporting clinicians through the creation of expert knowledge repositories. While the platform is considered general purpose, in this work it is applied specifically to neonatal EEG.
The analysis of physiological data plays a significant role in medical diagnostics. While state-of-the-art machine learning models demonstrate high levels of performance in classifying physiological data, clinicians are slow to adopt them. A contributing factor to the slow rate of adoption is the “black-box” nature of the underlying model, whereby the clinician is presented with a prediction result but the rationale for that result is omitted or not presented in an interpretable manner. This gives rise to the need for interpretable machine learning models such that clinicians can verify, and rationalise, the predictions made by a model. If a clinician understands why a model makes a prediction, they will be more inclined to accept a model's assistance in analysing physiological data. This paper discusses some of the latest findings in interpretable machine learning. Thereafter, based on these findings, three models are selected and implemented to analyse ECG data that are both accurate a...
Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. The multiplicity of analysis techniques introduces another layer of heterogeneity, that is heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is whether a generic solution for the monitoring and analysis of data can be identified that: accommodates temporal constraints; bridges the gap between expert knowledge and raw data; and enables data to be effectively interpreted and exploited in a transparent manner. The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques on the same raw data without the danger of incorporating hidden bias that may exist. To illustrate and to realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. In order to demonstrate these concepts, a complex real-world example involving the near real-time capturing and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
Computational Science and Its Applications – ICCSA 2020
This article describes an exploratory study that aimed to analyse the relationship between personality traits and emotions. In particular, it investigates to what extent the sub-traits of the Five Factor Model have an empirically quantifiable correlation with the Basic Emotions (Anger, Anxiety, Disgust, Fear, Joy, Sadness, Surprise). If links between these personality traits and the basic emotions can be found, then this would enable an emotional-state-to-personality-trait mapping. In this study, 38 participants answered a Big Five Aspects Scale (BFAS) questionnaire and then watched 12 emotionally provocative film clips, answering 12 short emotional Likert scales on their emotional experiences during each film clip. The results showed that (i) four of the seven Basic Emotions outright significantly correlated, while two emotions (Fear and Disgust) approached statistical significance, with at least one of the personality traits, and (ii) significant correlations between personality traits and basic emotions could only be identified at the sub-trait level, demonstrating the value in adopting a higher-resolution personality model. The results support the long-term goal of this research, which is the enabling of state-to-trait inferences. A method for analysing and visualising such a mapping, which differentiates mappings based on the direction and magnitude of the effect size, was also developed. The study contributes a blueprint towards utilising Affective Computing methodology to automatically map these phenomena.
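For readers interested in how such a trait-emotion correlation analysis might be carried out, the following is a minimal sketch assuming one row per participant with BFAS aspect scores and mean per-emotion ratings; the file name and column layout are assumptions rather than the study's actual analysis script.

```python
# Correlate every BFAS sub-trait with every basic emotion and report the
# pairs reaching significance. Data layout is an assumption for illustration.
import pandas as pd
from scipy import stats

SUB_TRAITS = ["Withdrawal", "Volatility", "Compassion", "Politeness",
              "Industriousness", "Orderliness", "Enthusiasm", "Assertiveness",
              "Intellect", "Openness"]                         # BFAS aspect names
EMOTIONS = ["Anger", "Anxiety", "Disgust", "Fear", "Joy", "Sadness", "Surprise"]

df = pd.read_csv("participants.csv")                           # assumed file name

rows = []
for trait in SUB_TRAITS:
    for emotion in EMOTIONS:
        r, p = stats.pearsonr(df[trait], df[emotion])
        rows.append({"trait": trait, "emotion": emotion, "r": r, "p": p})

results = pd.DataFrame(rows).sort_values("p")
print(results[results["p"] < 0.05])    # trait-emotion pairs reaching significance
```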
Beats-Per-Minute (BPM) is a microservice-based platform that provides a monitoring solution for the continuous acquisition, analysis and visualisation of health-related data. BPM combines Commercial Off-The-Shelf (COTS) Activity Trackers and a scalable cloud-based infrastructure. This paper demonstrates the efficacy, reliability and integrity of BPM when utilised as a monitoring solution for health conditions, such as Cardiovascular Disease. The results are indicative of the suitability of a microservice-based architecture for such a platform.
Academic performance across Computer Science (CS) courses in the Republic of Ireland is underwhelming. CS undergraduates are statistically the most likely cohort in the country not to progress past year one of their studies. Insufficient motivation to pursue CS studies has been demonstrated to be a significant cause of poor CS academic performance. Goal-setting programs are an efficient, cost-effective and student-empowering way to boost motivation. Goal-setting is the formulation of a set of activities intended to motivate an individual towards a desired goal state. This paper provides an experimental design for assessing the effectiveness of a written goal-setting program on academic performance concerning individual differences. Participants are randomly assigned either to the written goal-setting program or an active control task via an online platform. The goal-setting program requires participants to articulate both a desired future life and a feared future life. The program als...
2020 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA)
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2011
The 12th IEEE International Conference on e-Health Networking, Applications and Services, 2010
Electrocardiography (ECG) is a form of physiological data used to record the electrical activity of the heart. Numerous researchers have proposed and developed methods to extract features from the ECG signal (for example, the R-R segment or P-R segment). These features can be used to analyse and classify various forms of heart arrhythmia. In this work, a method for ECG classification is presented that employs a generalised signal pre-processing technique and uses a Multi-Layer Perceptron network to accurately classify arrhythmia per the AAMI EC57 standard. The method is trained and evaluated using PhysioNet’s MIT-BIH dataset, and an average accuracy of 98.72% is achieved. The proposed methodology is comparable to state-of-the-art CNN models, both in terms of accuracy and efficiency.
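A minimal sketch of an MLP beat classifier in the spirit of the method described, assuming pre-segmented MIT-BIH beats and a simple amplitude normalisation as a stand-in for the paper's generalised pre-processing; the architecture, segment length and file names are assumptions.

```python
# A Multi-Layer Perceptron mapping fixed-length, normalised ECG beat segments
# to the five AAMI EC57 classes (N, S, V, F, Q). Configuration is illustrative,
# not the paper's exact setup.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

AAMI_CLASSES = ["N", "S", "V", "F", "Q"]

def normalise(beats):
    """Per-beat amplitude normalisation (stand-in for the generalised pre-processing)."""
    beats = beats - beats.mean(axis=1, keepdims=True)
    return beats / (np.abs(beats).max(axis=1, keepdims=True) + 1e-8)

# X: (n_beats, segment_length) segments extracted from MIT-BIH; y: integer AAMI labels.
X = np.load("mitbih_beats.npy")        # assumed pre-segmented beats
y = np.load("mitbih_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    normalise(X), y, test_size=0.2, stratify=y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=200, random_state=0)
mlp.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, mlp.predict(X_test)))
```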
The 28th Irish Conference on Artificial Intelligence and Cognitive Science, 2020
In the healthcare domain, Artificial Intelligence (AI) based systems are being increasingly adopted, with applications ranging from surgical robots to automated medical diagnostics. While a Machine Learning (ML) engineer might be interested in the parameters related to the performance and accuracy of these AI-based systems, it is postulated that a medical practitioner would be more concerned with the applicability and utility of these systems in the medical setting. However, medical practitioners are unlikely to have the prerequisite skills to enable reasonable interpretation of an AI-based system. This is a concern for two reasons. Firstly, it inhibits the adoption of systems capable of automating routine analysis work and prevents the associated productivity gains. Secondly, and perhaps more importantly, it reduces the scope of expertise available to assist in the validation, iteration and improvement of AI-based systems in providing healthcare solutions. Explainable Artificial Intelligence (XAI) is a domain focused on techniques and approaches that facilitate the understanding and interpretation of the operation of ML models. Research interest in the domain of XAI is becoming more widespread due to the increasing adoption of AI-based solutions and the associated regulatory requirements [1]. Providing an understanding of ML models is typically approached from a Computer Science (CS) perspective [2], with a limited research emphasis being placed on supporting alternate domains [3]. In this paper, a simple yet powerful solution for increasing the explainability of AI-based solutions to individuals from non-CS domains (such as medical practitioners) is presented. The proposed solution enables the explainability of ML models and the underlying workflows to be readily integrated into a standard ML workflow. Central to this solution are feature importance techniques that measure the impact of individual features on the outcomes of AI-based systems. It is envisaged that feature importance can enable a high-level understanding of an ML model and the workflow used to train the model. This could aid medical practitioners in comprehending AI-based systems and enhance their understanding of ML models' applicability and utility.
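As a hedged example of the kind of feature importance technique central to the proposed solution, the sketch below uses permutation importance on a stand-in dataset; the estimator and dataset are illustrative assumptions, not the paper's setup.

```python
# Permutation feature importance: shuffle each feature in turn and measure the
# drop in held-out accuracy; a large drop means the model relies on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, importance in ranked[:5]:
    print(f"{name:30s} {importance:.3f}")
```

Ranked importances of this kind can be presented to a practitioner as a plain-language summary of what the model attends to, without requiring any knowledge of the model's internals.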
The 28th Irish Conference on Artificial Intelligence and Cognitive Science, 2020
How does the relationship between personality traits and the basic emotions vary across the modalities of self-report and facial expression analysis? This article presents the results of an exploratory study that quantifies consistencies and differences in personality-emotion mappings across these two modalities. Twenty-four participants answered a personality questionnaire before watching twelve emotionally provocative videos. Participants self-reported their emotional reactions per video, while their facial expressions were recorded for automated emotional analysis. The results indicated that overall there was greater consistency than difference in personality-emotion mappings across the two modalities. The robustness of this relationship enables direct applications of emotional-state-to-personality-trait mapping in academic and industrial domains.
Proceedings of the 22nd Irish Machine Vision and Image Processing Conference, 2020
Despite state-of-the-art performance on object recognition and image classification problems, CNNs are considered to have two significant weaknesses: firstly, their inability to cater for changes in object orientation, position or lighting; secondly, their inability to deal with part-whole relationships between objects. Capsule Networks are an enhancement to CNNs intended to more closely model the viewpoint-invariance capability of human vision. The application of Capsule Networks to well-known datasets, such as MNIST and NORB, has achieved state-of-the-art performance, while application to other datasets has had mixed results. The application of Capsule Networks to domains such as medical imaging problems is of significant interest, as they have been shown to train accurately on some datasets with limited training data. The contribution of this research is to compare the performance of a Capsule Network to a highly accurate CNN specifically developed for classification of malaria-infected and uninfected cell images. It looks at how the accuracy of each model is affected by the volume of available training data, and at how robust each model is when classifying test images subjected to transformations such as rotation, shear and lighting change.
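A minimal sketch of the robustness comparison described, assuming two previously trained models saved to disk and fixed transform parameters; the model files, perturbation magnitudes and data files are assumptions for illustration.

```python
# Evaluate how two trained classifiers (a CNN and a Capsule Network) degrade
# when the same test images are rotated, sheared or brightened before prediction.
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import apply_affine_transform

def transform(images, kind):
    """Apply one perturbation to a batch of HxWxC float images in [0, 1]."""
    if kind == "rotate":
        return np.stack([apply_affine_transform(img, theta=30) for img in images])
    if kind == "shear":
        return np.stack([apply_affine_transform(img, shear=20) for img in images])
    if kind == "brightness":
        return np.clip(images * 1.5, 0.0, 1.0)
    return images

x_test = np.load("cells_x_test.npy")       # assumed pre-processed cell images
y_test = np.load("cells_y_test.npy")
models = {"cnn": tf.keras.models.load_model("cnn_malaria.h5"),
          "capsnet": tf.keras.models.load_model("capsnet_malaria.h5")}

for name, model in models.items():
    for kind in ["none", "rotate", "shear", "brightness"]:
        preds = np.argmax(model.predict(transform(x_test, kind)), axis=1)
        acc = (preds == y_test).mean()
        print(f"{name:8s} {kind:10s} accuracy: {acc:.3f}")
```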