Shujie Deng | King's College London
Papers by Shujie Deng
European Journal of Echocardiography, 2020
Background: 3D printing is used for surgical planning of complex congenital heart disease (CHD) because it provides an intuitive 3D representation of the image data. However, the 3D print is static, and it can be costly and time-consuming to create. Virtual Reality (VR) is a cheaper alternative that can visualise volumetric images in 3D directly from the scanner, both statically (CT and MR) and dynamically (cardiac ultrasound). However, VR visualisation is not as tangible as a 3D print, because it lacks the haptic feedback that would make the interactions feel more natural. Purpose: To evaluate whether adding haptic feedback (vibration) to the visualisation of volume image data in VR improves measurement accuracy and user experience. Method: We evaluated the effect of vibration haptic feedback in our VR system using a synthetic cylinder volume dataset. The cylinder was displayed in two conditions: (1) with no haptic feedback, and (2) with haptic feedback. Ten non-clinical participants volunteered in the evaluation and were blinded to the two test conditions. The participants were asked to measure the cylinder's diameter horizontally and vertically, and its length, in each test condition. The measurements were compared to the ground truth to assess accuracy. Each participant also completed a questionnaire comparing their experience of the two test conditions. Results: The results show a marginal improvement in measurement accuracy with haptic feedback compared to no haptics (see Figure a); however, this improvement was not statistically significant. Haptic feedback did improve the participants' confidence in their performance and increased the ease of use in VR; hence, they preferred the haptics condition to the no-haptics condition (see Figure b).
Moreover, although 70% of the participants reported relying on the visual cue more than on the haptic cue, 90% found that the haptic cue was helpful for deciding where to place the measurement point. Also, 88.9% of the participants felt more immersed in the VR scene with haptic feedback. Conclusion: Our evaluation suggests that although haptic feedback may only marginally improve measurement accuracy, participants nevertheless preferred it because it improved confidence in their performance, increased ease of use, and facilitated a more immersive user experience.
Ultrasound in Obstetrics & Gynecology, 2021
European Journal of Echocardiography, 2020
Background/Introduction: Virtual Reality (VR) has recently gained great interest for examining 3D images from congenital heart disease (CHD) patients. Currently, 3D printed models of the heart may be used for particularly complex cases; these have been found to be intuitive and to positively impact clinical decision-making. Although positively received, such printed models must be segmented from the image data, generally only CT/MR can be used, the prints are static, and the models do not allow for cropping/slicing or easy manipulation. Our VR system is designed to address these issues, as well as providing a simpler interface than standard software. Building such a VR system, one with intuitive interaction that is clinically useful, requires studying user acceptance and requirements. Purpose: To evaluate the usability of our VR system: can a prototype VR system be easily learned and used by clinicians unfamiliar with VR? Method: We tested a VR system that displays 3D echo images and enables the user to interact with them, for instance by translating, rotating and cropping. The system was tested on a transoesophageal echocardiogram from a patient with aortic valve disease. 13 clinicians evaluated the system: 5 imaging cardiologists, 5 physiologists, 2 surgeons and an interventionist, with clinical experience ranging from trainee to more than 5 years. None had used VR regularly in the past. After a brief training session, they were asked to place three anatomical landmarks and identify a particular cardiac view. They then completed a questionnaire on ease of learning and image manipulation. Results: Results are shown in the figure below. Learning to use the system was perceived as easy by all but one participant, who rated it as 'Somewhat difficult'. However, once trained, all users found the system easy to use.
Participants found the interaction, where objects in the scene are picked up using the controller and then track the controller's motion in a 1:1 way, particularly easy to learn and use. Conclusion: Our VR system was accepted by the vast majority of clinicians, both for ease of learning and ease of use. Intuitiveness and the ability to interact with images in a natural way were highlighted as most useful, suggesting that such a system could become accepted for routine clinical use in the future.
The International Journal of Cardiovascular Imaging
Extended reality (XR), which encompasses virtual, augmented and mixed reality, is an emerging medical imaging display platform that enables intuitive and immersive interaction in a three-dimensional space. This technology holds the potential to enhance understanding of complex spatial relationships when planning and guiding cardiac procedures in congenital and structural heart disease, moving beyond conventional 2D and 3D image displays. A systematic review of the literature demonstrates a rapid increase in publications describing adoption of this technology. At least 33 XR systems have been described, many demonstrating proof of concept, including some prospective studies, but with no specific mention of regulatory approval. Validation remains limited, and true clinical benefit is difficult to measure. This review describes and critically appraises the range of XR technologies and their applications for procedural planning and guidance in structural heart disease while discussing th...
Medical Image Analysis
Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placenta appearance, (ii) the restricted quality of US, resulting in highly variable reference annotations, and (iii) the limited field-of-view of US, prohibiting whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, particularly under limited training set conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This results in high-quality segmentation of larger structures such as the placenta, with reduced image artifacts, beyond the field-of-view of single probes.
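The Dice overlap used above to report segmentation quality, and the joint segmentation-plus-classification objective of the multi-task approach, can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's implementation: the loss weighting and the toy 4x4 masks are invented for the example.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def multitask_loss(seg_dice, cls_log_probs, cls_label, weight=0.5):
    """Combined objective: (1 - Dice) for segmentation plus a weighted
    cross-entropy term for the placental-location classification task.
    The 0.5 weight is an assumption for illustration."""
    seg_loss = 1.0 - seg_dice
    cls_loss = -cls_log_probs[cls_label]  # negative log-likelihood of true class
    return seg_loss + weight * cls_loss

# Toy masks: the two 4x4 grids share 3 foreground pixels.
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1                          # 4 pixels
target = np.zeros((4, 4)); target[1:3, 1:2] = 1; target[1, 2] = 1    # 3 pixels
print(round(dice_coefficient(pred, target), 3))  # → 0.857
```

With 3 overlapping pixels out of 4 and 3, the Dice score is 2·3/(4+3) ≈ 0.857, the same order as the anterior/posterior scores reported above.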
Journal of Imaging
This study aimed to evaluate the accuracy and reliability of a virtual reality (VR) system line measurement tool using phantom data across three cardiac imaging modalities: three-dimensional echocardiography (3DE), computed tomography (CT) and magnetic resonance imaging (MRI). The same phantoms were also measured using industry-standard image visualisation software packages. Two participants performed blinded measurements on volume-rendered images of standard phantoms, both in VR and on an industry-standard image visualisation platform. The intra- and interrater reliability of the VR measurement method was evaluated by intraclass correlation coefficient (ICC) and coefficient of variance (CV). Measurement accuracy was analysed using Bland–Altman analysis and mean absolute percentage error (MAPE). VR measurements showed good intra- and interobserver reliability (ICC ≥ 0.99, p < 0.05; CV < 10%) across all imaging modalities. MAPE for VR measurements compared to ground truth were 1.6%, 1.6%...
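Two of the metrics named above, MAPE against a known phantom dimension and CV for repeated measurements, are straightforward to compute. A minimal sketch follows; the 40 mm phantom size and the repeated readings are hypothetical values, not data from the study.

```python
import numpy as np

def mape(measurements, ground_truth):
    """Mean absolute percentage error against a known phantom dimension."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * np.mean(np.abs(m - ground_truth) / ground_truth)

def coefficient_of_variance(measurements):
    """CV (%): spread of repeated measurements relative to their mean."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

reps = [40.2, 39.8, 40.5, 39.9]   # hypothetical repeated measurements (mm)
print(mape(reps, 40.0))           # mean absolute error 0.25 mm on 40 mm → ~0.62%
print(coefficient_of_variance(reps))
```

Both values land well inside the thresholds the abstract reports (MAPE of a few percent, CV < 10%).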
Computer Graphics Forum, 2020
Pressed by a glass plate from above, an elastoplastic object filled with liquid undergoes plastic deformation, and the liquid inside breaks out due to the increasing pressure.
European Heart Journal - Cardiovascular Imaging, 2020
Funding Acknowledgements: Work supported by the NIHR i4i funded 3D Heart project [II-LA-0716-20001]. Background/Introduction: Cardiac measurements are clinically important and are invariably required in any clinical imaging software. The advent of Virtual Reality (VR) imaging systems is introducing intuitive and natural ways of visualising and interrogating echo images in a 3D environment. The 3D nature of the VR experience requires purpose-designed measurement tools, which may benefit from better depth perception and easier localisation of 3D landmarks. Purpose: Comparison of the accuracy of our VR 3D linear measurement system to commercial clinical imaging software, using both multi-plane reformatting (MPR) and volume rendered views. Method: Each virtual reality measurement was made by selecting two points in 3D, directly in the volume rendering. The participants could edit the measurements until satisfied with their accuracy. 5 expert clinicians carried out 26 measurements each - 6 me...
International Journal of Serious Games, 2014
The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, and we provide a brief discussion of why cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.
In 3D echocardiography (3D echo), the image orientation varies depending on the position and direction of the transducer during examination. As a result, when reviewing images the user must first identify anatomical landmarks to understand the image orientation, a potentially challenging and time-consuming task. We automated this initial step by training a deep residual neural network (ResNet) to predict the rotation required to reorient an image to the standard apical four-chamber view. Three data pre-processing strategies were explored: 2D, 2.5D and 3D. Three loss function strategies were investigated: classification of discrete integer angles, regression with mean absolute angle error loss, and regression with geodesic loss. We then integrated the model into a virtual reality application and aligned the reoriented 3D echo images with a standard anatomical heart model. The deep learning strategy with the highest accuracy, 2.5D classification of discrete integer angles, achieved a mean absolute angle error on the test set of 9.0°. This work demonstrates the potential of artificial intelligence to support visualisation and interaction in virtual reality.
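Of the three loss strategies mentioned, the geodesic loss measures the true angular distance between a predicted and a reference orientation on SO(3): θ = arccos((trace(R_predᵀ R_true) − 1)/2). A minimal sketch, with illustrative z-axis rotations standing in for predicted and ground-truth orientations:

```python
import numpy as np

def rotation_z(deg):
    """Rotation matrix about the z-axis, angle in degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def geodesic_angle(R_pred, R_true):
    """Geodesic distance on SO(3): the smallest angle (degrees) of the
    single rotation taking R_pred to R_true."""
    R = R_pred.T @ R_true
    # Clip guards against arccos domain errors from floating-point noise.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Two orientations 9 degrees apart, matching the reported 9.0° mean error.
print(round(geodesic_angle(rotation_z(30), rotation_z(39)), 1))  # → 9.0
```

Unlike a naive per-angle error, this distance is well defined for any pair of 3D orientations, which is why it is a common regression target for rotation prediction.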
ArXiv, 2021
We present PRETUS – a Plugin-based Real Time UltraSound software platform for live ultrasound image analysis and operator support. The software is lightweight; functionality is brought in via independent plug-ins that can be arranged in sequence. The software can capture the real-time stream of ultrasound images from virtually any ultrasound machine, apply computational methods and visualise the results on the fly. Plug-ins can run concurrently without blocking each other. They can be implemented in C++ and Python. A graphical user interface can be implemented for each plug-in and presented to the user in a compact way. The software is free and open source, and allows for rapid prototyping and testing of real-time ultrasound imaging methods in a manufacturer-agnostic fashion. The software is provided with input, output and processing plug-ins, as well as tutorials illustrating how to develop new plug-ins for PRETUS.
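The sequential plug-in arrangement described above can be sketched as a minimal pipeline in which each stage transforms a frame and hands it to the next. The class names, frame format and stages here are hypothetical illustrations, not the PRETUS API:

```python
from abc import ABC, abstractmethod

class Plugin(ABC):
    """Hypothetical plug-in interface: each stage receives a frame dict
    and returns a (possibly augmented) frame for the next stage."""
    @abstractmethod
    def process(self, frame: dict) -> dict: ...

class Normalise(Plugin):
    """Rescale pixel intensities to [0, 1] (assumes a non-flat frame)."""
    def process(self, frame):
        lo, hi = min(frame["pixels"]), max(frame["pixels"])
        frame["pixels"] = [(p - lo) / (hi - lo) for p in frame["pixels"]]
        return frame

class MeanBrightness(Plugin):
    """Attach a simple analysis result to the frame."""
    def process(self, frame):
        frame["mean"] = sum(frame["pixels"]) / len(frame["pixels"])
        return frame

def run_pipeline(plugins, frame):
    # Plug-ins run in the order they were arranged, as in the platform.
    for plugin in plugins:
        frame = plugin.process(frame)
    return frame

out = run_pipeline([Normalise(), MeanBrightness()], {"pixels": [10, 20, 30]})
print(out["mean"])  # → 0.5
```

A real real-time pipeline would add concurrency (each plug-in on its own thread, as PRETUS allows), but the composition pattern is the same.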
Three-dimensional (3D) medical images such as Computed Tomography (CT), Magnetic Resonance (MR), and 3D Ultrasound (US) are normally visualised on 2D displays, resulting in sub-optimal perception and manipulation of these images. To solve this problem, the use of Virtual Reality (VR) and Augmented Reality (AR) with 3D medical images has been proposed and is currently an active field of research, in areas such as surgical training [1] and planning [2], and in extending existing tools such as 3D Slicer into VR [3]. We are focusing on developing an intuitive application for imaging specialists to better communicate with non-imaging clinicians, for example surgeons, as well as patients and families. To this end, we recently proposed a framework for incorporating 3D medical images into VR by integrating the Visualization Toolkit (VTK – www.vtk.org) into Unity (unity3d.com) [4]. Here we present an initial clinical evaluation of a simple VR application to interrogate 3D medical images, and co...
Cardiac surgeons rely on diagnostic imaging for preoperative planning. Recently, developments have been made in improving 3D ultrasound (US) spatial compounding tailored for cardiac images. Compounded 3D ultrasound volumes can capture complex anatomical structures at a level similar to a CT scan; however, these images are difficult to display and visualise because of the increased amount of surrounding tissue captured, including excess noise at the volume boundaries. Traditional medical image visualisation software does not easily allow viewing 2D slices at arbitrary angles, and 3D rendering techniques do not adequately capture depth information without the use of advanced transfer functions or other depth-encoding techniques that must be tuned to each individual dataset. Previous studies have shown that the effective use of virtual reality (VR) can improve image visualisation and usability and reduce surgical errors in case planning. We demonstrate the novel use of a VR system f...
Journal of imaging, 2021
The intricate nature of congenital heart disease requires understanding of the complex, patient-specific, three-dimensional dynamic anatomy of the heart from imaging data such as three-dimensional echocardiography for successful outcomes of surgical and interventional procedures. Conventional clinical systems use flat screens, so the display remains two-dimensional, which undermines full understanding of the three-dimensional dynamic data. Additionally, controlling three-dimensional visualisation with two-dimensional tools is often difficult, so it is used only by imaging specialists. In this paper, we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping for visualisation such as volume rendering, multiplanar reformatting, flow visualisation and advanced interaction such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation and multius...
Multimodal interactions provide users with more natural ways to interact with virtual environments than traditional input methods. An emerging approach is gaze-modulated pointing, which enables users to conveniently select and manipulate virtual content through a combination of gaze and other hand control techniques/pointing devices, in this thesis, mid-air gestures. To establish a synergy between the two modalities and evaluate the affordance of this novel multimodal interaction technique, it is important to understand their behavioural patterns and relationship, as well as any possible perceptual conflicts and interactive ambiguities. More specifically, evidence shows that eye movements lead hand movements, but the question remains whether the leading relationship is similar when interacting using a pointing device. Moreover, as gaze-modulated pointing uses different sensors to track and detect user behaviours, its performance relies on users perc...
In recent years, digital storytelling has demonstrated powerful pedagogical functions by improving creativity, collaboration and intimacy among young children. Saturated with digital media technologies in their daily lives, the young generation demands natural interactive learning environments that offer multiple modalities of feedback and meaningful immersive learning experiences. A virtual puppetry-assisted storytelling system for young children, which utilises depth motion sensing and gesture control as the Human-Computer Interaction (HCI) method, has been shown to provide a natural interactive learning experience for a single player. In this paper, we design and develop a novel system that allows multiple players to narrate and, most importantly, to interact with other characters and interactive virtual items in the virtual environment. We conducted one user experiment with four young children for pedagogical evaluation and another user experiment with five postgradu...
European Heart Journal - Cardiovascular Imaging
Funding Acknowledgements: Type of funding sources: Other. Main funding source(s): NIHR i4i funded 3D Heart Project; Wellcome/EPSRC Centre for Medical Engineering [WT 203148/Z/16/Z], on behalf of the 3D Heart Project. Background/Introduction: In echocardiography (echo), image orientation is determined by the position and direction of the transducer during examination, unlike cardiovascular imaging modalities such as CT or MRI. As a result, when echo images are first shown, their display orientation has no external anatomical landmarks, so the user has to identify anatomical landmarks in the regions of interest to understand the orientation. Purpose: To display an anatomical model of a standard heart, automatically aligned to an acquired patient's 3D echo image, assisting image interpretation by quickly orienting the viewer. Methods: 47 echo datasets from 13 paediatric patients with hypoplastic left heart syndrome (HLHS) were annotated by manually indicating the cardiac axes in both ES and E...
Neurocomputing
Abstract: The rocketing number of photos shared on social media leads to increasing demand for photo editing. We focus on a specific scenario, the group dinner photo, and tackle two user-demanded problems: adding a person and replacing the tabletop. The target objects are determined according to the saliency detection results. We developed a novel application to solve these problems. With our system, non-professional users can accomplish semantic editing within a few seconds, including inserting a person and tidying up tabletops. Our system contributes to the state of the art by (1) efficiently selecting the salient area by its semantic meaning, and (2) accurately compositing the salient content with the target image based on contextual knowledge. The context refers to the key factors, including occlusion and artifacts, during composition. Feedback from users shows that the authenticity of the inserted person is satisfactory. A comparative study shows that, for the tabletop tidying task, our system produces pictures of comparable quality to those edited with professional editing software, but more efficiently.
European Heart Journal - Cardiovascular Imaging
Funding Acknowledgements: Type of funding sources: Other. Main funding source(s): NIHR i4i funded 3D Heart project; Wellcome/EPSRC Centre for Medical Engineering [WT 203148/Z/16/Z], on behalf of the 3D Heart Project. Background/Introduction: Virtual Reality (VR) for surgical and interventional planning in the treatment of Congenital Heart Disease (CHD) is an emerging field that has the potential to improve planning. Particularly in very complex cases, VR permits enhanced visualisation of, and more intuitive interaction with, volumetric images compared to traditional flat-screen visualisation tools. Blood flow is severely affected by CHD and, thus, visualisation of blood flow allows direct observation of the cardiac maladaptations for surgical planning. However, blood flow is fundamentally 3D information, and viewing and interacting with it using conventional 2D displays is suboptimal. Purpose: To demonstrate feasibility of blood flow visualisation in VR using pressure and velocity obtained from a compu...
2017 3rd IEEE International Conference on Cybernetics (CYBCONF)
Inputs with multimodal information provide more natural ways to interact with virtual 3D environments. An emerging technique that integrates gaze-modulated pointing with mid-air gesture control enables fast target acquisition and rich control expressions. The performance of this technique relies on eye tracking accuracy, which is not yet comparable with traditional pointing techniques (e.g., the mouse). This causes problems when fine-grained interactions are required, such as selecting in a dense virtual scene where proximity and occlusion are prone to occur. This paper proposes a coarse-to-fine solution to compensate for the degradation introduced by eye tracking inaccuracy, using a gaze cone to detect ambiguity and then a gaze probe for decluttering. It was tested in a comparative experiment involving 12 participants with 3240 runs. The results show that the proposed technique enhanced selection accuracy and user experience, though efficiency could still be improved. This study contributes a robust multimodal interface design supported by both eye tracking and mid-air gesture control.
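The gaze-cone ambiguity check described above can be illustrated with simple geometry: a selection is ambiguous when more than one candidate target lies inside a cone around the gaze ray. The cone half-angle and the target positions below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def targets_in_gaze_cone(origin, direction, targets, half_angle_deg):
    """Return indices of targets whose bearing from the gaze origin is
    within half_angle_deg of the gaze direction (i.e., inside the cone)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    cos_limit = np.cos(np.radians(half_angle_deg))
    hits = []
    for i, t in enumerate(targets):
        v = np.asarray(t, dtype=float) - origin
        cos_angle = np.dot(v, d) / np.linalg.norm(v)
        if cos_angle >= cos_limit:
            hits.append(i)
    return hits

origin = np.array([0.0, 0.0, 0.0])
gaze = [0.0, 0.0, 1.0]                      # looking straight ahead
targets = [[0.0, 0.1, 2.0],                 # ~2.9° off the gaze ray
           [0.05, 0.0, 2.0],                # ~1.4° off the gaze ray
           [1.5, 0.0, 2.0]]                 # ~37° off, clearly outside
hits = targets_in_gaze_cone(origin, gaze, targets, half_angle_deg=5.0)
print(hits, "ambiguous" if len(hits) > 1 else "unambiguous")  # → [0, 1] ambiguous
```

When the cone test returns more than one candidate, a finer mechanism (the paper's gaze probe) is needed to disambiguate the selection.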
European Journal of Echocardiography, 2020
Background: 3D printing is used for surgical planning of complex congenital heart disease (CHD) b... more Background: 3D printing is used for surgical planning of complex congenital heart disease (CHD) because it provides an intuitive 3D representation of the image data. However, the 3D print is static and it can be costly and time consuming to create. Virtual Reality (VR) is a cheaper alternative that is able to visualise volumetric images in 3D directly from the scanner, both statically (CT and MR) and dynamically (cardiac ultrasound). However, VR visualisation is not as tangible as a 3D print-this is because it lacks the haptic feedback which would make the interactions feel more natural. Purpose: Evaluate if adding haptic feedback (vibration) to the visualisation of volume image data in VR improves measurement accuracy and user experience. Method: We evaluated the effect of vibration haptic feedback in our VR system using a synthetic cylinder volume dataset. The cylinder was displayed in two conditions: (1) with no haptic feedback, and (2) with haptic feedback. Ten non-clinical participants volunteered in the evaluation. They were blinded to these two test conditions. The participants were asked to measure the cylinder's diameter horizontally and vertically, and its length, in each test condition. The measurement results were compared to the ground truth to assess the measurement accuracy. Each participant also completed a questionnaire comparing their experience of the two test conditions during the experiment. Results: The results show a marginal improvement of measurement accuracy with haptic feedback, compared to no haptics (see Figure a). However, this improvement was not statistically significant. The haptic feedback did improve the participants' confidence about their performance and increased the ease of use in VR, hence, they preferred the haptics condition to the no haptics condition (see Figure b). 
Moreover, although 70% of the participants reported relying on the visual cue more than on the haptic cue, 90% found that the haptic cue was helpful for deciding where to place the measurement point. Also, 88.9% of the participants felt more immersed in the VR scene with haptic feedback. Conclusion: Our evaluation suggests that although haptic feedback may only marginally improve measurement accuracy, participants nevertheless preferred it because it improved confidence in their performance, increased ease of use, and facilitated a more immersive user experience.
Ultrasound in Obstetrics & Gynecology, 2021
European Journal of Echocardiography, 2020
Background/Introduction Virtual Reality (VR) has recently gained great interest for examining 3D ... more Background/Introduction Virtual Reality (VR) has recently gained great interest for examining 3D images from congenital heart disease (CHD) patients. Currently, 3D printed models of the heart may be used for particularly complex cases. These have been found to be intuitive and to positively impact clinical decision-making. Although positively received, such printed models must be segmented from the image data, generally only CT/MR may be used, the prints are static, and models do not allow for cropping / slicing or easy manipulation. Our VR system is designed to address these issues, as well as providing a simple interface compared to standard software. Building such a VR system, one with intuitive interaction which is clinically useful, requires studying user acceptance and requirements. Purpose: We evaluate the usability of our VR system: can a prototype VR system be easily learned and used by clinicians unfamiliar with VR. Method: We tested a VR system which can display 3D echo images and enables the user to interact with them, for instance by translating, rotating and cropping. Our system is tested on a transoesophageal echocardiogram from a patient with aortic valve disease. 13 clinicians evaluated the system including 5 imaging cardiologists, 5 physiologists, 2 surgeons and an interventionist, with their clinical experience ranging from trainee to more than 5 years' of experience. None had used VR regularly in the past. After a brief training session, they were asked to place three anatomical landmarks and identify a particular cardiac view. They then completed a questionnaire on system ease of learning and image manipulation. Results: Results are shown in the figure below. Learning to use the system was perceived as easy for all but one participant, who rated it as 'Somewhat difficult'. However, once trained, all users found the system easy to use. 
Participants found the interaction, where objects in the scene are picked up using the controller and then track the controller's motion in a 1:1 way, to be particularly easy to learn and use. Conclusion: Our VR system was accepted by the vast majority of clinicians, both for ease of learning and use. Intuitiveness and the ability to interact with images in a natural way were highlighted as most useful-suggesting that such a system could become accepted for routine clinical use in the future.
The International Journal of Cardiovascular Imaging
Extended reality (XR), which encompasses virtual, augmented and mixed reality, is an emerging med... more Extended reality (XR), which encompasses virtual, augmented and mixed reality, is an emerging medical imaging display platform which enables intuitive and immersive interaction in a three-dimensional space. This technology holds the potential to enhance understanding of complex spatial relationships when planning and guiding cardiac procedures in congenital and structural heart disease moving beyond conventional 2D and 3D image displays. A systematic review of the literature demonstrates a rapid increase in publications describing adoption of this technology. At least 33 XR systems have been described, with many demonstrating proof of concept, but with no specific mention of regulatory approval including some prospective studies. Validation remains limited, and true clinical benefit difficult to measure. This review describes and critically appraises the range of XR technologies and its applications for procedural planning and guidance in structural heart disease while discussing th...
Medical Image Analysis
Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to the (i) hig... more Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to the (i) high diversity of placenta appearance, (ii) the restricted quality in US resulting in highly variable reference annotations, and (iii) the limited field-of-view of US prohibiting whole placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task in particular in limited training set conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra-and inter-observer variability. Lastly, our approach can deliver whole placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This results in high quality segmentation of larger structures such as the placenta in US with reduced image artifacts which are beyond the field-of-view of single probes.
Journal of Imaging
This study aimed to evaluate the accuracy and reliability of a virtual reality (VR) system line measurement tool using phantom data across three cardiac imaging modalities: three-dimensional echocardiography (3DE), computed tomography (CT) and magnetic resonance imaging (MRI). The same phantoms were also measured using industry-standard image visualisation software packages. Two participants performed blinded measurements on volume-rendered images of standard phantoms both in VR and on an industry-standard image visualisation platform. The intra- and interrater reliability of the VR measurement method was evaluated by intraclass correlation coefficient (ICC) and coefficient of variance (CV). Measurement accuracy was analysed using Bland–Altman and mean absolute percentage error (MAPE). VR measurements showed good intra- and interobserver reliability (ICC ≥ 0.99, p < 0.05; CV < 10%) across all imaging modalities. MAPE for VR measurements compared to ground truth were 1.6%, 1.6%...
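As a minimal sketch of the accuracy metric used above, MAPE averages each measurement's absolute deviation from the phantom ground truth as a percentage; the numbers below are hypothetical, not the study's data:

```python
import numpy as np

def mape(measured, truth):
    """Mean absolute percentage error against phantom ground truth."""
    measured, truth = np.asarray(measured, float), np.asarray(truth, float)
    return 100.0 * np.mean(np.abs(measured - truth) / truth)

# Hypothetical phantom diameters (mm): ground truth vs. VR calliper readings.
truth = [10.0, 20.0, 40.0]
vr = [10.2, 19.6, 40.4]
print(round(mape(vr, truth), 2))  # 1.67
```

A MAPE in the low single digits, as reported (1.6%), indicates sub-millimetre average error on typical cardiac dimensions.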
Computer Graphics Forum, 2020
Pressed by a glass plate from above, an elastoplastic object filled with liquid undergoes plastic deformation, and the liquid inside breaks out due to the increasing pressure.
European Heart Journal - Cardiovascular Imaging, 2020
Funding Acknowledgements Work supported by the NIHR i4i funded 3D Heart project [II-LA-0716-20001] Background/Introduction Cardiac measurements are clinically important and are invariably required in any clinical imaging software. The advent of Virtual Reality (VR) imaging systems is introducing intuitive and natural ways of visualising and interrogating echo images in a 3D environment. The 3D nature of the VR experience requires purpose-designed measurement tools, which may benefit from better depth perception and easier localisation of 3D landmarks. Purpose Comparison of the accuracy of our VR 3D linear measurement system to commercial clinical imaging software, using both multi-plane reformatting (MPR) and volume rendered views. Method Each virtual reality measurement was made by selecting two points in 3D, directly in the volume rendering. The participants could edit the measurements until satisfied with their accuracy. 5 expert clinicians carried out 26 measurements each - 6 me...
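The linear measurement described in the Method, defined by two points selected directly in the volume rendering, reduces to a Euclidean distance between the picked 3D coordinates; a minimal sketch with illustrative coordinates:

```python
import numpy as np

def vr_measure(p1, p2) -> float:
    """Length of a linear measurement defined by two 3D points picked in the volume."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))

# Two hypothetical landmark picks (e.g., in mm of patient space).
print(vr_measure([0.0, 0.0, 0.0], [3.0, 4.0, 0.0]))  # 5.0
```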
International Journal of Serious Games, 2014
The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games and we provide a brief discussion of why cognitive processes involved in learning and training are enhanced under immersive virtual environments. We initially outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have already combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding in multimodal serious game production as well as exploring possible areas for new applications.
In 3D echocardiography (3D echo), the image orientation varies depending on the position and direction of the transducer during examination. As a result, when reviewing images the user must initially identify anatomical landmarks to understand the image orientation – a potentially challenging and time-consuming task. We automated this initial step by training a deep residual neural network (ResNet) to predict the rotation required to reorient an image to the standard apical four-chamber view. Three data pre-processing strategies were explored: 2D, 2.5D and 3D. Three different loss function strategies were investigated: classification of discrete integer angles, regression with mean absolute angle error loss, and regression with geodesic loss. We then integrated the model into a virtual reality application and aligned the reoriented 3D echo images with a standard anatomical heart model. The deep learning strategy with the highest accuracy – 2.5D classification of discrete integer angles – achieved a mean absolute angle error on the test set of 9.0°. This work demonstrates the potential of artificial intelligence to support visualisation and interaction in virtual reality.
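The geodesic loss mentioned above penalises the angle of the residual rotation between predicted and target orientations. A minimal sketch of that distance on SO(3), assuming rotation-matrix outputs (the network itself is not reproduced here):

```python
import numpy as np

def geodesic_angle(r1: np.ndarray, r2: np.ndarray) -> float:
    """Geodesic distance (radians) between two 3x3 rotation matrices."""
    cos = (np.trace(r1.T @ r2) - 1.0) / 2.0
    # Clip guards against floating-point values slightly outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def rot_z(deg: float) -> np.ndarray:
    """Rotation about the z-axis, used here as a toy prediction/target pair."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

print(round(np.degrees(geodesic_angle(rot_z(0), rot_z(30))), 1))  # 30.0
```

Unlike a per-angle mean absolute error, this distance is invariant to the choice of Euler-angle parameterisation.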
ArXiv, 2021
We present PRETUS – a Plugin-based Real Time UltraSound software platform for live ultrasound image analysis and operator support. The software is lightweight; functionality is brought in via independent plug-ins that can be arranged in sequence. The software can capture the real-time stream of ultrasound images from virtually any ultrasound machine, apply computational methods and visualise the results on-the-fly. Plug-ins can run concurrently without blocking each other. They can be implemented in C++ and Python. A graphical user interface can be implemented for each plug-in, and presented to the user in a compact way. The software is free and open source, and allows for rapid prototyping and testing of real-time ultrasound imaging methods in a manufacturer-agnostic fashion. The software is provided with input, output and processing plug-ins, as well as with tutorials to illustrate how to develop new plug-ins for PRETUS.
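The plug-in sequencing idea can be illustrated with a toy pipeline over a stream of frames; the names and chaining API below are illustrative only and do not reflect the actual PRETUS interface (see the PRETUS tutorials for that):

```python
from typing import Callable, Iterable, List
import numpy as np

# A "plug-in" here is just a function mapping one ultrasound frame to another.
Plugin = Callable[[np.ndarray], np.ndarray]

def run_pipeline(frames: Iterable[np.ndarray], plugins: List[Plugin]):
    """Pass each incoming frame through the plug-ins in sequence."""
    for frame in frames:
        for plugin in plugins:
            frame = plugin(frame)
        yield frame

denoise = lambda f: f  # placeholder for, e.g., speckle filtering
normalise = lambda f: (f - f.min()) / (np.ptp(f) or 1.0)  # rescale to [0, 1]

stream = (np.random.rand(8, 8) for _ in range(3))  # stand-in for a live feed
out = list(run_pipeline(stream, [denoise, normalise]))
print(len(out))  # 3
```

In the real platform, plug-ins additionally run concurrently without blocking each other, which a simple generator chain like this does not capture.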
Three-dimensional (3D) medical images such as Computed Tomography (CT), Magnetic Resonance (MR), and 3D Ultrasound (US) are normally visualised on 2D displays, resulting in sub-optimal perception and manipulation of these images. To solve this problem, the use of Virtual Reality (VR) and Augmented Reality (AR) with 3D medical images has been proposed and is currently an active field of research in areas such as surgical training [1] and planning [2], and extending existing tools into VR, such as 3D Slicer [3]. We are focusing on developing an intuitive application for imaging specialists to better communicate with non-imaging clinicians, for example surgeons, as well as patients and families. To this end, we recently proposed a framework for incorporating 3D medical images into VR by integrating the Visualization Toolkit (VTK – www.vtk.org) into Unity (unity3d.com) [4]. Here we present an initial clinical evaluation of a simple VR application to interrogate 3D medical images, and co...
Cardiac surgeons rely on diagnostic imaging for preoperative planning. Recently, developments have been made on improving 3D ultrasound (US) spatial compounding tailored for cardiac images. Compounded 3D ultrasound volumes are able to capture complex anatomical structures at a level similar to a CT scan; however, these images are difficult to display and visualize due to an increased amount of surrounding tissue captured, including excess noise at the volume boundaries. Traditional medical image visualization software does not easily allow for viewing 2D slices at arbitrary angles, and 3D rendering techniques do not adequately capture depth information without the use of advanced transfer functions or other depth-encoding techniques that must be tuned to each individual data set. Previous studies have shown that the effective use of virtual reality (VR) can improve image visualization and usability and reduce surgical errors in case planning. We demonstrate the novel use of a VR system f...
Journal of imaging, 2021
The intricate nature of congenital heart disease requires understanding of the complex, patient-specific three-dimensional dynamic anatomy of the heart from imaging data, such as three-dimensional echocardiography, for successful outcomes from surgical and interventional procedures. Conventional clinical systems use flat screens, and therefore the display remains two-dimensional, which undermines the full understanding of the three-dimensional dynamic data. Additionally, the control of three-dimensional visualisation with two-dimensional tools is often difficult, so it is used only by imaging specialists. In this paper, we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping for visualisation such as volume rendering, multiplanar reformatting, flow visualisation and advanced interaction such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation and multius...
Multimodal interactions provide users with more natural ways to interact with virtual environments than using traditional input methods. An emerging approach is gaze modulated pointing, which enables users to perform virtual content selection and manipulation conveniently through the use of a combination of gaze and other hand control techniques/pointing devices – in this thesis, mid-air gestures. To establish a synergy between the two modalities and evaluate the affordance of this novel multimodal interaction technique, it is important to understand their behavioural patterns and relationship, as well as any possible perceptual conflicts and interactive ambiguities. More specifically, evidence shows that eye movements lead hand movements, but the question remains whether the leading relationship is similar when interacting using a pointing device. Moreover, as gaze modulated pointing uses different sensors to track and detect user behaviours, its performance relies on users perc...
In recent years, digital storytelling has demonstrated powerful pedagogical functions by improving creativity, collaboration and intimacy among young children. Saturated with digital media technologies in their daily lives, the young generation demands natural interactive learning environments which offer multiple modalities of feedback and meaningful immersive learning experiences. A virtual puppetry assisted storytelling system for young children, which utilises depth motion sensing technology and gesture control as the Human-Computer Interaction (HCI) method, has been shown to provide a natural interactive learning experience for a single player. In this paper, we designed and developed a novel system that allows multiple players to narrate and, most importantly, to interact with other characters and interactive virtual items in the virtual environment. We have conducted one user experiment with four young children for pedagogical evaluation and another user experiment with five postgradu...
European Heart Journal - Cardiovascular Imaging
Funding Acknowledgements Type of funding sources: Other. Main funding source(s): NIHR i4i funded 3D Heart Project; Wellcome / EPSRC Centre for Medical Engineering (WT 203148/Z/16/Z), on behalf of the 3D Heart Project. Background/Introduction: In echocardiography (echo), image orientation is determined by the position and direction of the transducer during examination, unlike cardiovascular imaging modalities such as CT or MRI. As a result, when echo images are first shown, their display orientation has no external anatomical landmarks, thus the user has to identify anatomical landmarks in the regions of interest to understand the orientation. Purpose To display an anatomical model of a standard heart, automatically aligned to an acquired patient’s 3D echo image – assisting image interpretation by quickly orienting the viewer. Methods 47 echo datasets from 13 pediatric patients with hypoplastic left heart syndrome (HLHS) were annotated by manually indicating the cardiac axes in both ES and E...
Neurocomputing
The rocketing number of photos shared on social media leads to an increasing demand for photo editing. We here focus on a specific scenario – the group dinner photo – and tackle two user-demanded problems: adding a person and replacing the tabletop. The target objects are determined according to the saliency detection results. We developed a novel application to solve these problems. With our system, non-professional users can accomplish semantic editing within a few seconds, including inserting a person and tidying up tabletops. Our system contributes to the state-of-the-art by 1) efficiently selecting the salient area by its semantic meaning, and 2) accurately compositing the salient content with the target image based on contextual knowledge. The context refers to the key factors, including occlusion and artifacts, during the composition. Feedback from users shows that the results of inserting a person look authentic. A comparative study shows that, in the tabletop tidying task, our system can more efficiently produce pictures of comparable quality to those edited with professional editing software.
European Heart Journal - Cardiovascular Imaging
Funding Acknowledgements Type of funding sources: Other. Main funding source(s): NIHR i4i funded 3D Heart project; Wellcome/EPSRC Centre for Medical Engineering [WT 203148/Z/16/Z], on behalf of the 3D Heart Project. Background/Introduction: Virtual Reality (VR) for surgical and interventional planning in the treatment of Congenital Heart Disease (CHD) is an emerging field that has the potential to improve planning. Particularly in very complex cases, VR permits enhanced visualisation of, and more intuitive interaction with, volumetric images, compared to traditional flat-screen visualisation tools. Blood flow is severely affected by CHD and, thus, visualisation of blood flow allows direct observation of the cardiac maladaptations for surgical planning. However, blood flow is fundamentally 3D information, and viewing and interacting with it using conventional 2D displays is suboptimal. Purpose To demonstrate feasibility of blood flow visualisation in VR using pressure and velocity obtained from a compu...
2017 3rd IEEE International Conference on Cybernetics (CYBCONF)
Inputs with multimodal information provide more natural ways to interact with virtual 3D environments. An emerging technique that integrates gaze modulated pointing with mid-air gesture control enables fast target acquisition and rich control expressions. The performance of this technique relies on eye tracking accuracy, which is not yet comparable with traditional pointing techniques (e.g., mouse). This causes trouble when fine-grained interactions are required, such as selecting in a dense virtual scene where proximity and occlusion are prone to occur. This paper proposes a coarse-to-fine solution to compensate for the degradation introduced by eye tracking inaccuracy, using a gaze cone to detect ambiguity and then a gaze probe for decluttering. It was tested in a comparative experiment involving 12 participants with 3240 runs. The results show that the proposed technique enhanced selection accuracy and user experience, though there remains potential to improve its efficiency. This study contributes a robust multimodal interface design supported by both eye tracking and mid-air gesture control.
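The gaze-cone ambiguity test can be sketched geometrically: a target is a selection candidate if it falls within a small cone around the gaze ray, and more than one candidate signals ambiguity. This is an illustrative reconstruction, not the paper's implementation:

```python
import numpy as np

def in_gaze_cone(target, origin, gaze_dir, half_angle_deg: float = 5.0) -> bool:
    """True if a 3D target lies inside a cone of the given half-angle
    around the gaze ray from origin along gaze_dir."""
    v = np.asarray(target, float) - np.asarray(origin, float)
    d = np.asarray(gaze_dir, float)
    cos = v @ d / (np.linalg.norm(v) * np.linalg.norm(d))
    return bool(cos >= np.cos(np.radians(half_angle_deg)))

origin, gaze = [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]
targets = [[0.01, 0, 1], [0.5, 0, 1]]  # near-axis vs. far off-axis
candidates = [t for t in targets if in_gaze_cone(t, origin, gaze)]
print(len(candidates))  # 1
```

With two or more candidates inside the cone, the method would fall back to the finer-grained gaze probe for decluttering.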