Detecting Anatomical Landmarks for Motion Estimation in Weight-Bearing Imaging of Knees
Related papers
Automatic landmark detection and mapping for 2D/3D registration with BoneNet
Frontiers in Veterinary Science
The 3D musculoskeletal motion of animals is of interest for various biological studies and can be derived from X-ray fluoroscopy acquisitions by means of image matching or manual landmark annotation and mapping. While the image-matching method requires a robust similarity measure (intensity-based) or an expensive computation (tomographic reconstruction-based), the manual annotation method depends on the experience of the operators. In this paper, we tackle these challenges with a strategic approach that consists of two building blocks: an automated 3D landmark extraction technique and a deep neural network for 2D landmark detection. For 3D landmark extraction, we propose a technique based on the shortest voxel coordinate variance to extract 3D landmarks from the tomographic reconstruction of an object. For 2D landmark detection, we propose a customized ResNet18-based neural network, BoneNet, to automatically detect geometrical landmarks in X-ray fluoroscopy images. With a deeper n...
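As a rough illustration of the 2D detection building block, the sketch below wires a ResNet18 backbone to a coordinate-regression head in PyTorch. The single-channel input, the number of landmarks, and the sigmoid-normalized output are assumptions for the example and are not taken from the BoneNet paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LandmarkRegressor(nn.Module):
    """ResNet18 backbone with a coordinate-regression head (illustrative only)."""
    def __init__(self, num_landmarks: int = 8):  # number of landmarks is an assumption
        super().__init__()
        self.num_landmarks = num_landmarks
        backbone = resnet18(weights=None)
        # Single-channel input for grayscale X-ray fluoroscopy frames
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, 2 * num_landmarks)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalized (x, y) coordinates in [0, 1], shape (batch, num_landmarks, 2)
        return torch.sigmoid(self.backbone(x)).view(-1, self.num_landmarks, 2)

coords = LandmarkRegressor()(torch.randn(1, 1, 256, 256))  # -> torch.Size([1, 8, 2])
```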
i3PosNet: instrument pose estimation from X-ray in temporal bone surgery
International Journal of Computer Assisted Radiology and Surgery, 2020
Purpose Accurate estimation of the position and orientation (pose) of surgical instruments is crucial for delicate minimally invasive temporal bone surgery. Current techniques either lack accuracy and/or suffer from line-of-sight constraints (conventional tracking systems) or expose the patient to prohibitive ionizing radiation (intra-operative CT). A possible solution is to capture the instrument with a C-arm at irregular intervals and recover the pose from the image. Methods i3PosNet infers the position and orientation of instruments from images using a pose estimation network. The framework considers localized patches and outputs pseudo-landmarks. The pose is reconstructed from the pseudo-landmarks by geometric considerations. Results We show that i3PosNet reaches errors < 0.05 mm. It outperforms conventional image registration-based approaches, reducing average and maximum errors by at least two thirds. i3PosNet trained on synthetic images generalizes to real X-rays without any further a...
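To make the geometric reconstruction step concrete, here is a minimal sketch that recovers a 2D position and an in-plane angle from two pseudo-landmarks (an assumed tip point and an assumed shaft point); the actual pseudo-landmark definitions and the full pose recovery in i3PosNet are more involved.

```python
import numpy as np

def pose_from_pseudo_landmarks(tip_xy: np.ndarray, shaft_xy: np.ndarray):
    """Return (position, angle_deg) from two 2D pseudo-landmarks (illustrative)."""
    direction = shaft_xy - tip_xy
    angle_deg = np.degrees(np.arctan2(direction[1], direction[0]))
    return tip_xy, angle_deg

pos, angle = pose_from_pseudo_landmarks(np.array([120.0, 88.0]),
                                         np.array([150.0, 70.0]))
```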
KNEEL: Knee Anatomical Landmark Localization Using Hourglass Networks
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019
This paper addresses the challenge of localizing anatomical landmarks in knee X-ray images at different stages of osteoarthritis (OA). Landmark localization can be viewed as a regression problem, where the landmark position is predicted directly from the region of interest or even the full-size image, leading to a large memory footprint, especially for high-resolution medical images. In this work, we propose an efficient deep neural network framework with an hourglass architecture that utilizes a soft-argmax layer to directly predict normalized coordinates of the landmark points. We provide an extensive evaluation of different regularization techniques and various loss functions to understand their influence on the localization performance. Furthermore, we introduce the concept of transfer learning from low-budget annotations and experimentally demonstrate that such an approach improves the accuracy of landmark localization. In contrast to prior methods, we validate our model on two datasets that are independent of the training data and assess the performance of the method for different stages of OA severity. The proposed approach demonstrates better generalization performance than the current state-of-the-art.
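The soft-argmax idea mentioned above can be written compactly: a softmax turns each heatmap into a probability map, and the landmark coordinate is its expectation. The sketch below is a generic PyTorch implementation, not the exact layer from the paper.

```python
import torch

def soft_argmax_2d(heatmaps: torch.Tensor) -> torch.Tensor:
    """heatmaps: (batch, n_landmarks, H, W) -> normalized coords, shape (batch, n_landmarks, 2)."""
    b, n, h, w = heatmaps.shape
    probs = torch.softmax(heatmaps.view(b, n, -1), dim=-1).view(b, n, h, w)
    ys = torch.linspace(0, 1, h, device=heatmaps.device)
    xs = torch.linspace(0, 1, w, device=heatmaps.device)
    exp_y = (probs.sum(dim=3) * ys).sum(dim=2)   # expectation over rows
    exp_x = (probs.sum(dim=2) * xs).sum(dim=2)   # expectation over columns
    return torch.stack([exp_x, exp_y], dim=-1)

coords = soft_argmax_2d(torch.randn(2, 16, 64, 64))  # -> (2, 16, 2)
```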
Range Imaging for Motion Compensation in C-Arm Cone-Beam CT of Knees under Weight-Bearing Conditions
Journal of Imaging
C-arm cone-beam computed tomography (CBCT) has recently been used to acquire images of the human knee joint under weight-bearing conditions in order to assess knee joint health under load. However, involuntary patient motion during image acquisition leads to severe motion artifacts in the subsequent reconstructions. The state of the art uses fiducial markers placed on the patient's knee to compensate for the induced motion artifacts. The placement of markers is time consuming, tedious, and requires user experience to guarantee reliable motion estimates. To overcome these drawbacks, we recently investigated whether range imaging would allow us to track, estimate, and compensate for patient motion using a range camera. We argue that the dense surface information observed by the camera could reveal more information than the few surface points of the marker-based method. However, integrating range imaging with CBCT involves design choices, such as where to position the camera and which algorithm to use to align the data. In this work, three-dimensional rigid-body motion is estimated for synthetic data acquired with two different range camera trajectories: a static position on the ground and a dynamic position on the C-arm. Motion estimation is evaluated using two different types of point cloud registration algorithms: a pairwise Iterative Closest Point (ICP) algorithm as well as a probabilistic groupwise method. We compare the reconstruction results and the estimated motion signals with the ground truth and the current reference standard, a marker-based approach. To this end, we qualitatively and quantitatively assess image quality. The latter is evaluated using the Structural Similarity Index (SSIM). We achieved results comparable to the marker-based approach, which highlights the potential of both point set registration methods for accurately recovering patient motion. The SSIM improved from 0.94 to 0.99 and 0.97 using the static and the dynamic camera trajectory, respectively. Accurate recovery of patient motion resulted in a remarkable reduction of motion artifacts in the CBCT reconstructions, which is promising for future work with real data.
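As a minimal sketch of the pairwise registration component, the following NumPy code alternates nearest-neighbour matching with a closed-form rigid fit (Kabsch/SVD); the probabilistic groupwise method and the full motion-estimation pipeline are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src onto dst (points as rows)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(source: np.ndarray, target: np.ndarray, iters: int = 20) -> np.ndarray:
    """Basic pairwise ICP: returns the source cloud aligned to the target cloud."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)            # closest target point for each source point
        R, t = rigid_fit(src, target[idx])  # rigid update from current correspondences
        src = src @ R.T + t
    return src
```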
2017
Recently, C-arm cone-beam CT systems have been used to acquire images of knee joints under weight-bearing conditions. For this purpose, the C-arm acquires images on a horizontal trajectory around the standing patient, who exhibits involuntary motion. The current state-of-the-art reconstruction approach estimates motion based on fiducial markers attached to the knee. A drawback is that this method requires a calibration prior to each scan, since the horizontal trajectory is not reproducible. In this work, we propose a novel method that does not need a calibration scan. For comparison, we extended the state-of-the-art method with an iterative scheme, and we further introduce a closed-form solution for the compensated projection matrices. For evaluation, a numerical phantom and clinical data are used. The novel approach and the extended state-of-the-art method achieve a reduction of the reprojection error of 94% for the phantom data. The improvement for the clinical data ranged between 10% and 80%, which is consistent with the visual impression. Therefore, the novel approach and the extended state-of-the-art method achieve superior results compared to the state-of-the-art method.
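The reprojection error used for evaluation can be illustrated as the mean 2D distance between detected marker positions and 3D marker points projected with a (compensated) 3x4 projection matrix. The sketch below uses assumed variable names, not the paper's notation.

```python
import numpy as np

def reprojection_error(P: np.ndarray, X: np.ndarray, x: np.ndarray) -> float:
    """P: (3, 4) projection matrix, X: (N, 3) 3D marker points, x: (N, 2) detected 2D points."""
    X_h = np.hstack([X, np.ones((X.shape[0], 1))])  # homogeneous 3D coordinates
    proj = (P @ X_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]               # dehomogenize to pixel coordinates
    return float(np.mean(np.linalg.norm(proj - x, axis=1)))
```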
2-STEP Deep Learning Model for Landmarks Localization in Spine Radiographs
Scientific Reports
In this work, we propose to use deep learning to automatically calculate the coordinates of the vertebral corners in sagittal X-ray images of the thoracolumbar spine and, from those landmarks, to calculate relevant radiological parameters such as L1–L5 and L1–S1 lordosis and the sacral slope. For this purpose, we used 10,193 images annotated with the landmark coordinates as the ground truth. We built a model that consists of two steps. In step 1, we trained two convolutional neural networks to identify each vertebra in the image and to calculate the landmark coordinates, respectively. In step 2, we refined the localization using cropped images of a single vertebra as input to another convolutional neural network, and we used geometrical transformations to map the corners back to the original image. For the localization tasks, we used a differentiable spatial to numerical transform (DSNT) as the top layer. We evaluated the model both qualitatively and quantitatively on a set of 195 test images. T...
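A hedged sketch of how a lordosis-style angle could be derived from corner landmarks: take two endplate lines, each defined by a pair of corners, and measure the angle between them. The choice of endplates and the sign convention are assumptions; the paper's exact definitions may differ.

```python
import numpy as np

def endplate_angle(corner_a1, corner_a2, corner_b1, corner_b2) -> float:
    """Angle in degrees between two lines, each defined by two (x, y) corner landmarks."""
    va = np.asarray(corner_a2, float) - np.asarray(corner_a1, float)
    vb = np.asarray(corner_b2, float) - np.asarray(corner_b1, float)
    cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```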
Automatic Plane Adjustment of Orthopedic Intraoperative Flat Panel Detector CT-Volumes
ArXiv, 2020
Flat-panel computed tomography is used intraoperatively to assess the result of surgery. Due to workflow issues, the acquisition typically cannot be carried out in such a way that the axis-aligned multiplanar reconstructions (MPRs) of the volume match the anatomically aligned MPRs. This alignment needs to be performed manually, adding effort when viewing the datasets. A PoseNet convolutional neural network (CNN) is trained to regress the parameters of anatomically aligned MPR planes. Different mathematical approaches to describing plane rotation are compared, and a cost function is optimized to incorporate orientation constraints. The CNN is evaluated on two anatomical regions. For one of these regions, one plane is not orthogonal to the other two planes. The plane's normal can be estimated with a median accuracy of 5°, the in-plane rotation with an accuracy of 6°, and the position with an accuracy of 6 mm. Compared to state-of-the-art algorithms, the labeling eff...
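For the reported accuracies, a natural error measure is the angle between the predicted and ground-truth plane normals. The snippet below is a generic way to compute it, ignoring the sign of the normal; it is not necessarily the paper's exact definition.

```python
import numpy as np

def normal_angle_error(n_pred: np.ndarray, n_gt: np.ndarray) -> float:
    """Angle in degrees between two plane normals (sign of the normal ignored)."""
    n_pred = n_pred / np.linalg.norm(n_pred)
    n_gt = n_gt / np.linalg.norm(n_gt)
    cos = abs(float(np.dot(n_pred, n_gt)))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))
```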
Automatic annotation of hip anatomy in fluoroscopy for robust and efficient 2D/3D registration
International Journal of Computer Assisted Radiology and Surgery, 2020
Purpose Fluoroscopy is the standard imaging modality used to guide hip surgery and is therefore a natural sensor for computer-assisted navigation. In order to efficiently solve the complex registration problems presented during navigation, human-assisted annotations of the intraoperative image are typically required. This manual initialization interferes with the surgical workflow and diminishes any advantages gained from navigation. In this paper, we propose a method for fully automatic registration using anatomical annotations produced by a neural network. Methods Neural networks are trained to simultaneously segment anatomy and identify landmarks in fluoroscopy. Training data are obtained using a computationally intensive, intraoperatively incompatible 2D/3D registration of the pelvis and each femur. Ground-truth 2D segmentation labels and anatomical landmark locations are established using projected 3D annotations. Intraoperative registration couples a traditional intensity...
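The idea of one network that both segments anatomy and identifies landmarks can be sketched as a shared encoder with two heads, one producing per-pixel class scores and one producing per-landmark heatmaps. The layer configuration and the numbers of classes and landmarks below are placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class SegAndLandmarkNet(nn.Module):
    """Shared encoder with a segmentation head and a landmark-heatmap head (illustrative only)."""
    def __init__(self, n_classes: int = 3, n_landmarks: int = 14):  # counts are assumptions
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_classes, 1)    # per-pixel class scores
        self.lmk_head = nn.Conv2d(64, n_landmarks, 1)  # one heatmap per landmark

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.lmk_head(feats)

seg, heatmaps = SegAndLandmarkNet()(torch.randn(1, 1, 128, 128))
```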
Inertial Measurements for Motion Compensation in Weight-Bearing Cone-Beam CT of the Knee
Lecture Notes in Computer Science, 2020
Involuntary motion during weight-bearing cone-beam computed tomography (CT) scans of the knee causes artifacts in the reconstructed volumes, making them unusable for clinical diagnosis. Currently, image-based or marker-based methods are applied to correct for this motion, but they often require long execution or preparation times. We propose to attach an inertial measurement unit (IMU) containing an accelerometer and a gyroscope to the leg of the subject in order to measure the motion during the scan and correct for it. To validate this approach, we present a simulation study using real motion measured with an optical 3D tracking system. With this motion, an XCAT numerical knee phantom is non-rigidly deformed during a simulated CT scan, creating motion-corrupted projections. A biomechanical model is animated with the same tracked motion in order to generate measurements of an IMU placed below the knee. In our proposed multi-stage algorithm, these signals are transformed to the global coordinate system of the CT scan and applied for motion compensation during reconstruction. Our proposed approach can effectively reduce motion artifacts in the reconstructed volumes. Compared to the motion-corrupted case, the average structural similarity index and root mean squared error with respect to the no-motion case improved by 13-21% and 68-70%, respectively. These results are qualitatively and quantitatively on par with a state-of-the-art marker-based method to which we compared our approach. The presented study shows the feasibility of this novel approach and yields promising results towards a purely IMU-based motion compensation in C-arm CT.
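As a rough illustration of the signal transformation step, the sketch below integrates gyroscope rates into an orientation and rotates accelerometer samples into a global frame with gravity removed (a generic strap-down scheme). It is an assumption-laden stand-in, not the paper's multi-stage algorithm.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def to_global_frame(gyro: np.ndarray, acc: np.ndarray, dt: float,
                    r0: R = R.identity()) -> np.ndarray:
    """gyro, acc: (N, 3) body-frame samples; returns (N, 3) gravity-free global accelerations."""
    g = np.array([0.0, 0.0, 9.81])  # assumed gravity direction in the global frame
    r, out = r0, []
    for w, a in zip(gyro, acc):
        r = r * R.from_rotvec(w * dt)  # propagate orientation from angular rate
        out.append(r.apply(a) - g)     # rotate to global frame and remove gravity
    return np.asarray(out)
```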