Jun Lian - Profile on Academia.edu

Papers by Jun Lian

Research paper thumbnail of Estimating the 4D Respiratory Lung Motion by Spatiotemporal Registration and Building Super-Resolution Image

Springer eBooks, 2011

The estimation of lung motion in 4D-CT with respect to the respiratory phase is becoming increasingly important for radiation therapy of lung cancer. A modern CT scanner can only image a limited region of the body at each couch position, so motion artifacts due to the patient's free breathing during the scan are often observable in 4D-CT, which can undermine correspondence detection during registration. Another challenge of motion estimation in 4D-CT is keeping the lung motion consistent over time. Current approaches fail to meet this requirement because they usually register each phase image to a pre-defined phase image independently, without considering the temporal coherence in 4D-CT. To overcome these limitations, we present a unified approach to estimate the respiratory lung motion with two iterative steps. First, we propose a new spatiotemporal registration algorithm to align all phase images of 4D-CT (in low resolution) onto a high-resolution group-mean image in the common space. Temporal consistency is preserved by introducing the concept of temporal fibers for delineating the spatiotemporal behavior of lung motion along the respiratory phase. Second, the idea of super-resolution is utilized to build the group-mean image with more details, by integrating the highly redundant image information contained in the multiple respiratory phases. Accordingly, by establishing the correspondence of each phase image w.r.t. the high-resolution group-mean image, the difficulty of detecting correspondences between original phase images with missing structures is greatly alleviated, and more accurate registration results can be achieved. The performance of our proposed 4D motion estimation method has been extensively evaluated on a public lung dataset. In all experiments, our method achieves more accurate and consistent results in lung motion estimation than all other state-of-the-art approaches.
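
As a rough illustration of the group-mean idea described above, the sketch below iteratively aligns all phase volumes to a running group-mean image and then rebuilds the mean from the aligned phases. A rigid translation estimated by FFT phase correlation stands in for the paper's deformable spatiotemporal registration with temporal fibers, and plain averaging stands in for the sparse-representation super-resolution step; all function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's algorithm): iteratively align all respiratory
# phase volumes to a group-mean image and rebuild the mean from the aligned
# phases. A rigid translation estimated by FFT phase correlation stands in for
# the deformable spatiotemporal registration described in the abstract.
import numpy as np
from scipy.ndimage import shift as nd_shift

def phase_correlation_shift(ref, mov):
    """Estimate the integer translation that best aligns `mov` to `ref`."""
    F = np.fft.fftn(ref) * np.conj(np.fft.fftn(mov))
    corr = np.fft.ifftn(F / (np.abs(F) + 1e-8)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices to signed shifts.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

def build_group_mean(phases, n_iter=3):
    """phases: list of 3D numpy arrays (one per respiratory phase)."""
    aligned = list(phases)
    mean = np.mean(phases, axis=0)
    for _ in range(n_iter):
        aligned = []
        for vol in phases:
            t = phase_correlation_shift(mean, vol)
            aligned.append(nd_shift(vol, t, order=1, mode="nearest"))
        mean = np.mean(aligned, axis=0)  # the paper instead fuses with super-resolution
    return mean, aligned

# Toy usage with random volumes standing in for 4D-CT phases.
rng = np.random.default_rng(0)
phases = [rng.normal(size=(16, 32, 32)) for _ in range(6)]
group_mean, aligned = build_group_mean(phases)
print(group_mean.shape)
```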

Research paper thumbnail of Dual Shape Guided Segmentation Network for Organs-at-Risk in Head and Neck CT Images

arXiv (Cornell University), Oct 23, 2021

Accurate segmentation of organs-at-risk (OARs) in head and neck CT images is a critical step in radiation therapy for head and neck cancer patients. However, manual delineation of the numerous OARs is time-consuming and laborious, even for expert oncologists, and the results are susceptible to high intra- and inter-observer variability. To this end, we propose a novel dual shape guided network (DSGnet) to automatically delineate nine important OARs in head and neck CT images. To deal with the large shape variation and unclear boundaries of OARs in CT images, we represent the organ shape using an organ-specific unilateral inverse-distance map (UIDM) and guide the segmentation task from two different perspectives: direct shape guidance, by following the segmentation prediction, and across shape guidance, by sharing the segmentation features. In the direct shape guidance, the segmentation prediction is supervised not only by the true label mask but also by the true UIDM, implemented through a simple yet effective encoder-decoder mapping from the label space to the distance space. In the across shape guidance, the UIDM is used to facilitate the segmentation by optimizing the shared feature maps. For the experiments, we built a large head and neck CT dataset with a total of 699 images from different volunteers and conducted comprehensive experiments and comparisons with other state-of-the-art methods to justify the effectiveness and efficiency of the proposed method. The overall Dice Similarity Coefficient (DSC) of 0.842 across the nine OARs demonstrates great potential for improving delineation quality and reducing the time cost.
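
To make the shape representation concrete, here is a hedged sketch of one plausible way to build a unilateral inverse-distance map from a binary organ mask using a Euclidean distance transform. The paper's exact UIDM definition (which side of the boundary, normalization) may differ, so the construction below is an assumption.

```python
# Minimal sketch (one plausible construction, not necessarily the paper's exact
# UIDM definition): compute a distance map only on one side of the organ
# boundary (here, inside the organ) and invert it so values are largest at the
# boundary and fall off toward the interior.
import numpy as np
from scipy.ndimage import distance_transform_edt

def unilateral_inverse_distance_map(mask: np.ndarray) -> np.ndarray:
    """mask: binary organ mask (3D numpy array). Returns a float map that is
    zero outside the organ and decays from 1 at the boundary into the organ."""
    mask = mask.astype(bool)
    # Distance from each inside voxel to the nearest background voxel (>= 1).
    inside_dist = distance_transform_edt(mask)
    uidm = np.zeros_like(inside_dist)
    uidm[mask] = 1.0 / inside_dist[mask]  # 1 at boundary voxels
    return uidm

# Toy usage: a spherical "organ".
zz, yy, xx = np.mgrid[:32, :32, :32]
organ = ((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2) < 10 ** 2
print(unilateral_inverse_distance_map(organ).max())  # 1.0 at the boundary
```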

Research paper thumbnail of Asymmetrical Multi-task Attention U-Net for the Segmentation of Prostate Bed in CT Image

Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 2020

Segmentation of the prostate bed, the residual tissue after removal of the prostate gland, is an essential prerequisite for post-prostatectomy radiotherapy, but it is also a challenging task due to its non-contrast boundaries and highly variable shapes that depend on neighboring organs. In this work, we propose a novel deep learning-based method to automatically segment this "invisible target". The main idea of our design is to draw reference from the surrounding normal structures (bladder and rectum) and take advantage of this information to facilitate prostate bed segmentation. To achieve this goal, we first use a U-Net as the backbone network to perform bladder and rectum segmentation, which serves as a low-level task that can provide references to the high-level task of prostate bed segmentation. On top of the backbone network, we build a novel attention network with a series of cascaded attention modules to further extract discriminative features for the high-level prostate bed segmentation task. Since the attention network has a one-sided dependency on the backbone network, mirroring the clinical workflow of using normal structures to guide delineation of the radiotherapy target, we name the final composite model the asymmetrical multi-task attention U-Net. Extensive experiments on a clinical dataset consisting of 186 CT images demonstrate the effectiveness of this new design and the superior performance of the model in comparison to conventional atlas-based methods for prostate bed segmentation. The source code is publicly available at .
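
The sketch below illustrates the kind of one-sided feature sharing described above: features from the bladder-and-rectum branch gate the prostate-bed branch through a simple attention module. The module, channel sizes, and gating form are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (simplified, not the authors' full architecture): features from
# a backbone branch that segments the bladder and rectum are used to gate the
# features of a second branch that segments the prostate bed.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Gates target-branch features with a spatial attention map derived from
    the reference (bladder/rectum) branch features."""
    def __init__(self, ref_ch: int, tgt_ch: int):
        super().__init__()
        self.to_attn = nn.Sequential(
            nn.Conv3d(ref_ch, tgt_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, ref_feat, tgt_feat):
        return tgt_feat * self.to_attn(ref_feat)

# Toy usage with random feature maps.
ref = torch.randn(1, 32, 8, 16, 16)   # backbone (bladder & rectum) features
tgt = torch.randn(1, 16, 8, 16, 16)   # prostate-bed branch features
gated = AttentionGate(ref_ch=32, tgt_ch=16)(ref, tgt)
print(gated.shape)  # torch.Size([1, 16, 8, 16, 16])
```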

Research paper thumbnail of Iterative Label Denoising Network: Segmenting Male Pelvic Organs in CT From 3D Bounding Box Annotations

IEEE Transactions on Biomedical Engineering, 2020

Obtaining accurate segmentation of the prostate and nearby organs at risk (e.g., bladder and rectum) in CT images is critical for radiotherapy of prostate cancer. Currently, the leading automatic segmentation algorithms are based on Fully Convolutional Networks (FCNs), which achieve remarkable performance but usually need large-scale datasets with high-quality voxel-wise annotations to fully supervise the training. Unfortunately, such annotations are difficult to acquire, which becomes a bottleneck for building accurate segmentation models in real clinical applications. In this paper, we propose a novel weakly supervised segmentation approach that only needs 3D bounding box annotations covering the organs of interest to start the training. Obviously, the bounding box includes many non-organ voxels that carry noisy labels to mislead the
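
A hedged sketch of the weak-supervision starting point described above: turning 3D bounding boxes into an initial, noisy label volume (confident background outside all boxes, tentative organ labels inside). The iterative label denoising network itself is not reproduced here; the organ ids and box handling are assumptions.

```python
# Minimal sketch (an assumption about the general setup, not the paper's
# iterative denoising network): build an initial, noisy label volume from 3D
# bounding boxes. Voxels outside every box are treated as confident background;
# voxels inside a box receive the (noisy) label of that organ.
import numpy as np

def labels_from_boxes(shape, boxes):
    """shape: (Z, Y, X); boxes: dict organ_id -> (z0, z1, y0, y1, x0, x1).
    Returns an int volume: 0 = background, organ_id = tentative foreground.
    Overlapping boxes are resolved by the last write (a simplification)."""
    labels = np.zeros(shape, dtype=np.int32)
    for organ_id, (z0, z1, y0, y1, x0, x1) in boxes.items():
        labels[z0:z1, y0:y1, x0:x1] = organ_id
    return labels

# Toy usage: prostate (1) and bladder (2) boxes in a small volume.
vol_shape = (32, 64, 64)
boxes = {1: (10, 20, 20, 40, 20, 40), 2: (5, 15, 10, 30, 10, 30)}
noisy_labels = labels_from_boxes(vol_shape, boxes)
print(np.unique(noisy_labels))  # [0 1 2]
```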

Research paper thumbnail of STRAINet: Spatially Varying sTochastic Residual AdversarIal Networks for MRI Pelvic Organ Segmentation

IEEE Transactions on Neural Networks and Learning Systems, Jan 9, 2018

Accurate segmentation of pelvic organs is important for prostate radiation therapy. Modern radiation therapy has started to use magnetic resonance imaging (MRI) as an alternative to computed tomography because of its superior soft-tissue contrast and freedom from radiation exposure. However, segmentation of pelvic organs from MRI is a challenging problem due to inconsistent organ appearance across patients and large intra-patient anatomical variations across treatment days. To address such challenges, we propose a novel deep network architecture, called the Spatially varying sTochastic Residual AdversarIal Network (STRAINet), to delineate pelvic organs from MRI in an end-to-end fashion. Compared to traditional fully convolutional networks (FCNs), the proposed architecture has two main contributions: 1) inspired by the recent success of residual learning, we propose an evolutionary version of the residual unit, i.e., the stochastic residual unit, and use it to t...
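
The sketch below shows one way a "stochastic residual unit" could look, assuming a mechanism in the spirit of stochastic depth where the transform branch is randomly skipped during training; the paper's exact formulation (spatially varying behavior, normalization choices) may differ.

```python
# Minimal sketch (an assumption: a residual unit whose transform branch is
# randomly dropped during training, similar to stochastic depth; the paper's
# stochastic residual unit may differ in detail).
import torch
import torch.nn as nn

class StochasticResidualUnit(nn.Module):
    def __init__(self, channels: int, survival_prob: float = 0.8):
        super().__init__()
        self.survival_prob = survival_prob
        self.transform = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        if self.training and torch.rand(()) > self.survival_prob:
            return x                           # drop the residual branch
        out = self.transform(x)
        if not self.training:
            out = out * self.survival_prob     # expectation matching at test time
        return x + out

# Toy usage.
unit = StochasticResidualUnit(16).eval()
print(unit(torch.randn(1, 16, 8, 16, 16)).shape)
```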

Research paper thumbnail of Joint Learning of Image Regressor and Classifier for Deformable Segmentation of CT Pelvic Organs

Lecture Notes in Computer Science, 2015

The segmentation of pelvic organs from CT images is an essential step for prostate radiation therapy. However, due to low tissue contrast and large anatomical variations, it is still challenging to accurately segment these organs from CT images. Among the various existing methods, deformable models are popular because it is easy to incorporate shape priors to regularize the final segmentation. Despite this advantage, sensitivity to initialization is often a weakness of deformable models. In this paper, we propose a novel way to guide deformable segmentation that greatly alleviates the problems caused by poor initialization. Specifically, a random forest is adopted to jointly learn an image regressor and a classifier for each organ. The image regressor predicts the 3D displacement from any image voxel to the organ boundary based on the local appearance of that voxel. It is used as an external force to drive each vertex of the deformable model (3D mesh) towards the target organ boundary. Once the deformable model is close to the boundary, the organ likelihood map provided by the learned classifier is used to further refine the segmentation. In the experiments, we applied our method to segmenting the prostate, bladder, and rectum from planning CT images. Experimental results show that our method achieves competitive performance over existing methods, even with very rough initialization.
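
A minimal, hedged sketch of the joint learning idea on synthetic data: a random forest regressor maps local patch features to the 3D displacement toward the organ boundary, while a random forest classifier provides an organ likelihood. Feature extraction and the deformable-model update driven by these outputs are outside this sketch.

```python
# Minimal sketch (synthetic data, illustrative only): a random forest regressor
# predicts the 3D displacement from a voxel to the organ boundary from local
# patch features, and a random forest classifier predicts the organ likelihood.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n_voxels, n_features = 2000, 24
X = rng.normal(size=(n_voxels, n_features))        # per-voxel patch features
disp = rng.normal(scale=5.0, size=(n_voxels, 3))   # displacement to boundary (mm)
inside = (rng.random(n_voxels) > 0.5).astype(int)  # organ / background label

regressor = RandomForestRegressor(n_estimators=50).fit(X, disp)
classifier = RandomForestClassifier(n_estimators=50).fit(X, inside)

x_new = rng.normal(size=(1, n_features))
print("predicted displacement:", regressor.predict(x_new)[0])
print("organ likelihood:", classifier.predict_proba(x_new)[0, 1])
```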

Research paper thumbnail of SU-E-T-592: Comparison of Low Dose Volume and Integral Dose in Rotational Arc Radiation Therapy Modalities

Medical Physics, 2012

Innovation: Helical tomotherapy (HT) is known to generate a larger low-dose volume and higher integral dose than fixed-gantry IMRT with a conventional number of beams. It is unclear whether this is still true, and to what magnitude, when comparing tomotherapy, especially thin 1 cm jaw treatment plans, with other volumetric modulated arc therapy (VMAT) modalities.
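
For reference, the two quantities being compared can be computed from a dose grid as below; the exact conventions (unit-density integral dose in Gy·L, V5 as the low-dose volume) are common definitions assumed here rather than stated in the abstract.

```python
# Minimal sketch (illustrative metric definitions, assumed here): compute the
# integral dose (mean dose times tissue volume, in Gy·L, assuming unit density)
# and a low-dose volume such as V5 (volume receiving >= 5 Gy) from a dose grid
# and a body mask.
import numpy as np

def integral_dose_gy_l(dose, body_mask, voxel_volume_cc):
    """dose: Gy per voxel; body_mask: boolean; voxel_volume_cc: cm^3 per voxel."""
    return dose[body_mask].sum() * voxel_volume_cc / 1000.0   # Gy * L

def low_dose_volume_cc(dose, body_mask, threshold_gy, voxel_volume_cc):
    return np.count_nonzero((dose >= threshold_gy) & body_mask) * voxel_volume_cc

# Toy usage with a random dose grid.
rng = np.random.default_rng(0)
dose = rng.uniform(0, 60, size=(50, 64, 64))
body = np.ones_like(dose, dtype=bool)
vox_cc = 0.2 * 0.2 * 0.25  # 2 x 2 x 2.5 mm voxels in cm^3
print(integral_dose_gy_l(dose, body, vox_cc))
print(low_dose_volume_cc(dose, body, threshold_gy=5.0, voxel_volume_cc=vox_cc))
```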

Research paper thumbnail of Estimating the 4D respiratory lung motion by spatiotemporal registration and super-resolution image reconstruction

Carolina Digital Repository (University of North Carolina at Chapel Hill), 2013

One of the main challenges in lung cancer radiation therapy is how to reduce the treatment margin while accommodating the geometric uncertainty of the moving tumor. 4D-CT is able to provide the full range of motion information for the lung and tumor. However, accurate estimation of lung motion with respect to the respiratory phase is difficult due to various challenges in image registration, e.g., motion artifacts and large inter-slice thickness in 4D-CT. Meanwhile, temporal coherence across respiratory phases is usually not guaranteed by conventional registration methods, which consider each phase image in 4D-CT independently. To address these challenges, the authors present a unified approach to estimate the respiratory lung motion with two iterative steps. Methods: First, the authors propose a novel spatiotemporal registration algorithm to align all phase images of 4D-CT (in low resolution) to a high-resolution group-mean image in the common space. The temporal coherence of registration is maintained by a set of temporal fibers that delineate temporal correspondences across different respiratory phases. Second, a super-resolution technique is utilized to build the high-resolution group-mean image with more anatomical details than any individual phase image, thus largely alleviating the registration uncertainty, especially in correspondence detection. In particular, the authors use the concept of sparse representation to keep the group-mean image as sharp as possible. Results: The performance of our 4D motion estimation method has been extensively evaluated on both simulated datasets and real lung 4D-CT datasets. In all experiments, our method achieves more accurate and consistent results in lung motion estimation than all other state-of-the-art approaches under comparison. Conclusions: The authors have proposed a novel spatiotemporal registration method to estimate the lung motion in 4D-CT. Promising results have been obtained, which indicates the high applicability of our method in clinical lung cancer radiation therapy.

Research paper thumbnail of Improving image-guided radiation therapy of lung cancer by reconstructing 4D-CT from a single free-breathing 3D-CT on the treatment day: Treatment day 4D-CT reconstruction

Carolina Digital Repository (University of North Carolina at Chapel Hill), 2012

Purpose: One of the major challenges of lung cancer radiation therapy is how to reduce the margin of the treatment field while also managing geometric uncertainty from respiratory motion. To this end, 4D-CT imaging has been widely used for treatment planning by providing the full range of respiratory motion for both the tumor and normal structures. However, due to the considerable radiation dose and limits on resources and time, typically only a free-breathing 3D-CT image is acquired on the treatment day for image-guided patient setup, which is often determined by fusing the free-breathing treatment-day and planning-day 3D-CT images. Since individual slices of the two free-breathing 3D-CTs may be acquired at different phases, the two 3D-CTs often look different, which makes the image registration very challenging. This uncertainty in pretreatment patient setup requires a generous margin of the radiation field in order to cover the tumor sufficiently during treatment. To solve this problem, our main idea is to reconstruct the 4D-CT (with the full range of tumor motion) from a single free-breathing 3D-CT acquired on the treatment day. Methods: We first build a super-resolution 4D-CT model from a low-resolution 4D-CT on the planning day, with the temporal correspondences also established across respiratory phases. Next, we propose a 4D-to-3D image registration method to warp the 4D-CT model to the treatment-day 3D-CT while also accommodating the new motion detected on the treatment-day 3D-CT. In this way, we can more precisely localize the moving tumor on the treatment day. Specifically, since the free-breathing 3D-CT is actually a mixed-phase image in which different slices are often acquired at different respiratory phases, we first determine the optimal phase for each local image patch in the free-breathing 3D-CT to obtain a sequence of partial 3D-CT images (with incomplete image data at each phase) for the treatment day. Then we reconstruct a new 4D-CT for the treatment day by registering the 4D-CT of the planning day (with complete information) to the sequence of partial 3D-CT images of the treatment day, under the guidance of the 4D-CT model built on the planning day. Results: We first evaluated the accuracy of our 4D-CT model on a set of lung 4D-CT images with manually labeled landmarks, where the maximum error in respiratory motion estimation was reduced from 6.08 mm by diffeomorphic Demons to 3.67 mm by our method. Next, we evaluated our proposed 4D-CT reconstruction algorithm on both simulated and real free-breathing images. The reconstructed 4D-CT using our algorithm shows clinically acceptable accuracy and could be used to guide more accurate patient setup than the conventional method. Conclusions: We have proposed a novel two-step method to reconstruct a new 4D-CT from a single free-breathing 3D-CT on the treatment day. Promising reconstruction results imply the possible application of this new algorithm in image-guided radiation therapy of lung cancer.
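
A hedged sketch of the patch-wise phase selection step described above: for each local patch of the free-breathing 3D-CT, choose the planning-day phase whose co-located patch is most similar under normalized cross-correlation. The patch size, the similarity measure, and the non-overlapping tiling are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's full 4D-to-3D registration):
# for each local patch of a free-breathing 3D-CT, pick the planning-day 4D-CT
# phase whose co-located patch has the highest normalized cross-correlation.
import numpy as np

def ncc(a, b, eps=1e-8):
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float((a * b).mean())

def best_phase_per_patch(free_breathing, phases, patch=16):
    """free_breathing: 3D array; phases: list of 3D arrays (same shape).
    Returns an integer map of the selected phase index per patch."""
    Z, Y, X = free_breathing.shape
    out = np.zeros((Z // patch, Y // patch, X // patch), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                sl = (slice(i * patch, (i + 1) * patch),
                      slice(j * patch, (j + 1) * patch),
                      slice(k * patch, (k + 1) * patch))
                scores = [ncc(free_breathing[sl], ph[sl]) for ph in phases]
                out[i, j, k] = int(np.argmax(scores))
    return out

# Toy usage.
rng = np.random.default_rng(0)
phases = [rng.normal(size=(32, 64, 64)) for _ in range(6)]
fb = phases[2] + 0.1 * rng.normal(size=(32, 64, 64))  # closest to phase 2
print(np.bincount(best_phase_per_patch(fb, phases).ravel()))
```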

Research paper thumbnail of Boundary Coding Representation for Organ Segmentation in Prostate Cancer Radiotherapy

IEEE Transactions on Medical Imaging, 2021

Accurate segmentation of the prostate and organs at risk (OARs, e.g., bladder and rectum) in male pelvic CT images is a critical step for prostate cancer radiotherapy. Unfortunately, unclear organ boundaries and large shape variations make the segmentation task very challenging. Previous studies usually used representations defined directly on the unclear boundaries as context information to guide segmentation. Such boundary representations may not be very discriminative, resulting in limited performance improvement. To this end, we propose a novel boundary coding network (BCnet) to learn a discriminative representation of the organ boundary and use it as context information to guide the segmentation. Specifically, we design a two-stage learning strategy in the proposed BCnet: 1) Boundary coding representation learning. Two sub-networks, supervised by the dilation and erosion masks transformed from the manually delineated organ mask, are first trained separately to learn the spatial-semantic context near the organ boundary. We then encode the organ boundary based on the predictions of these two sub-networks and design a multi-atlas based refinement strategy that transfers knowledge from the training data to inference. 2) Organ segmentation. The boundary coding representation, as context information in addition to the image patches, is used to train the final segmentation network. Experimental results on a large and diverse male pelvic CT dataset show that our method achieves superior performance compared with several state-of-the-art methods.
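
The supervision targets of the first stage can be sketched directly from a delineated organ mask, assuming plain morphological dilation and erosion with a fixed margin (the margin and structuring element are assumptions):

```python
# Minimal sketch (an assumption about the supervision targets, not the full
# BCnet pipeline): derive dilation and erosion masks from a delineated organ
# mask; their difference is a band that brackets the organ boundary.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_band(organ_mask, margin_voxels=3):
    organ_mask = organ_mask.astype(bool)
    dilated = binary_dilation(organ_mask, iterations=margin_voxels)
    eroded = binary_erosion(organ_mask, iterations=margin_voxels)
    return dilated, eroded, dilated & ~eroded   # band containing the boundary

# Toy usage: a spherical "organ".
zz, yy, xx = np.mgrid[:32, :32, :32]
organ = ((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2) < 10 ** 2
dil, ero, band = boundary_band(organ)
print(organ.sum(), dil.sum(), ero.sum(), band.sum())
```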

Research paper thumbnail of Compensation of intrafractional motion for lung stereotactic body radiotherapy (SBRT) on helical TomoTherapy

Biomedical Physics & Engineering Express, 2019

Helical TomoTherapy has unique challenges in handling intrafractional motion compared to a conventional LINAC. In this study, we analyzed the impact of intrafractional motion on cumulative dosimetry using actual patient motion data from clinically treated patients, and investigated real-time jaw and multileaf collimator (MLC) compensation approaches to minimize the motion-induced dose discrepancy in clinically acceptable TomoTherapy lung SBRT treatments. Intrafractional motion traces from eight fiducial-tracking CyberKnife lung tumor treatment cases were used in this study. These cases were re-planned on TomoTherapy for SBRT, with 18 Gy × 3 fractions to a planning target volume (PTV) defined on the breath-hold CT without ITV expansion. Each case was planned with four different jaw settings: 1 cm static, 2.5 cm static, 2.5 cm dynamic, and 5 cm dynamic. In-house 4D dose accumulation software was used to compute the dose distributions with tumor motion and then compensate for that motion by modifying the original jaw and MLC positions to track the trajectory of the tumor. The impact of motion and the effectiveness of compensation on PTV coverage depend on the motion type and plan settings. On average, the PTV V100% (the percent volume of the PTV receiving the prescription dose) accumulated from three fractions changed from 96.6% (motion-free) to 83.1% (motion-included), 97.5% to 93.0%, 97.7% to 92.1%, and 98.1% to 93.7% for the 1 cm static jaw, 2.5 cm static jaw, 2.5 cm dynamic jaw, and 5 cm dynamic jaw settings, respectively. When the jaw and MLC compensation algorithm was engaged, the PTV V100% was restored to 92.2%, 95.9%, 96.6%, and 96.4% for the four jaw settings, respectively. TomoTherapy lung tumor SBRT treatments using a field width of 2.5 cm or larger are less sensitive to motion than treatments using a 1 cm field width. For 1 cm field width plans, PTV coverage can be greatly...
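
The coverage metric quoted above is straightforward to compute from an accumulated dose grid and the PTV mask; the sketch below uses the standard V100% definition (percentage of PTV voxels receiving at least the prescription dose).

```python
# Minimal sketch (standard DVH-style metric, assumed here for illustration):
# compute the PTV V100%, i.e. the percentage of PTV voxels receiving at least
# the prescription dose, from an accumulated dose grid and a PTV mask.
import numpy as np

def ptv_v100(dose, ptv_mask, prescription_gy):
    ptv_dose = dose[ptv_mask.astype(bool)]
    return 100.0 * np.count_nonzero(ptv_dose >= prescription_gy) / ptv_dose.size

# Toy usage: 18 Gy x 3 fractions = 54 Gy prescription.
rng = np.random.default_rng(0)
dose = rng.normal(loc=55.0, scale=2.0, size=(40, 64, 64))
ptv = np.zeros_like(dose, dtype=bool)
ptv[15:25, 25:40, 25:40] = True
print(f"V100% = {ptv_v100(dose, ptv, prescription_gy=54.0):.1f}%")
```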

Research paper thumbnail of Reconstructing Tissue Properties From Medical Images With Application in Cancer Screening

IEEE Transactions on Medical Robotics and Bionics, 2019

In this paper, we describe a method for recovering tissue properties directly from medical images and study the correlation of tissue (i.e., prostate) elasticity with the aggressiveness of prostate cancer using medical image analysis. Methods: We present a novel method that uses geometric and physical constraints to deduce the relative tissue elasticity parameters. Although elasticity reconstruction, or elastography, can be used to estimate tissue elasticity, it is less suited for in vivo measurements or deeply seated organs like the prostate. We develop a method to estimate tissue elasticity values based on pairs of images, using a finite-element-based biomechanical model derived from an initial set of images, local displacements, and an optimization-based framework. Results: We demonstrate the feasibility of a statistically based classifier that automatically provides a clinical T-stage and Gleason score based on the elasticity values reconstructed from computed tomography (CT) images. Conclusions: We study the relative elasticity parameters by performing cancer grading/staging prediction and achieve up to 85% accuracy for cancer staging prediction and up to 77% accuracy for cancer grading prediction using a feature set that includes the recovered relative elasticity parameters and patient age information.
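
The optimization-based framework can be caricatured with a toy forward model: recover a relative stiffness by minimizing the mismatch between observed and model-predicted displacements. The two-spring forward model below is purely illustrative; the paper instead uses a finite-element biomechanical model built from segmented images.

```python
# Minimal sketch (a toy stand-in, not the paper's finite-element model): recover
# a relative stiffness parameter by minimizing the mismatch between observed
# displacements and those predicted by a simple forward model (two springs in
# series under a unit load).
import numpy as np
from scipy.optimize import minimize_scalar

def forward_displacement(k_rel, k_ref=1.0, load=1.0):
    """Total displacement of a reference tissue (stiffness k_ref) in series with
    a target tissue whose stiffness is k_rel * k_ref."""
    return load / k_ref + load / (k_rel * k_ref)

# "Observed" displacement generated with a ground-truth relative stiffness of 3.
observed = forward_displacement(3.0)

res = minimize_scalar(
    lambda k: (forward_displacement(k) - observed) ** 2,
    bounds=(0.1, 10.0), method="bounded",
)
print(f"recovered relative stiffness: {res.x:.2f}")  # ~3.00
```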

Research paper thumbnail of Medical Image Synthesis with Deep Convolutional Adversarial Networks

IEEE Transactions on Biomedical Engineering, Jan 9, 2018

Medical imaging plays a critical role in various clinical applications. However, due to multiple considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Thus, medical image synthesis can be of great benefit by estimating a desired imaging modality without incurring an actual scan. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model the nonlinear mapping from source to target and to produce more realistic target images, we use an adversarial learning strategy to train the FCN. Moreover, the FCN incorporates an image-gradient-difference-based loss function to avoid generating blurry target images. A long-term residual unit is also explored to help the training of the network. We further apply the Auto-Context Model (ACM) to implement a context...
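
The image-gradient-difference loss mentioned above is commonly formulated as a penalty on the difference between finite-difference gradients of the synthesized and real images; the sketch below uses that common formulation, which may differ in detail from the paper's.

```python
# Minimal sketch (a common formulation of an image-gradient-difference loss,
# assumed here): penalize differences between the finite-difference gradients
# of the synthesized and real volumes.
import torch

def gradient_difference_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (N, C, D, H, W) volumes."""
    loss = 0.0
    for dim in (2, 3, 4):                        # spatial axes
        g_pred = torch.diff(pred, dim=dim).abs()
        g_tgt = torch.diff(target, dim=dim).abs()
        loss = loss + (g_pred - g_tgt).pow(2).mean()
    return loss

# Toy usage.
pred = torch.randn(2, 1, 8, 32, 32, requires_grad=True)
target = torch.randn(2, 1, 8, 32, 32)
print(gradient_difference_loss(pred, target))
```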

Research paper thumbnail of Medical Image Synthesis with Context-Aware Generative Adversarial Networks

Medical Image Computing and Computer-Assisted Intervention – MICCAI 2017, 2017

Computed tomography (CT) is critical for various clinical applications, e.g., radiation treatment planning and PET attenuation correction in MRI/PET scanners. However, CT exposes the patient to radiation during acquisition, which may cause side effects. Compared to CT, magnetic resonance imaging (MRI) is much safer and does not involve radiation. Therefore, researchers have recently been motivated to estimate a CT image from the corresponding MR image of the same subject for radiation planning. In this paper, we propose a data-driven approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate CT given the MR image. To better model the nonlinear mapping from MRI to CT and produce more realistic images, we propose to use an adversarial training strategy to train the FCN. Moreover, we propose an image-gradient-difference-based loss function to alleviate the blurriness of the generated CT. We further apply Auto-Cont...
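
Beyond the gradient loss, the adversarial part of such a pipeline reduces to alternating generator and discriminator updates. The sketch below shows one generic training step for MR-to-CT synthesis; the tiny networks, loss weights, and patch sizes are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (a generic adversarial training step for cross-modality
# synthesis; networks and weights are illustrative assumptions). A generator
# maps an MR patch to a CT patch; a discriminator distinguishes real from
# synthesized CT patches.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 1, 3, padding=1))
disc = nn.Sequential(nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                     nn.Conv3d(16, 1, 3, stride=2, padding=1))
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

mr, ct = torch.randn(2, 1, 16, 32, 32), torch.randn(2, 1, 16, 32, 32)

# Discriminator step: real CT -> 1, synthesized CT -> 0.
fake_ct = gen(mr)
d_real, d_fake = disc(ct), disc(fake_ct.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the real CT.
d_fake = disc(fake_ct)
loss_g = l1(fake_ct, ct) + 0.1 * bce(d_fake, torch.ones_like(d_fake))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(loss_d.item(), loss_g.item())
```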

Research paper thumbnail of Comparison of Tumor Volume Delineation on Magnetic Resonance/Positron Emission Tomography Versus Standard Computed Tomography for Head and Neck Cancer: Is There Added Value?

International Journal of Radiation Oncology*Biology*Physics, 2016

The objective of this study is to evaluate weekly primary tumor regression rates (PTRR) and nodal tumor regression rates (NTRR) of head and neck cancers (HNC) during radiation therapy (RT) as a prognostic indicator of oncologic outcomes and survival. Image guided radiation therapy (IGRT), specifically computed tomography (CT)-on-Rails (CToR), increases the accuracy of daily RT and additionally affords the opportunity for intratreatment response evaluation. Materials/Methods: A single-institution retrospective review from 2008 to 2013 was completed for patients with HNC who received RT with CToR. Forty-three patients with 70 measurable targets (43 primary lesions and 27 metastatic lymph nodes) met the inclusion criteria. Patients without radiographically evident primary tumors and those with surgical intervention prior to RT were excluded. Results: The analysis included 43 patients with a median age of 56 years (range 21-78); 91% were male. The majority of patients were diagnosed with oropharynx cancers (63%), followed by nasopharynx (26%) and sinonasal (11%) cancers. Fifty-eight percent of patients received definitive chemoradiation, and 26% received induction followed by chemoradiation. The mean primary and nodal gross tumor volume (GTV) pre-RT was 38.5 mL (standard deviation [SD] 34.9) and 13.6 mL (SD 10.3), respectively. PTRR ≥25% at fraction 15 (n=26) was associated with superior 5-year local control (LC; 100% vs 70%, P=.003), relapse-free survival (RFS; 84% vs 45%, P=.007), and overall survival (OS; 88% vs 50%, P=.005). On both univariate and multivariate analysis, PTRR <25% was associated with an increased hazard of local failure compared to ≥25% (P=.001 for both). There was not a strong independent correlation of NTRR with regional control, distant control, or RFS. Further characterization of tracked lymph nodes revealed that 70% (n=19) were purely cystic or had a cystic component. Many of the highly cystic nodes demonstrated an initial increase in size, likely secondary to RT-induced inflammation, before later regression. Conclusion: CToR enables accurate GTV tracking during treatment, which carries prognostic value. PTRR ≥25% at midtreatment can be used as an indicator for LC, RFS, and OS. NTRR does not appear to carry the same prognostic value as PTRR due to variations in nodal architecture, an initial paradoxical size increase (especially in cystic nodes), and treatment response often continuing up to 4 months post-RT. The clinical implication of TRR appears to be of most value when tracking the primary lesion; specifically, if PTRR is <25%, treatment plan modification may be warranted given a higher likelihood of treatment failure.
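
The regression-rate cutoff reported above is simple to apply in practice; the snippet below uses the standard percent-volume-reduction definition of PTRR (assumed here) and stratifies at 25%. The mid-treatment volume in the toy usage is made up for illustration.

```python
# Minimal sketch (standard regression-rate definition assumed here): compute the
# primary tumor regression rate at mid-treatment from the pre-RT and
# fraction-15 gross tumor volumes, and stratify at the 25% cutoff.
def tumor_regression_rate(volume_pre_ml: float, volume_mid_ml: float) -> float:
    """Percent volume reduction relative to the pre-treatment volume."""
    return 100.0 * (volume_pre_ml - volume_mid_ml) / volume_pre_ml

ptrr = tumor_regression_rate(volume_pre_ml=38.5, volume_mid_ml=26.0)  # toy numbers
group = "favorable (PTRR >= 25%)" if ptrr >= 25.0 else "unfavorable (PTRR < 25%)"
print(f"PTRR = {ptrr:.1f}% -> {group}")
```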

Research paper thumbnail of Evaluation of PET/MRI for Tumor Volume Delineation for Head and Neck Cancer

Frontiers in Oncology, 2017

Computed tomography (CT), combined positron emission tomography and CT (PET/CT), and magnetic resonance imaging (MRI) are commonly used in head and neck radiation planning. Hybrid PET/MRI has garnered attention for its potential added value in cancer staging and treatment planning. Herein, we compare PET/MRI vs. planning CT for head and neck cancer gross tumor volume (GTV) delineation. We prospectively enrolled patients with head and neck cancer treated with definitive chemoradiation to 60-70 Gy using IMRT. We performed pretreatment contrast-enhanced planning CT and gadolinium-enhanced PET/MRI. Primary and nodal volumes were delineated on planning CT (GTV-CT) prospectively before treatment and on PET/MRI (GTV-PET/MRI) retrospectively after treatment. GTV-PET/MRI was compared to GTV-CT using separate rigid registrations for each tumor volume. The Dice similarity coefficient (DSC) metric evaluating spatial overlap and modified Hausdorff distance (mHD) evaluating mean orthogonal distance diffe...
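
The two comparison metrics can be computed from binary delineations as below; the DSC definition is standard, and the modified Hausdorff distance is taken here as the mean surface-to-surface distance averaged over both directions, which is one common variant and an assumption about the paper's exact choice.

```python
# Minimal sketch (standard/common definitions assumed here): Dice similarity
# coefficient between two binary volumes, and a modified Hausdorff distance
# taken as the mean surface-to-surface distance averaged over both directions.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(mask):
    return mask & ~binary_erosion(mask)

def modified_hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    a, b = a.astype(bool), b.astype(bool)
    sa, sb = surface(a), surface(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_b = distance_transform_edt(~sb, sampling=spacing)
    d_to_a = distance_transform_edt(~sa, sampling=spacing)
    return 0.5 * (d_to_b[sa].mean() + d_to_a[sb].mean())

# Toy usage: two overlapping spheres standing in for GTV-CT and GTV-PET/MRI.
zz, yy, xx = np.mgrid[:48, :48, :48]
gtv_ct = ((zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2) < 12 ** 2
gtv_petmri = ((zz - 24) ** 2 + (yy - 26) ** 2 + (xx - 24) ** 2) < 12 ** 2
print(f"DSC = {dice(gtv_ct, gtv_petmri):.3f}, "
      f"mHD = {modified_hausdorff(gtv_ct, gtv_petmri):.2f} mm")
```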

Research paper thumbnail of Dosimetric effect due to the motion during deep inspiration breath hold for left-sided breast cancer radiotherapy

Journal of Applied Clinical Medical Physics, Jul 8, 2015

Deep inspiration breath-hold (DIBH) radiotherapy for left-sided breast cancer can reduce cardiac exposure and internal motion. We modified our in-house treatment planning system (TPS) to retrospectively analyze breath-hold motion log files and calculate the dosimetric effect of motion during breath hold. Thirty left-sided supine DIBH breast patients treated using AlignRT were studied. Breath-hold motion was recorded as three translational and three rotational displacements of the treatment surface, the Real Time Deltas (RTDs). The corresponding delivered dose was estimated using the beam-on portions of the RTDs. Each motion was used to calculate a dose, and the final estimated dose was the equally weighted average of the resultant doses. Ten of the thirty patients had internal mammary nodes (IMN) purposefully included in the tangential fields, and we evaluated the percentage of the IMN covered by 40 Gy. The planned and delivered heart mean dose, lungs V20 (volume of the lungs recei...
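
A hedged simplification of the delivered-dose estimate described above: approximate each beam-on offset by rigidly shifting the planned dose grid and average the results with equal weights. Rotations, beam geometry, and the in-house TPS dose recomputation are ignored; the helper below is illustrative only.

```python
# Minimal sketch (a simplification of the reported approach, assuming each
# recorded translational offset can be approximated by rigidly shifting the
# planned dose grid; rotations and beam-specific effects are ignored).
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimated_delivered_dose(planned_dose, offsets_mm, voxel_mm=(2.0, 2.0, 2.0)):
    """planned_dose: 3D array; offsets_mm: list of (dz, dy, dx) beam-on offsets."""
    doses = []
    for off in offsets_mm:
        shift_vox = [o / v for o, v in zip(off, voxel_mm)]
        doses.append(nd_shift(planned_dose, shift_vox, order=1, mode="nearest"))
    return np.mean(doses, axis=0)   # equally weighted average over offsets

# Toy usage with a random planned dose and three recorded offsets.
rng = np.random.default_rng(0)
planned = rng.uniform(0, 50, size=(30, 64, 64))
offsets = [(0.0, 1.5, -1.0), (0.5, 2.0, 0.0), (0.0, 0.5, 0.5)]
print(estimated_delivered_dose(planned, offsets).shape)
```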

Research paper thumbnail of Prostate deformation from inflatable rectal probe cover and dosimetric effects in prostate seed implant brachytherapy

Medical Physics, 2016

Prostate brachytherapy is an important treatment technique for patients with localized prostate cancer. An inflatable rectal ultrasound probe cover is frequently utilized during the procedure to adjust for an unfavorable prostate position relative to the implant grid. However, the inflated cover causes prostate deformation, which is not accounted for during dosimetric planning. Most of the therapeutic dose is delivered after the procedure, when the prostate and surrounding organs at risk are less deformed. The aim of this study is to quantify the potential dosimetric changes between the initial plan (prostate deformed) and the more realistic dosimetry when the prostate is less deformed without the cover. The authors prospectively collected ultrasound images of the prostate immediately before and just after inflation of the rectal probe cover from thirty-four consecutive patients undergoing real-time planning of I-125 permanent seed implants. Manual segmentations of the deformed and...

Research paper thumbnail of Classification of Prostate Cancer Grades and T-Stages Based on Tissue Elasticity Using Medical Image Analysis

Lecture Notes in Computer Science, 2016

Research paper thumbnail of Accurate Segmentation of CT Male Pelvic Organs via Regression-based Deformable Models and Multi-task Random Forests

IEEE Transactions on Medical Imaging, Jun 18, 2016

Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy, and the efficacy of radiation treatment depends highly on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to the low tissue contrast of CT images, as well as large variations in the shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape priors can easily be incorporated to regularize the segmentation. Nonetheless, sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts the 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a nonlocal external force f...

Research paper thumbnail of Estimating the 4D Respiratory Lung Motion by Spatiotemporal Registration and Building Super-Resolution Image

Springer eBooks, 2011

The estimation of lung motion in 4D-CT with respect to the respiratory phase becomes more and mor... more The estimation of lung motion in 4D-CT with respect to the respiratory phase becomes more and more important for radiation therapy of lung cancer. Modern CT scanner can only scan a limited region of body at each couch table position. Thus, motion artifacts due to the patient's free breathing during scan are often observable in 4D-CT, which could undermine the procedure of correspondence detection in the registration. Another challenge of motion estimation in 4D-CT is how to keep the lung motion consistent over time. However, the current approaches fail to meet this requirement since they usually register each phase image to a pre-defined phase image independently, without considering the temporal coherence in 4D-CT. To overcome these limitations, we present a unified approach to estimate the respiratory lung motion with two iterative steps. First, we propose a new spatiotemporal registration algorithm to align all phase images of 4D-CT (in low-resolution) onto a high-resolution group-mean image in the common space. The temporal consistency is persevered by introducing the concept of temporal fibers for delineating the spatiotemporal behavior of lung motion along the respiratory phase. Second, the idea of super resolution is utilized to build the group-mean image with more details, by integrating the highly-redundant image information contained in the multiple respiratory phases. Accordingly, by establishing the correspondence of each phase image w.r.t. the high-resolution group-mean image, the difficulty of detecting correspondences between original phase images with missing structures is greatly alleviated, thus more accurate registration results can be achieved. The performance of our proposed 4D motion estimation method has been extensively evaluated on a public lung dataset. In all experiments, our method achieves more accurate and consistent results in lung motion estimation than all other state-of-the-art approaches. MI MP ME MI MP ME Before super resolution After super resolution

Research paper thumbnail of Dual Shape Guided Segmentation Network for Organs-at-Risk in Head and Neck CT Images

arXiv (Cornell University), Oct 23, 2021

The accurate segmentation of organs-at-risk (OARs) in head and neck CT images is a critical step ... more The accurate segmentation of organs-at-risk (OARs) in head and neck CT images is a critical step for radiation therapy of head and neck cancer patients. However, manual delineation for numerous OARs is time-consuming and laborious, even for expert oncologists. Moreover, manual delineation results are susceptible to high intra-and inter-variability. To this end, we propose a novel dual shape guided network (DSGnet) to automatically delineate nine important OARs in head and neck CT images. To deal with the large shape variation and unclear boundary of OARs in CT images, we represent the organ shape using an organ-specific unilateral inverse-distance map (UIDM) and guide the segmentation task from two different perspectives: direct shape guidance by following the segmentation prediction and across shape guidance by sharing the segmentation feature. In the direct shape guidance, the segmentation prediction is not only supervised by the true label mask, but also by the true UIDM, which is implemented through a simple yet effective encoder-decoder mapping from the label space to the distance space. In the across shape guidance, UIDM is used to facilitate the segmentation by optimizing the shared feature maps. For the experiments, we build a large head and neck CT dataset with a total of 699 images from different volunteers, and conduct comprehensive experiments and comparisons with other state-of-theart methods to justify the effectiveness and efficiency of our proposed method. The overall Dice Similarity Coefficient (DSC) value of 0.842 across the nine important OARs demonstrates great potential applications in improving the delineation quality and reducing the time cost.

Research paper thumbnail of Asymmetrical Multi-task Attention U-Net for the Segmentation of Prostate Bed in CT Image

Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 2020

Segmentation of the prostate bed, the residual tissue after the removal of the prostate gland, is... more Segmentation of the prostate bed, the residual tissue after the removal of the prostate gland, is an essential prerequisite for post-prostatectomy radiotherapy but also a challenging task due to its non-contrast boundaries and highly variable shapes relying on neighboring organs. In this work, we propose a novel deep learning-based method to automatically segment this "invisible target". As the main idea of our design, we expect to get reference from the surrounding normal structures (bladder&rectum) and take advantage of this information to facilitate the prostate bed segmentation. To achieve this goal, we first use a U-Net as the backbone network to perform the bladder&rectum segmentation, which serves as a low-level task that can provide references to the high-level task of the prostate bed segmentation. Based on the backbone network, we build a novel attention network with a series of cascaded attention modules to further extract discriminative features for the high-level prostate bed segmentation task. Since the attention network has onesided dependency on the backbone network, simulating the clinical workflow to use normal structures to guide the segmentation of radiotherapy target, we name the final composition model asymmetrical multi-task attention U-Net. Extensive experiments on a clinical dataset consisting of 186 CT images demonstrate the effectiveness of this new design and the superior performance of the model in comparison to the conventional atlas-based methods for prostate bed segmentation. The source code is publicly available at .

Research paper thumbnail of Iterative Label Denoising Network: Segmenting Male Pelvic Organs in CT From 3D Bounding Box Annotations

IEEE Transactions on Biomedical Engineering, 2020

Obtaining accurate segmentation of the prostate and nearby organs at risk (e.g., bladder and rect... more Obtaining accurate segmentation of the prostate and nearby organs at risk (e.g., bladder and rectum) in CT images is critical for radiotherapy of prostate cancer. Currently, the leading automatic segmentation algorithms are based on Fully Convolutional Networks (FCNs), which achieve remarkable performance but usually need large-scale datasets with high-quality voxel-wise annotations for full supervision of the training. Unfortunately, such annotations are difficult to acquire, which becomes a bottleneck to build accurate segmentation models in real clinical applications. In this paper, we propose a novel weakly supervised segmentation approach that only needs 3D bounding box annotations covering the organs of interest to start the training. Obviously, the bounding box includes many non-organ voxels that carry noisy labels to mislead the

Research paper thumbnail of STRAINet: Spatially Varying sTochastic Residual AdversarIal Networks for MRI Pelvic Organ Segmentation

IEEE transactions on neural networks and learning systems, Jan 9, 2018

Accurate segmentation of pelvic organs is important for prostate radiation therapy. Modern radiat... more Accurate segmentation of pelvic organs is important for prostate radiation therapy. Modern radiation therapy starts to use a magnetic resonance image (MRI) as an alternative to computed tomography image because of its superior soft tissue contrast and also free of risk from radiation exposure. However, segmentation of pelvic organs from MRI is a challenging problem due to inconsistent organ appearance across patients and also large intrapatient anatomical variations across treatment days. To address such challenges, we propose a novel deep network architecture, called ``Spatially varying sTochastic Residual AdversarIal Network'' (STRAINet), to delineate pelvic organs from MRI in an end-to-end fashion. Compared to the traditional fully convolutional networks (FCN), the proposed architecture has two main contributions: 1) inspired by the recent success of residual learning, we propose an evolutionary version of the residual unit, i.e., stochastic residual unit, and use it to t...

Research paper thumbnail of Joint Learning of Image Regressor and Classifier for Deformable Segmentation of CT Pelvic Organs

Lecture Notes in Computer Science, 2015

The segmentation of pelvic organs from CT images is an essential step for prostate radiation ther... more The segmentation of pelvic organs from CT images is an essential step for prostate radiation therapy. However, due to low tissue contrast and large anatomical variations, it is still challenging to accurately segment these organs from CT images. Among various existing methods, deformable models gain popularity as it is easy to incorporate shape priors to regularize the final segmentation. Despite this advantage, the sensitivity to the initialization is often a pain for deformable models. In this paper, we propose a novel way to guide deformable segmentation, which could greatly alleviate the problem caused by poor initialization. Specifically, random forest is adopted to jointly learn image regressor and classifier for each organ. The image regressor predicts the 3D displacement from any image voxel to the organ boundary based on the local appearance of this voxel. It is used as an external force to drive each vertex of deformable model (3D mesh) towards the target organ boundary. Once the deformable model is close to the boundary, the organ likelihood map, provided by the learned classifier, is used to further refine the segmentation. In the experiments, we applied our method to segmenting prostate, bladder and rectum from planning CT images. Experimental results show that our method can achieve competitive performance over existing methods, even with very rough initialization.

Research paper thumbnail of SU-E-T-592: Comparison of Low Dose Volume and Integral Dose in Rotational Arc Radiation Therapy Modalities

Medical Physics, 2012

Innovation: Helical tomotherapy (HT) is known for generating larger low dose volume and higher in... more Innovation: Helical tomotherapy (HT) is known for generating larger low dose volume and higher integral dose than fixed gantry IMRT with conventional number of beams. It is unclear if this is still true and what magnitude is when comparing tomotherapy, especially the thin 1 cm jaw treatment plan, with other volumetric modulated arc therapy modalities (VMAT).

Research paper thumbnail of Estimating the 4D respiratory lung motion by spatiotemporal registration and super-resolution image reconstruction

Carolina Digital Repository (University of North Carolina at Chapel Hill), 2013

One of the main challenges in lung cancer radiation therapy is how to reduce the treatment margin... more One of the main challenges in lung cancer radiation therapy is how to reduce the treatment margin but accommodate the geometric uncertainty of moving tumor. 4D-CT is able to provide the full range of motion information for the lung and tumor. However, accurate estimation of lung motion with respect to the respiratory phase is difficult due to various challenges in image registration, e.g., motion artifacts and large interslice thickness in 4D-CT. Meanwhile, the temporal coherence across respiration phases is usually not guaranteed in the conventional registration methods which consider each phase image in 4D-CT independently. To address these challenges, the authors present a unified approach to estimate the respiratory lung motion with two iterative steps. Methods: First, the authors propose a novel spatiotemporal registration algorithm to align all phase images of 4D-CT (in low-resolution) to a high-resolution group-mean image in the common space. The temporal coherence of registration is maintained by a set of temporal fibers that delineate temporal correspondences across different respiratory phases. Second, a super-resolution technique is utilized to build the high-resolution group-mean image with more anatomical details than any individual phase image, thus largely alleviating the registration uncertainty especially in correspondence detection. In particular, the authors use the concept of sparse representation to keep the group-mean image as sharp as possible. Results: The performance of our 4D motion estimation method has been extensively evaluated on both the simulated datasets and real lung 4D-CT datasets. In all experiments, our method achieves more accurate and consistent results in lung motion estimation than all other state-of-the-art approaches under comparison. Conclusions: The authors have proposed a novel spatiotemporal registration method to estimate the lung motion in 4D-CT. Promising results have been obtained, which indicates the high applicability of our method in clinical lung cancer radiation therapy.

Research paper thumbnail of Improving image-guided radiation therapy of lung cancer by reconstructing 4D-CT from a single free-breathing 3D-CT on the treatment day: Treatment day 4D-CT reconstruction

Carolina Digital Repository (University of North Carolina at Chapel Hill), 2012

Purpose: One of the major challenges of lung cancer radiation therapy is how to reduce the margin... more Purpose: One of the major challenges of lung cancer radiation therapy is how to reduce the margin of treatment field but also manage geometric uncertainty from respiratory motion. To this end, 4D-CT imaging has been widely used for treatment planning by providing a full range of respiratory motion for both tumor and normal structures. However, due to the considerable radiation dose and the limit of resource and time, typically only a free-breathing 3D-CT image is acquired on the treatment day for image-guided patient setup, which is often determined by the image fusion of the free-breathing treatment and planning day 3D-CT images. Since individual slices of two free breathing 3D-CTs are possibly acquired at different phases, two 3D-CTs often look different, which makes the image registration very challenging. This uncertainty of pretreatment patient setup requires a generous margin of radiation field in order to cover the tumor sufficiently during the treatment. In order to solve this problem, our main idea is to reconstruct the 4D-CT (with full range of tumor motion) from a single free-breathing 3D-CT acquired on the treatment day. Methods: We first build a super-resolution 4D-CT model from a low-resolution 4D-CT on the planning day, with the temporal correspondences also established across respiratory phases. Next, we propose a 4D-to-3D image registration method to warp the 4D-CT model to the treatment day 3D-CT while also accommodating the new motion detected on the treatment day 3D-CT. In this way, we can more precisely localize the moving tumor on the treatment day. Specifically, since the free-breathing 3D-CT is actually the mixed-phase image where different slices are often acquired at different respiratory phases, we first determine the optimal phase for each local image patch in the free-breathing 3D-CT to obtain a sequence of partial 3D-CT images (with incomplete image data at each phase) for the treatment day. Then we reconstruct a new 4D-CT for the treatment day by registering the 4D-CT of the planning day (with complete information) to the sequence of partial 3D-CT images of the treatment day, under the guidance of the 4D-CT model built on the planning day. Results: We first evaluated the accuracy of our 4D-CT model on a set of lung 4D-CT images with manually labeled landmarks, where the maximum error in respiratory motion estimation can be reduced from 6.08 mm by diffeomorphic Demons to 3.67 mm by our method. Next, we evaluated our proposed 4D-CT reconstruction algorithm on both simulated and real free-breathing images. The reconstructed 4D-CT using our algorithm shows clinically acceptable accuracy and could be used to guide a more accurate patient setup than the conventional method. Conclusions: We have proposed a novel two-step method to reconstruct a new 4D-CT from a single free-breathing 3D-CT on the treatment day. Promising reconstruction results imply the possible application of this new algorithm in the image guided radiation therapy of lung cancer.

Research paper thumbnail of Boundary Coding Representation for Organ Segmentation in Prostate Cancer Radiotherapy

IEEE Transactions on Medical Imaging, 2021

Accurate segmentation of the prostate and organs at risk (OARs, e.g., bladder and rectum) in male... more Accurate segmentation of the prostate and organs at risk (OARs, e.g., bladder and rectum) in male pelvic CT images is a critical step for prostate cancer radiotherapy. Unfortunately, the unclear organ boundary and large shape variation make the segmentation task very challenging. Previous studies usually used representations defined directly on unclear boundaries as context information to guide segmentation. Those boundary representations may not be so discriminative, resulting in limited performance improvement. To this end, we propose a novel boundary coding network (BCnet) to learn a discriminative representation for organ boundary and use it as the context information to guide the segmentation. Specifically, we design a two-stage learning strategy in the proposed BCnet: 1) Boundary coding representation learning. Two sub-networks under the supervision of the dilation and erosion masks transformed from the manually delineated organ mask are first separately trained to learn the spatial-semantic context near the organ boundary. Then we encode the organ boundary based on the predictions of these two sub-networks and design a multi-atlas based refinement strategy by transferring the knowledge from training data to inference. 2) Organ segmentation. The boundary coding representation as context information, in addition to the image patches, are used to train the final segmentation network. Experimental results on a large and diverse male pelvic CT dataset show that our method achieves superior performance compared with several state-of-the-art methods.

Research paper thumbnail of Compensation of intrafractional motion for lung stereotactic body radiotherapy (SBRT) on helical TomoTherapy

Biomedical Physics & Engineering Express, 2019

Helical TomoTherapy has unique challenges in handling intrafractional motion compared to a 30 con... more Helical TomoTherapy has unique challenges in handling intrafractional motion compared to a 30 conventional LINAC. In this study, we analyzed the impact of intrafractional motion on cumulative dosimetry using actual patient motion data from clinically treated patients and investigated real time jaw and multileaf collimator (MLC) compensation approaches to minimize the motioninduced dose discrepancy in clinically acceptable TomoTherapy lung SBRT treatments. Intrafractional motion traces from eight fiducial tracking CyberKnife lung tumor treatment cases 35 were used in this study. These cases were re-planned on TomoTherapy for SBRT, with 18 Gy × 3 fractions to a planning target volume (PTV) defined on the breath-hold CT without ITV expansion. Each case was planned with four different jaw settings: 1 cm static, 2.5 cm static, 2.5 cm dynamic and 5 cm dynamic. In-house 4D dose accumulation software was used to compute the dose distributions with tumor motion and then compensate for that motion by modifying the 40 original jaw and MLC positions to track the trajectory of the tumor. The impact of motion and effectiveness of compensation on the PTV coverage depends on the motion type and plan settings. On average, the PTV V100% (the percent volume of the PTV receiving the prescription dose) accumulated from three fractions changed from 96.6% (motion-free) to 83.1% (motionincluded), 97.5% to 93.0%, 97.7% to 92.1%, and 98.1% to 93.7% for the 1 cm static jaw, 2.5 cm 45 static jaw, 2.5 cm dynamic jaw and 5 cm dynamic jaw setting, respectively. When the jaw and MLC compensation algorithm was engaged, the PTV V100% was restored back to 92.2%, 95.9%, 96.6% and 96.4%, for the four jaw settings mentioned above respectively. TomoTherapy lung tumor SBRT treatments using a field width of 2.5 cm or larger are less sensitive to motion than treatments using a 1 cm field width. For 1 cm field width plans, PTV coverage can be greatly 50

Research paper thumbnail of Reconstructing Tissue Properties From Medical Images With Application in Cancer Screening

IEEE Transactions on Medical Robotics and Bionics, 2019

In this paper, we describe a method for recovering the tissue properties directly from medical im... more In this paper, we describe a method for recovering the tissue properties directly from medical images and study the correlation of tissue (i.e. prostate) elasticity with the aggressiveness of prostate cancer using medical image analysis. Methods: We present a novel method that uses geometric and physical constraints to deduce the relative tissue elasticity parameters. Although elasticity reconstruction, or elastograph, can be used to estimate tissue elasticity, it is less suited for invivo measurements or deeply seated organs like prostate. We develop a method to estimate tissue elasticity values based on pairs of images, using a finite-element based biomechanical model derived from an initial set of images, local displacements and an optimization-based framework. Results: We demonstrate the feasibility of a statistically based classifier that automatically provides a clinical T-stage and Gleason score based on the elasticity values reconstructed from computed tomography (CT) images. Conclusions: We study the relative elasticity parameters by performing cancer Grading/Staging prediction and achieve up to 85% accuracy for cancer Staging prediction and up to 77% accuracy for cancer Grading prediction using feature set which includes recovered relative elasticity parameters and patient age information.

Research paper thumbnail of Medical Image Synthesis with Deep Convolutional Adversarial Networks

IEEE transactions on bio-medical engineering, Jan 9, 2018

Medical imaging plays a critical role in various clinical applications. However, due to multiple ... more Medical imaging plays a critical role in various clinical applications. However, due to multiple considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Thus, medical image synthesis can be of great benefit by estimating a desired imaging modality without incurring an actual scan. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model a nonlinear mapping from source to target and to produce more realistic target images, we propose to use the adversarial learning strategy to better model the FCN. Moreover, the FCN is designed to incorporate an image-gradient-difference based loss function to avoid generating blurry target images. Long-term residual unit is also explored to help the training of the network. We further apply Auto-Context Model (ACM) to implement a context...

Research paper thumbnail of Medical Image Synthesis with Context-Aware Generative Adversarial Networks

Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2017

Computed tomography (CT) is critical for various clinical applications, e.g., radiation treatment planning and PET attenuation correction in MRI/PET scanners. However, CT exposes patients to radiation during acquisition, which may cause side effects. Compared to CT, magnetic resonance imaging (MRI) is much safer and does not involve radiation. Therefore, researchers have recently been motivated to estimate a CT image from the corresponding MR image of the same subject for radiation planning. In this paper, we propose a data-driven approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate CT given the MR image. To better model the nonlinear mapping from MRI to CT and produce more realistic images, we propose to use an adversarial training strategy to train the FCN. Moreover, we propose an image-gradient-difference-based loss function to alleviate the blurriness of the generated CT. We further apply Auto-Cont...
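For readers unfamiliar with adversarial training of a synthesis network, the following is a minimal, generic sketch of the generator/discriminator alternation for MR-to-CT synthesis; the tiny networks, tensor shapes, losses, and optimizer settings are placeholders and do not reproduce the published architecture.

```python
# Minimal, generic sketch of the adversarial alternation for MR-to-CT synthesis; all
# architectural choices here are placeholders, not the published networks.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv3d(8, 1, 3, padding=1))
D = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
                  nn.Flatten(), nn.Linear(8, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

mr, ct = torch.rand(2, 1, 16, 32, 32), torch.rand(2, 1, 16, 32, 32)

# Discriminator step: distinguish real CT from synthesized CT.
fake_ct = G(mr).detach()
loss_d = bce(D(ct), torch.ones(2, 1)) + bce(D(fake_ct), torch.zeros(2, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the real CT.
fake_ct = G(mr)
loss_g = bce(D(fake_ct), torch.ones(2, 1)) + l1(fake_ct, ct)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```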

Research paper thumbnail of Comparison of Tumor Volume Delineation on Magnetic Resonance/Positron Emission Tomography Versus Standard Computed Tomography for Head and Neck Cancer: Is There Added Value?

International Journal of Radiation Oncology*Biology*Physics, 2016

The objective of this study is to evaluate weekly primary tumor regression rates (PTRR) and nodal tumor regression rates (NTRR) of head and neck cancers (HNC) during radiation (RT) as a prognostic indicator of oncologic outcomes and survival. Image guided radiation therapy (IGRT), specifically computed tomography (CT)-on-Rails (CToR), increases the accuracy of daily RT and additionally affords the opportunity for intratreatment response evaluation. Materials/Methods: A single-institution retrospective review from 2008 to 2013 was completed for patients with HNC who received RT with CToR. Forty-three patients with 70 measurable targets, 43 primary lesions and 27 metastatic lymph nodes, met inclusion criteria. Patients without radiographically evident primary tumors and those with surgical intervention prior to RT were excluded. Results: The analysis included 43 patients with a median age of 56 years (21-78), and 91% of them were male. The majority of patients were diagnosed with oropharynx cancers (63%), followed by nasopharynx (26%) and sinonasal (11%). Fifty-eight percent of patients received definitive chemoradiation, and 26% received induction followed by chemoradiation. The mean primary and nodal gross tumor volume (GTV) pre-RT was 38.5 mL (standard deviation [SD] 34.9) and 13.6 mL (SD 10.3), respectively. PTRR of ≥25% at fraction 15 (n=26) was associated with superior 5-year local control (LC) (100% vs 70%, P=.003), relapse-free survival (RFS; 84% vs 45%, P=.007), and overall survival (OS; 88% vs 50%, P=.005). On both univariate and multivariate analysis, PTRR <25% was associated with an increased hazard of local failure compared to ≥25% (P=.001 for both). There was not a strong independent correlation of NTRR with regional control, distant control or RFS. Further characterization of tracked lymph nodes revealed that 70% (n=19) were purely cystic or had a cystic component. Many of the highly cystic nodes demonstrated an initial increase in size, likely secondary to RT-induced inflammation, before later regression. Conclusion: CToR enables accurate GTV tracking during treatment, which carries prognostic value. PTRR of ≥25% at midtreatment can be used as an indicator for LC, RFS, and OS. NTRR does not appear to carry the same prognostic value as PTRR due to variations in nodal architecture, an initial paradoxical size increase, especially in cystic nodes, and treatment response often continuing up to 4 months post-RT. The clinical implication of TRR appears to be of most value when tracking the primary lesion; specifically, if PTRR is <25%, treatment plan modification may be warranted given a higher likelihood of treatment failure.
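The regression-rate arithmetic underlying PTRR is simple; a minimal sketch with made-up volumes follows (in the study, GTVs were measured on CT-on-Rails images at baseline and at fraction 15).

```python
# Minimal sketch of the regression-rate arithmetic; the volumes are hypothetical numbers.
def regression_rate(baseline_volume_ml, midtreatment_volume_ml):
    """Percent reduction of gross tumor volume relative to baseline."""
    return 100.0 * (baseline_volume_ml - midtreatment_volume_ml) / baseline_volume_ml

ptrr = regression_rate(38.5, 27.0)  # hypothetical primary GTV: 38.5 mL -> 27.0 mL at fraction 15
label = ">=25%: favorable" if ptrr >= 25 else "<25%: consider plan modification"
print(f"PTRR = {ptrr:.1f}%  ->  {label}")
```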

Research paper thumbnail of Evaluation of PET/MRI for Tumor Volume Delineation for Head and Neck Cancer

Frontiers in Oncology, 2017

Computed tomography (CT), combined positron emission tomography and CT (PET/CT), and magnetic resonance imaging (MRI) are commonly used in head and neck radiation planning. Hybrid PET/MRI has garnered attention for potential added value in cancer staging and treatment planning. Herein, we compare PET/MRI vs. planning CT for head and neck cancer gross tumor volume (GTV) delineation. We prospectively enrolled patients with head and neck cancer treated with definitive chemoradiation to 60-70 Gy using IMRT. We performed pretreatment contrast-enhanced planning CT and gadolinium-enhanced PET/MRI. Primary and nodal volumes were delineated on planning CT (GTV-CT) prospectively before treatment and on PET/MRI (GTV-PET/MRI) retrospectively after treatment. GTV-PET/MRI was compared to GTV-CT using separate rigid registrations for each tumor volume. The Dice similarity coefficient (DSC) metric evaluating spatial overlap and modified Hausdorff distance (mHD) evaluating mean orthogonal distance diffe...
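A rough sketch of the two comparison metrics on binary masks is given below; it uses a Dubuisson-Jain-style modified Hausdorff distance as a stand-in, which may differ from the exact surface-distance definition used in the study, and the masks and voxel spacing are synthetic.

```python
# Minimal sketch of the two comparison metrics on binary masks (scipy-based; the study's
# exact implementation is not reproduced here).
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def modified_hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Dubuisson-Jain style: max of the two directed mean distances between the masks (mm)."""
    dist_to_b = distance_transform_edt(~b, sampling=spacing)  # distance of each voxel to mask b
    dist_to_a = distance_transform_edt(~a, sampling=spacing)  # distance of each voxel to mask a
    d_ab = dist_to_b[a].mean()  # average distance from voxels of a to b
    d_ba = dist_to_a[b].mean()  # average distance from voxels of b to a
    return max(d_ab, d_ba)

a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a); b[10:22, 10:22, 10:22] = True
print(f"DSC = {dice(a, b):.3f}, mHD = {modified_hausdorff(a, b):.2f} mm")
```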

Research paper thumbnail of Dosimetric effect due to the motion during deep inspiration breath hold for left-sided breast cancer radiotherapy

Journal of Applied Clinical Medical Physics, Jul 8, 2015

Deep inspiration breath-hold (DIBH) radiotherapy for left-sided breast cancer can reduce cardiac exposure and internal motion. We modified our in-house treatment planning system (TPS) to retrospectively analyze breath-hold motion log files and calculate the dosimetric effect of the motion during breath hold. Thirty left-sided supine DIBH breast patients treated using AlignRT were studied. Breath-hold motion was recorded as three translational and three rotational displacements of the treatment surface, the Real Time Deltas (RTDs). The corresponding delivered dose was estimated using the beam-on portions of the RTDs. Each motion was used to calculate a dose, and the final estimated dose was the equally weighted average of the multiple resultant doses. Ten of thirty patients had internal mammary nodes (IMN) purposefully included in the tangential fields, and we evaluated the percentage of the IMN covered by 40 Gy. The planned and delivered heart mean dose, lungs V20 (volume of the lungs recei...
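The equally weighted dose averaging described above can be sketched as follows; dose_with_shift is a hypothetical stand-in for the TPS recalculation under one recorded surface displacement, here crudely approximated by a whole-voxel shift of the dose grid, and the RTD samples are invented.

```python
# Minimal sketch of the equally weighted dose averaging; dose_with_shift is a toy stand-in
# for the TPS dose recalculation under one recorded surface displacement.
import numpy as np

def estimate_delivered_dose(planned_dose, beam_on_deltas, dose_with_shift):
    """Average the doses recalculated for each beam-on Real Time Delta (RTD) sample."""
    shifted_doses = [dose_with_shift(planned_dose, delta) for delta in beam_on_deltas]
    return np.mean(shifted_doses, axis=0)

# Toy approximation: treat each translational RTD as a whole-voxel shift of the dose grid.
def dose_with_shift(dose, delta_vox):
    return np.roll(dose, shift=tuple(int(round(d)) for d in delta_vox), axis=(0, 1, 2))

planned = np.random.default_rng(2).random((32, 32, 32))
rtds = [(0.0, 0.4, -0.2), (0.0, 1.1, 0.3), (0.0, 0.8, -0.6)]  # hypothetical beam-on samples (voxels)
delivered = estimate_delivered_dose(planned, rtds, dose_with_shift)
print(delivered.shape)
```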

Research paper thumbnail of Prostate deformation from inflatable rectal probe cover and dosimetric effects in prostate seed implant brachytherapy

Medical Physics, 2016

Prostate brachytherapy is an important treatment technique for patients with localized prostate cancer. An inflatable rectal ultrasound probe cover is frequently utilized during the procedure to adjust for unfavorable prostate position relative to the implant grid. However, the inflated cover causes prostate deformation, which is not accounted for during dosimetric planning. Most of the therapeutic dose is delivered after the procedure, when the prostate and surrounding organs-at-risk are less deformed. The aim of this study is to quantify the potential dosimetry changes between the initial plan (prostate deformed) and the more realistic dosimetry when the prostate is less deformed without the cover. The authors prospectively collected the ultrasound images of the prostate immediately preceding and just after inflation of the rectal probe cover from thirty-four consecutive patients undergoing real-time planning of I-125 permanent seed implant. Manual segmentations of the deformed and...
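As background for the kind of dosimetric comparison described above, the sketch below computes two metrics commonly reported for seed-implant plans (D90 and V100); the dose grid, prostate mask, and 145 Gy prescription are synthetic examples, and the metric list is not taken from the paper.

```python
# Minimal sketch of two commonly reported brachytherapy dose metrics on synthetic data;
# not the study's dosimetric evaluation.
import numpy as np

def d90(dose, target_mask):
    """Minimum dose received by the hottest 90% of the target (Gy)."""
    return np.percentile(dose[target_mask], 10.0)

def v100(dose, target_mask, prescription_dose):
    """Percent of the target volume receiving at least the prescription dose."""
    vals = dose[target_mask]
    return 100.0 * np.count_nonzero(vals >= prescription_dose) / vals.size

rng = np.random.default_rng(3)
dose = rng.normal(170.0, 25.0, size=(48, 48, 48))  # synthetic dose grid (Gy)
prostate = np.zeros(dose.shape, dtype=bool); prostate[12:36, 12:36, 12:36] = True
print(f"D90 = {d90(dose, prostate):.1f} Gy, V100 = {v100(dose, prostate, 145.0):.1f}%")
```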

Research paper thumbnail of Classification of Prostate Cancer Grades and T-Stages Based on Tissue Elasticity Using Medical Image Analysis

Lecture Notes in Computer Science, 2016

Research paper thumbnail of Accurate Segmentation of CT Male Pelvic Organs via Regression-based Deformable Models and Multi-task Random Forests

IEEE Transactions on Medical Imaging, Jun 18, 2016

Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to the low tissue contrast of CT images, as well as large variations in the shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape priors can be easily incorporated to regularize the segmentation. Nonetheless, their sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts the 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a nonlocal external force f...
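The displacement-regression idea can be illustrated with a generic multi-output random forest mapping local patch features to a 3D offset; the features, patch size, and training data below are synthetic, and sklearn's multi-output regressor is only a stand-in for the multi-task random forests used in the paper.

```python
# Minimal sketch of the displacement-regression idea (random-forest regression from local
# patch features to a 3D offset); all data and dimensions here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n_voxels, n_features = 500, 27          # e.g., a flattened 3x3x3 intensity patch per voxel
X = rng.random((n_voxels, n_features))              # patch appearance features
y = rng.normal(0.0, 5.0, size=(n_voxels, 3))        # 3D displacement to the organ boundary (mm)

regressor = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
predicted_offsets = regressor.predict(rng.random((10, n_features)))
print(predicted_offsets.shape)  # (10, 3): one predicted 3D displacement per query voxel
```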