1DC19AT021 DARSHAN HEGDE - Academia.edu
Papers by 1DC19AT021 DARSHAN HEGDE
Pediatric Hematology Oncology Journal, 2020
…cohort were statistically similar to the parent cohort. Induction death and complete remission rates were similar in the study cohort (0%, 100%) and the parent cohort (1.8%, 98%), respectively (p-value 1.0). Six patients in the study cohort (27%; 6/22) suffered events (5 relapses, 1 toxic death), compared to 85 events (23%; 85/373; 67 relapses, 16 toxic deaths, and 2 abandonments) in the parent cohort. None of the relapsed patients in the study cohort opted for further treatment, and EFS was 72%, compared to 74% (288/388) for the parent cohort, with a median follow-up of 61 months (p-value 0.8). Conclusion: Five percent of patients with ALL were detected to have t(1;19). Clinical characteristics, response, and outcome of these patients were similar to the rest of the cohort. Presence of t(1;19) did not confer any independent prognostic significance.
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022
We present an end-to-end method for object detection and trajectory prediction utilizing multi-view representations of LiDAR returns and camera images. In this work, we recognize the strengths and weaknesses of different view representations, and we propose an efficient and generic fusion method that aggregates the benefits of all views. Our model builds on a state-of-the-art Bird's-Eye View (BEV) network that fuses voxelized features from a sequence of historical LiDAR data as well as a rasterized high-definition map to perform detection and prediction tasks. We extend this model with additional LiDAR Range-View (RV) features that use the raw LiDAR information in its native, non-quantized representation. The RV feature map is projected into BEV and fused with the BEV features computed from LiDAR and the high-definition map. The fused features are then further processed to output the final detections and trajectories within a single end-to-end trainable network. In addition, the RV fusion of LiDAR and camera is performed in a straightforward and computationally efficient manner within this framework. The proposed multi-view fusion approach improves the state of the art on proprietary large-scale real-world data collected by a fleet of self-driving vehicles, as well as on the public nuScenes data set, with a minimal increase in computational cost.
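The key mechanical step in this kind of architecture is projecting the per-point RV feature map into the BEV grid so the two representations can be fused. Below is a minimal sketch of what such a projection might look like; the grid extent, cell resolution, max-pooling rule, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def rv_to_bev(point_xyz, point_feats, grid_range=50.0, cell_size=0.5):
    """Scatter per-point Range-View features into a BEV grid.

    point_xyz:   (N, 3) LiDAR points in the ego frame, in meters.
    point_feats: (N, C) RV features gathered at each point's range-image
                 pixel (assumed nonnegative, e.g. post-ReLU activations).
    Returns a (C, H, W) BEV map; points sharing a cell are max-pooled.
    """
    C = point_feats.shape[1]
    num_cells = int(2 * grid_range / cell_size)
    # Metric x/y coordinates -> integer BEV cell indices.
    ix = ((point_xyz[:, 0] + grid_range) / cell_size).long()
    iy = ((point_xyz[:, 1] + grid_range) / cell_size).long()
    keep = (ix >= 0) & (ix < num_cells) & (iy >= 0) & (iy < num_cells)
    ix, iy, feats = ix[keep], iy[keep], point_feats[keep]

    bev = torch.zeros(C, num_cells * num_cells)
    cell = (ix * num_cells + iy).unsqueeze(0).expand(C, -1)   # (C, M)
    # Max-pool the features of all points that fall into the same cell.
    bev.scatter_reduce_(1, cell, feats.t(), reduce="amax")
    return bev.view(C, num_cells, num_cells)

# The projected RV map can then be fused with the existing BEV features,
# e.g. by channel-wise concatenation followed by a 1x1 convolution:
#   fused = conv1x1(torch.cat([bev_lidar_map, rv_in_bev], dim=0))
```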
Pediatric Hematology Oncology Journal, 2020
There is a significant amount of recent literature in the domain of self-driving vehicles (SDVs), with researchers focusing on various components of the SDV system that have the potential to improve on-street performance. This includes work exploring improved perception of the SDV's surroundings, or proposing algorithms that provide better short-term prediction of nearby traffic actors' behavior. However, in most cases the authors report only aggregate metrics computed on the entire data set, and often do not fully consider the bias inherent in traffic data sets. We argue that this practice may not give a full picture of the actual performance of the prediction model and may, in fact, mask some of its problem areas (e.g., handling turns). We analyze the amount of bias present in traffic data and explore ways to address this issue. In particular, we propose to use a novel off-road loss and standard bias mitigation techniques that result in improved performance. We further propose to av...
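An off-road loss of the kind mentioned can be implemented by rasterizing the drivable area and penalizing predicted waypoints that sample into non-drivable cells. The sketch below assumes an ego-centered drivable-area raster and uses PyTorch's grid_sample for a differentiable lookup; the mask convention, extent, and mean reduction are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def offroad_loss(pred_xy, drivable_mask, grid_range=50.0):
    """pred_xy:       (B, T, 2) predicted waypoints in the ego frame (meters),
                      with x mapping to raster width and y to raster height.
       drivable_mask: (B, 1, H, W) ego-centered raster, 1 = drivable road.
    """
    # Normalize metric coordinates to [-1, 1], the range grid_sample expects.
    grid = (pred_xy / grid_range).unsqueeze(2)          # (B, T, 1, 2)
    # Bilinear lookup; waypoints beyond the raster sample zero-padding and
    # are therefore treated as off-road.
    on_road = F.grid_sample(drivable_mask, grid, align_corners=False)
    on_road = on_road.squeeze(1).squeeze(-1)            # (B, T)
    # The loss grows as predicted waypoints leave the drivable area.
    return (1.0 - on_road).mean()
```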
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019
In this paper, we present an extension to LaserNet, an efficient, state-of-the-art LiDAR-based 3D object detector. We propose a method for fusing image data with the LiDAR data and show that this sensor fusion method improves the detection performance of the model, especially at long range. The addition of image data is straightforward and does not require image labels. Furthermore, we expand the capabilities of the model to perform 3D semantic segmentation in addition to 3D object detection. On a large benchmark dataset, we demonstrate that our approach achieves state-of-the-art performance on both object detection and semantic segmentation while maintaining a low runtime.
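The label-free image fusion described above comes down to projecting each LiDAR point into the camera image and gathering the image (or learned CNN feature-map) values at the projected pixel, which can then be appended to that point's LiDAR features. A minimal sketch of that projection step is below; the calibration variable names and the use of raw feature-map indexing are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def gather_image_features(points_lidar, image_feats, K, T_cam_from_lidar):
    """points_lidar:     (N, 3) points in the LiDAR frame.
       image_feats:      (H, W, C) RGB image or CNN feature map.
       K:                (3, 3) camera intrinsic matrix.
       T_cam_from_lidar: (4, 4) LiDAR-to-camera extrinsic transform.
       Returns (N, C) per-point image features; zeros for points out of view.
    """
    N = points_lidar.shape[0]
    H, W, C = image_feats.shape
    out = np.zeros((N, C), dtype=image_feats.dtype)

    # Transform points into the camera frame; keep those in front of it.
    pts_h = np.hstack([points_lidar, np.ones((N, 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    front = pts_cam[:, 2] > 1e-6

    # Pinhole projection to pixel coordinates.
    uvw = (K @ pts_cam[front].T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(np.int64)
    v = (uvw[:, 1] / uvw[:, 2]).astype(np.int64)
    in_view = (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Gather features at the projected pixels of the visible points.
    idx = np.flatnonzero(front)[in_view]
    out[idx] = image_feats[v[in_view], u[in_view]]
    return out
```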