CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation
Data availability
Example data and a mitochondria pretrained model are included in the GitHub release (https://github.com/CRBS/cdeep3m) and several trained models with example data are released on the Cell Image Library (CIL, http://cellimagelibrary.org/cdeep3m). Further data will be made available from the corresponding authors upon reasonable request.
References
- Chen, B. C. et al. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).
- Bock, D. D. et al. Network anatomy and in vivo physiology of visual cortical neurons. Nature 471, 177–184 (2011).
- Briggman, K. L., Helmstaedter, M. & Denk, W. Wiring specificity in the direction-selectivity circuit of the retina. Nature 471, 183–190 (2011).
- Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. Lect. Notes Comput. Sci. 9901, 424–432 (2016).
- Quan, T. M., Hildebrand, D. G. C. & Jeong, W.-K. FusionNet: a deep fully residual convolutional neural network for image segmentation in connectomics. Preprint at arXiv, https://arxiv.org/abs/1612.05360v2 (2016).
- Badrinarayanan, V., Kendall, A. & Cipolla, R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
- Zeng, T., Wu, B. & Ji, S. DeepEM3D: approaching human-level performance on 3D anisotropic EM image segmentation. Bioinformatics 33, 2555–2562 (2017).
- Jia, Y. et al. Caffe: convolutional architecture for fast feature embedding. in Proceedings of the 22nd ACM International Conference on Multimedia 675–678 (ACM, 2014). https://doi.org/10.1145/2647868.2654889
- Jinno, S. & Kosaka, T. Stereological estimation of numerical densities of glutamatergic principal neurons in the mouse hippocampus. Hippocampus 20, 829–840 (2010).
- Abusaad, I. et al. Stereological estimation of the total number of neurons in the murine hippocampus using the optical disector. J. Comp. Neurol. 408, 560–566 (1999).
- Sommer, C., Straehle, C., Köthe, U. & Hamprecht, F. A. Ilastik: interactive learning and segmentation toolkit. in Proceedings of the IEEE International Symposium on Biomedical Imaging 230–233 (2011). https://doi.org/10.1109/ISBI.2011.5872394
- Perez, A. J. et al. A workflow for the automatic segmentation of organelles in electron microscopy image stacks. Front. Neuroanat. 8, 126 (2014).
- Lucchi, A., Becker, C., Márquez Neila, P. & Fua, P. Exploiting enclosing membranes and contextual cues for mitochondria segmentation. in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2014 (eds. Golland, P., Hata, N., Barillot, C., Hornegger, J. & Howe, R.) 65–72 (Springer International Publishing, 2014).
- Pan, S. J. & Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 1345–1359 (2010).
- Kasthuri, N. et al. Saturated reconstruction of a volume of neocortex. Cell 162, 648–661 (2015).
- Deerinck, T. et al. Enhancing serial block-face scanning electron microscopy to enable high resolution 3-D nanohistology of cells and tissues. Microsc. Microanal. 16, 1138–1139 (2010).
- Phan, S. et al. 3D reconstruction of biological structures: automated procedures for alignment and reconstruction of multiple tilt series in electron tomography. Adv. Struct. Chem. Imaging 2, 8 (2017).
Acknowledgements
We thank the DIVE lab for making DeepEM3D publicly available. We thank T. Zeng, A. Lucchi and P. Fua for initial discussions and S. Viana da Silva for critical feedback on the manuscript. We thank S. Yeon, N. Allaway, and C. Nava-Gonzales for help with ground-truth segmentations for the membrane training data and mitochondria segmentations, and C. Li, J. Shergill, I. Tang, M.M., and R.A. for synaptic vesicle annotations. M.G.H. and R.A. proofread these and performed other ground-truth segmentations. Research published in this manuscript was supported by multiple NIH grants: 5R01DA038896 and 5P01NS083514, as well as 5P41GM103412, 5P41GM103426, and 5R01GM082949, which support the National Center for Microscopy and Imaging Research (NCMIR), the National Biomedical Computation Resource (NBCR), and the Cell Image Library (CIL), respectively. M.G.H. was supported by a postdoctoral fellowship from an interdisciplinary seed program at UCSD to build multiscale 3D maps of whole cells, called the Visible Molecular Cell Consortium. This work benefitted from the use of compute cycles on the Comet cluster, a resource of the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. This research benefitted from the use of credits from the National Institutes of Health (NIH) Cloud Credits Model Pilot, a component of the NIH Big Data to Knowledge (BD2K) program.
Author information
Authors and Affiliations
- National Center for Microscopy and Imaging Research, School of Medicine, University of California San Diego, La Jolla, CA, USA
  Matthias G. Haberl, Lucas Tindall, Daniela Boassa, Sébastien Phan, Eric A. Bushong, Matthew Madany, Raffi Akay, Thomas J. Deerinck, Steven T. Peltier & Mark H. Ellisman
- National Biomedical Computation Resource, University of California San Diego, La Jolla, CA, USA
  Matthias G. Haberl, Christopher Churas, Sébastien Phan, Steven T. Peltier & Mark H. Ellisman
Authors
- Matthias G. Haberl
- Christopher Churas
- Lucas Tindall
- Daniela Boassa
- Sébastien Phan
- Eric A. Bushong
- Matthew Madany
- Raffi Akay
- Thomas J. Deerinck
- Steven T. Peltier
- Mark H. Ellisman
Contributions
M.G.H. and M.H.E. conceived and designed the project. M.G.H., C.C., L.T., and M.M. wrote code and analyzed data. M.G.H., D.B., S.P., E.A.B., and T.J.D. performed experiments and acquired images. M.G.H., R.A., and M.M. annotated training data. M.G.H., C.C., S.T.P., and M.H.E. wrote the manuscript with feedback from all authors.
Corresponding authors
Correspondence to Matthias G. Haberl or Mark H. Ellisman.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Integrated supplementary information
Supplementary Figure 1 Cell density in the mouse hippocampus from the dentate gyrus to the molecular layer using XRM scanning.
a, Densities (number of cells/mm³) were measured using a sliding volume of 20-µm diameter (× 385 µm × 132 µm) across 394 µm, after extracting 3D connected components in CDeep3M predictions of nuclei in the hippocampal XRM scan. Representative images are shown in b–e, from the center of the suprapyramidal blade of the dentate gyrus (sDG) (b) to the molecular layer (e). Scale bar, 50 µm.
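As an illustrative sketch of this measurement, the following Python code labels 3D connected components in a thresholded nucleus prediction and counts them in a window sliding along the depth axis. This is not the pipeline code: the probability threshold, voxel size, and step size are assumed parameters, and only the 385 µm × 132 µm cross-section is taken from the legend above.

```python
import numpy as np
from scipy import ndimage

def sliding_density(pred, threshold=0.5, voxel_um=1.0,
                    window_um=20.0, step_um=2.0):
    """Estimate cell density (cells/mm^3) in a window sliding along
    axis 0 of a CDeep3M nucleus probability map `pred`."""
    mask = pred > threshold
    labels, n = ndimage.label(mask)          # 3D connected components
    if n == 0:
        return np.array([]), np.array([])
    # One representative depth coordinate (centroid) per detected nucleus.
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    z_um = np.array(centroids)[:, 0] * voxel_um
    # Cross-section of the sliding volume (385 um x 132 um), in mm^2.
    cross_mm2 = 385e-3 * 132e-3
    window_mm3 = cross_mm2 * window_um * 1e-3
    starts = np.arange(0.0, z_um.max() - window_um, step_um)
    counts = [np.sum((z_um >= s) & (z_um < s + window_um)) for s in starts]
    return starts, np.array(counts) / window_mm3
```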
Supplementary Figure 2 Comparison of membrane segmentation on ssET data using CDeep3M and three widely used machine learning algorithms.
a, Challenging segmentation tasks, such as recognizing membranes in this electron tomography dataset, cannot be solved with sufficient accuracy by widely used machine learning tools such as CHM (Front. Neuroanat. 8, 2014), Ilastik (Proc. Int. Symposium Biomed. Imaging 230–233, 2011), or Trainable Weka Segmentation (Bioinformatics 33, 2424–2426, 2017), which leave large stretches of membrane missing or introduce widespread false positive signal; deep learning tools such as CDeep3M are required to achieve a high level of accuracy. b, Because CDeep3M predictions are highly accurate, simple postprocessing such as watershed and region-growing algorithms can accomplish dense segmentation on a small scale. In comparison, we were unable to produce meaningful results by applying this approach to the prediction maps of the aforementioned machine learning tools. On a larger scale, more sophisticated region-agglomeration techniques should be used (Nat. Methods 14, 101–102, 2017; A deep structured learning approach towards automating connectome reconstruction from 3D electron micrographs. Preprint at arXiv, https://arxiv.org/abs/1709.02974 (2017)) and will allow one to take full advantage of the membrane segmentation.
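A minimal sketch of the simple postprocessing mentioned in b, written with scikit-image: regions grow from low-membrane-probability seeds and stop at predicted membranes. The seed threshold is an assumed, illustrative value, not a parameter from the paper.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def dense_labels(membrane_prob, seed_thresh=0.1):
    """Seeded watershed on a membrane probability map: each connected
    low-probability region becomes a seed, and the resulting label
    field's boundaries follow the predicted membranes."""
    seeds, _ = ndimage.label(membrane_prob < seed_thresh)
    return watershed(membrane_prob, markers=seeds)
```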
Supplementary Figure 3 Determination of the accuracy of CDeep3M mitochondria segmentation, based on one FIB–SEM and one SBEM dataset.
As noted in Lucchi et al., human ‘ground truth’ segmentations are typically inaccurate around the borders of an object. a, An exclusion zone of 1–2 voxels can compensate for this effect and avoid erroneously assigning those pixels. b, We determined the accuracy of CDeep3M predictions on an FIB–SEM hippocampal dataset, using the same metrics as described in Ref. 8. CDeep3M outperformed the three-class CRF in all metrics (Jaccard: CDeep3M: 0.8361 versus 3C-CRF: 0.741; two-voxel exclusion zone: CDeep3M: 0.9266, 3C-CRF: 0.85; five-voxel exclusion zone: CDeep3M: 0.9437, 3C-CRF: ~0.92). Both the Jaccard index and the F1 value (the harmonic mean of precision and recall) increase once the erroneously missing object boundaries in the human segmentation are masked by the exclusion zone. The remaining error was largely caused by a single large object in the test data, which resembled a mitochondrion in appearance and was absent from the training data. c, d, Similarly, we used the SBEM data shown in c (scale bars: left, 500 nm; right, 200 nm) to compare computer performance against repeated human performance. d, The consensus of three ‘ground truth’ segmentations by expert human annotators was used to determine the performance of CDeep3M and to compare the individual performance of each human annotator to the consensus. CDeep3M performed similarly to the human experts (exclusion zone of 1 voxel; Jaccard index: CDeep3M: 0.954, humans (mean): 0.983; F1 value: CDeep3M: 0.976, humans (mean): 0.966).
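The exclusion-zone scoring can be sketched as follows, assuming binary NumPy volumes `pred` and `gt`. Approximating the border band by dilating and eroding the ground truth by the exclusion radius is one plausible reading of the procedure, not the authors' exact implementation.

```python
import numpy as np
from scipy import ndimage

def masked_jaccard_f1(pred, gt, exclusion_voxels=2):
    """Jaccard index and F1 value computed only outside an exclusion
    zone of `exclusion_voxels` around ground-truth object borders,
    the least reliable part of a human segmentation."""
    dil = ndimage.binary_dilation(gt, iterations=exclusion_voxels)
    ero = ndimage.binary_erosion(gt, iterations=exclusion_voxels)
    keep = ~(dil & ~ero)            # drop the uncertain border band
    p, g = pred[keep].astype(bool), gt[keep].astype(bool)
    tp = np.sum(p & g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    jaccard = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)   # harmonic mean of precision/recall
    return jaccard, f1
```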
Supplementary Figure 4 Training and validation loss and accuracy.
Training and validation loss (left panels) and validation accuracy (right panels) are shown for training performed on the mitochondria dataset evaluated in Supplementary Fig. 2b. All three models generalize well and improve on the unseen validation dataset until the end of the training.
Supplementary Figure 5 Membrane segmentation using transfer learning on a pre-trained model.
The 1fm model was trained for 16,000 iterations before segmenting an image from a different dataset (upper left panel). An additional 2,000 iterations of training were then performed using training data from a new image dataset, to adapt the trained model to the new image parameters (staining intensity and new features in the image). The segmentation quality improved substantially (lower middle panel), whereas it remained faulty with continued training on the first dataset without domain adaptation (lower left panel). Similar improvements are seen for all trained models (1fm, 3fm with domain adaptation from 14,454 iterations until 15,757 iterations, and 5fm; Fig. 2).
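Since CDeep3M trains its networks with Caffe (Jia et al., cited above), this fine-tuning step can be sketched with pycaffe as below. The file names are placeholders, and the actual CDeep3M scripts may wrap this step differently; this is only a generic illustration of resuming training from pretrained weights.

```python
import caffe

caffe.set_mode_gpu()
# Placeholder solver: its train net must point at the *new* training data.
solver = caffe.SGDSolver('solver.prototxt')
# Initialize from weights trained for 16,000 iterations on the first dataset...
solver.net.copy_from('1fm_iter_16000.caffemodel')
# ...then run ~2,000 additional iterations to adapt to the new image domain.
solver.step(2000)
```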
About this article
Cite this article
Haberl, M.G., Churas, C., Tindall, L. et al. CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation. Nat. Methods 15, 677–680 (2018). https://doi.org/10.1038/s41592-018-0106-z
- Received: 15 February 2018
- Accepted: 19 July 2018
- Published: 31 August 2018
- Issue Date: September 2018
- DOI: https://doi.org/10.1038/s41592-018-0106-z