A biosegmentation benchmark for evaluation of bioimage analysis methods
Related papers
IICBU 2008: a proposed benchmark suite for biological image analysis
Medical & Biological Engineering & Computing, 2008
New technology for automated biological image acquisition has introduced the need for effective biological image analysis methods. These algorithms are constantly being developed by pattern recognition and machine vision experts, who tailor general computer vision techniques to the specific needs of biological imaging. However, computer scientists do not always have access to biological image datasets that can be used for computer vision research, and biologist collaborators who can assist in defining the biological questions are not always available. Here we propose a publicly available benchmark suite of biological image datasets that can be used by machine vision experts for developing and evaluating biological image analysis methods. The suite represents a set of practical real-life imaging problems in biology, and offers examples of organelles, cells and tissues, imaged at different magnifications and with different contrast techniques. All datasets are available for free download at http://ome.grc.nia.nih.gov/iicbu2008.
Predicting Segmentation Accuracy for Biological Cell Images
2010
We have performed segmentation procedures on a large number of images from two mammalian cell lines that were seeded at low density, in order to study trends in the segmentation results and make predictions about cellular features that affect segmentation accuracy. By comparing segmentation results from approximately 40,000 cells, we find a linear relationship between the highest segmentation accuracy seen for a given cell and the fraction of pixels in the neighborhood of the edge of that cell; this fraction of pixels is at greatest risk of error when cells are segmented. We call the ratio of the size of this pixel fraction to the size of the cell the extended edge neighborhood, and this metric can predict the segmentation accuracy of any isolated cell.
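A minimal sketch of how such a per-cell metric could be computed, assuming a binary cell mask and a fixed edge-band width; the function name, the band_px parameter and its default are illustrative choices, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def extended_edge_neighborhood(cell_mask: np.ndarray, band_px: int = 2) -> float:
    """Approximate an 'extended edge neighborhood' for one cell:
    the ratio of the number of pixels lying within a band around the
    cell's edge to the total number of pixels in the cell.

    cell_mask : boolean array, True inside the cell.
    band_px   : half-width of the edge band in pixels (illustrative choice).
    """
    cell_area = cell_mask.sum()
    if cell_area == 0:
        return 0.0
    # Pixels within `band_px` of the boundary, on either side of it.
    outer = binary_dilation(cell_mask, iterations=band_px)
    inner = binary_erosion(cell_mask, iterations=band_px)
    edge_band = outer & ~inner
    # Larger ratios would, following the abstract's reported trend,
    # predict lower achievable segmentation accuracy for this cell.
    return edge_band.sum() / cell_area
```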
Introducing Biomedisa as an open-source online platform for biomedical image segmentation
Nature Communications
We present Biomedisa, a free and easy-to-use open-source online platform developed for semi-automatic segmentation of large volumetric images. The segmentation is based on a smart interpolation of sparsely pre-segmented slices taking into account the complete underlying image data. Biomedisa is particularly valuable when little a priori knowledge is available, e.g. for the dense annotation of the training data for a deep neural network. The platform is accessible through a web browser and requires no complex and tedious configuration of software and model parameters, thus addressing the needs of scientists without substantial computational expertise. We demonstrate that Biomedisa can drastically reduce both the time and human effort required to segment large images. It achieves a significant improvement over the conventional approach of densely pre-segmented slices with subsequent morphological interpolation as well as compared to segmentation tools that also consider the underlying...
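For context, a rough sketch of the conventional baseline mentioned above: shape-based (morphological) interpolation between two pre-segmented slices using signed distance maps, which ignores the underlying image data. This is not Biomedisa's own algorithm, and the function names are illustrative:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance map of a boolean mask:
    positive inside the object, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_slices(mask_a: np.ndarray, mask_b: np.ndarray, n_between: int):
    """Shape-based interpolation of `n_between` slices between two annotated
    2D boolean masks; the underlying image data are not consulted at all."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    slices = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)
        # Blend the distance maps and threshold at zero to recover a mask.
        slices.append(((1 - t) * da + t * db) > 0)
    return slices
```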
PLOS Computational Biology, 2021
We present DeepMIB, a new software package that is capable of training convolutional neural networks for segmentation of multidimensional microscopy datasets on any workstation. We demonstrate its successful application to segmentation of 2D and 3D electron and multicolor light microscopy datasets with isotropic and anisotropic voxels. We distribute DeepMIB both as open-source multi-platform Matlab code and as a compiled standalone application for Windows, MacOS and Linux. It comes in a single package that is simple to install and use, as it does not require programming knowledge. DeepMIB is suitable for everyone interested in bringing the power of deep learning into their own image segmentation workflows.
Towards bridging the Gap between Biological and Computational Image Segmentation
2007
The goal of this report is to initiate a research effort centred on a joint study of biological vision and computational vision. We focus in particular on biologically plausible models of the image segmentation process as proposed by S. Grossberg and his colleagues. In a first part, written as a tutorial, we address the problem of modelling the behaviour and dynamics of a single neuron. We then turn to the more complex case of a network of neurons, before focusing on the class of neurons involved in the visual cortex, and more specifically in cortical areas V1/V2. We then give a concise summary of the work of S. Grossberg and his colleagues on biologically plausible modelling of the segmentation process. The implementation of their B.C.S. (Boundary Contour System) and F.C.S. (Feature Contour System) models, which form the basis of the FACADE (Form And Colour and DEpth) model of biological vision, is presented and discussed. We highlight some difficulties raised by the implementation of these models and then propose a few modifications that simplify them while making them easier to control and noticeably improving the quality of the resulting segmentations. The simple, biologically plausible segmentation model we propose is then set side by side, for comparison, with several classical approaches from computational vision. A link with the variational approaches more recently introduced in vision concludes the report, which is illustrated with several example results obtained on various real images.
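As a rough illustration of the neuron dynamics discussed in the tutorial part, a Grossberg-style shunting membrane equation can be stepped numerically as follows; the parameter values and the toy input are arbitrary and not taken from the report:

```python
def shunting_step(x, excit, inhib, A=1.0, B=1.0, C=0.25, dt=0.01):
    """One Euler step of a Grossberg-style shunting membrane equation:
        dx/dt = -A*x + (B - x)*excit - (C + x)*inhib
    Activity decays passively at rate A, is driven toward the upper bound B
    by excitation and toward the lower bound -C by inhibition. All parameter
    values here are arbitrary illustrations."""
    dx = -A * x + (B - x) * excit - (C + x) * inhib
    return x + dt * dx

# Toy usage: a single unit settling to its equilibrium under constant input.
x = 0.0
for _ in range(2000):
    x = shunting_step(x, excit=0.8, inhib=0.2)
print(round(x, 3))  # ~0.375 for these inputs
```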
Benchmark for multi-cellular segmentation of bright field microscopy images
BMC Bioinformatics, 2013
Background: Multi-cellular segmentation of bright field microscopy images is an essential computational step when quantifying collective migration of cells in vitro. Despite the availability of various tools and algorithms, no publicly available benchmark has been proposed for evaluating and comparing the different alternatives.
Segmentation Evaluation for Fluorescence Microscopy Images of Biological Objects
Assessing the quality of image segmentation algorithms is an essential step towards the quantitative analysis of biological microscopy images. Given the limited accuracy of segmentation algorithms in all but trivial cases, it is particularly important to define an index to grade the quality of segmentations. Such an index can help guide the choice of algorithms for a particular application, assist in optimizing algorithm parameters, and provide a measure of quality when evaluating scientific conclusions drawn from the results of the segmentation. Motivated by the problem of segmenting microscopy images of thick tissue sections, we propose an approach to evaluate segmentation quality for images that contain a large number of objects (e.g., nuclei). The evaluation of such images is difficult for two reasons: (i) the correspondence of components between two segmentations of the same image is often ambiguous, and (ii) the number of components in the image is typically too large for complete ground truth to be generated. Existing techniques for evaluating segmentation algorithms are inadequate under these constraints. Our proposed evaluation strategy addresses both constraints by suitably modifying a commonly accepted evaluation index. We demonstrate the efficacy of the proposed strategy on typical segmentations of fluorescence microscopy images of cell nuclei.
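As an illustration of the kind of object-level index such a strategy builds on, the sketch below matches every ground-truth object to its best-overlapping segment and averages the resulting Jaccard scores; scoring only ground-truth objects means an incomplete ground truth can still be used. This is a generic example, not the modified index proposed in the paper:

```python
import numpy as np

def mean_matched_jaccard(gt_labels: np.ndarray, seg_labels: np.ndarray) -> float:
    """For every ground-truth object, find the segmented object that overlaps
    it most and record their Jaccard index; return the mean over objects.
    Label value 0 is treated as background in both label images."""
    scores = []
    for gt_id in np.unique(gt_labels):
        if gt_id == 0:
            continue
        gt_mask = gt_labels == gt_id
        # Candidate match: the non-background segment with the largest overlap.
        overlap_ids, counts = np.unique(seg_labels[gt_mask], return_counts=True)
        overlap_ids, counts = overlap_ids[overlap_ids != 0], counts[overlap_ids != 0]
        if overlap_ids.size == 0:
            scores.append(0.0)              # missed object
            continue
        seg_mask = seg_labels == overlap_ids[np.argmax(counts)]
        inter = np.logical_and(gt_mask, seg_mask).sum()
        union = np.logical_or(gt_mask, seg_mask).sum()
        scores.append(inter / union)
    return float(np.mean(scores)) if scores else 0.0
```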
Image Processing and Analysis for Biologists
Methods, 2017
Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. We also discuss the opportunities offered by recent advances in machine learning, especially deep learning, and computer vision. The review includes an overview of the different available software packages and toolkits.
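As a concrete illustration of the preprocessing and segmentation steps such a pipeline typically contains, a minimal scikit-image sketch for a single fluorescence channel might look as follows; all parameter values are illustrative:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, segmentation, feature

def segment_nuclei(image: np.ndarray) -> np.ndarray:
    """Minimal preprocessing + segmentation sketch for one fluorescence frame:
    smooth, threshold, then split touching nuclei with a distance-transform
    watershed. Returns a label image for downstream tracking/measurement."""
    smoothed = filters.gaussian(image, sigma=2)            # preprocessing
    mask = smoothed > filters.threshold_otsu(smoothed)     # global threshold
    distance = ndi.distance_transform_edt(mask)
    peaks = feature.peak_local_max(distance, min_distance=10,
                                   labels=measure.label(mask))
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return segmentation.watershed(-distance, markers, mask=mask)

# Per-object measurements, e.g. as input to tracking or visualisation:
# props = measure.regionprops_table(labels, properties=("label", "area", "centroid"))
```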
Biases from model assumptions in texture sub-cellular image segmentation
SPIE Newsroom, 2012
Actin is the most abundant protein in most multicellular animal cells. It forms a diverse array of structures, particularly filaments, that participate in important processes such as cell motility, division, and contraction. The location and structure of actin filaments involved in cell motility have been studied at whole-cell spatial resolution, but studies at the sub-cellular level are limited by the optical diffraction limit of light microscopes and by the destructive nature of imaging at resolutions higher than half the wavelength of visible light. Previous studies at the sub-cellular level focused on actin interaction with myosin, another protein with which it often works in concert [1]. Our work focuses specifically on actin structures, and analyzes sub-cellular regions using optical confocal fluorescence microscopy images at 200 nanometer resolution.
BMC Bioinformatics
Background: Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability, which influences the validation results of automated cell segmentation pipelines. Results: We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated in two ways: (1) an expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations, and (2) an automated segmentation pipeline applied to the simulated fluorescent cell micrographs reproduces the segmentation performance of that pipeline on real fluorescent cell micrographs. Conclusion: The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data are suited to evaluating image segmentation pipelines more efficiently and reproducibly than is possible with manually annotated real micrographs.
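A toy sketch of the underlying idea of simulation with exact ground truth, far simpler than the proposed approach: random disk-shaped "cells" are rendered with optical blur and noise, and the label image used to generate them is returned as ground truth. All names and parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_micrograph(shape=(256, 256), n_cells=20, radius=8, rng=None):
    """Generate a toy fluorescent micrograph together with its exact
    ground-truth label image. Cells are plain disks with blur and noise,
    which is far cruder than a realistic cell simulation."""
    rng = np.random.default_rng(rng)
    labels = np.zeros(shape, dtype=int)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    for cell_id in range(1, n_cells + 1):
        cy = rng.uniform(radius, shape[0] - radius)
        cx = rng.uniform(radius, shape[1] - radius)
        r = radius * rng.uniform(0.7, 1.3)
        disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        labels[disk & (labels == 0)] = cell_id        # do not overwrite earlier cells
    intensity = gaussian_filter((labels > 0).astype(float), sigma=2)   # optical blur
    noisy = rng.poisson(intensity * 50) + rng.normal(0, 2, shape)      # shot + camera noise
    return noisy, labels

# image, ground_truth = simulate_micrograph(rng=0)
# A segmentation pipeline run on `image` can then be scored against `ground_truth`.
```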