Segmentation, tracking, and sub-cellular feature extraction in 3D time-lapse images
Related papers
Joint cell segmentation and tracking using cell proposals
2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), 2016
Time-lapse microscopy imaging has advanced rapidly in the last few decades and is producing large volumes of data in cell and developmental biology. This has increased the importance of automated analyses, which depend heavily on cell segmentation and tracking, as these are the initial steps in computing most biologically important cell properties. In this paper, we propose a novel joint cell segmentation and tracking method for fluorescence microscopy sequences that generates a large set of cell proposals, creates a graph representing different cell events, and then iteratively finds the most probable path within this graph, yielding cell segmentations and tracks. We evaluate our method on three datasets from the ISBI Cell Tracking Challenge and show that our greedy, non-optimal joint solution improves performance compared with state-of-the-art methods.
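The iterative graph-search idea described above can be sketched in a few lines. This is a hypothetical toy version, not the authors' implementation: proposals per frame carry detection scores, consecutive-frame edges carry association scores (all names and data structures here are illustrative), and the best path is extracted repeatedly, removing its proposals each time.

```python
# Toy sketch of greedy track extraction from a cell-proposal graph.
# frames: list of {proposal_id: detection_score}, one dict per frame;
# edges: {(a, b): association_score} for proposals in consecutive frames.
# Proposal ids are assumed globally unique across frames.

def best_path(frames, edges):
    """Highest-scoring frame-by-frame path, found by dynamic programming."""
    score = {p: s for p, s in frames[0].items()}
    back = {}
    for t in range(1, len(frames)):
        new_score = {}
        for p, s in frames[t].items():
            cands = [(score[q] + edges.get((q, p), float("-inf")), q)
                     for q in frames[t - 1]]
            best, prev = max(cands)
            new_score[p] = best + s
            back[p] = prev
        score = new_score
    end = max(score, key=score.get)
    path = [end]
    while path[-1] in back:          # walk the back-pointers to frame 0
        path.append(back[path[-1]])
    return score[end], path[::-1]

def greedy_tracks(frames, edges, n_tracks):
    """Extract n_tracks paths greedily; each proposal serves one track only.
    Assumes enough proposals remain in every frame for n_tracks paths."""
    tracks = []
    frames = [dict(f) for f in frames]     # work on a copy
    for _ in range(n_tracks):
        _, path = best_path(frames, edges)
        tracks.append(path)
        for t, p in enumerate(path):
            del frames[t][p]
    return tracks
```

A real system would also model cell events (division, entry, exit) as extra graph nodes; this sketch covers only one-to-one frame links.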
Cell segmentation, tracking, and mitosis detection using temporal context
Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2005
The Large Scale Digital Cell Analysis System (LSDCAS) developed at the University of Iowa provides capabilities for extended-time live cell image acquisition. This paper presents a new approach to quantitative analysis of live cell image data. By using time as an extra dimension, level set methods are employed to determine cell trajectories from 2D + time data sets. When identifying the cell trajectories, cell cluster separation and mitotic cell detection steps are performed. Each trajectory corresponds to the motion pattern of an individual cell in the data set. At each time frame, the number of cells, cell locations, cell borders, cell areas, and cell states are determined and recorded. The proposed method can help solve cell analysis problems of general importance, including cell pedigree analysis and cell tracking. The developed method was tested on cancer cell image sequences and its performance compared with manually defined ground truth. The similarity Kappa Index is 0....
Graph Neural Network for Cell Tracking in Microscopy Videos
2022
We present a novel graph neural network (GNN) approach for cell tracking in high-throughput microscopy videos. By modeling the entire time-lapse sequence as a directed graph whose nodes represent cell instances and whose edges represent their associations, we extract the entire set of cell trajectories by looking for maximal paths in the graph. This is accomplished by several key contributions incorporated into an end-to-end deep learning framework. We exploit a deep metric learning algorithm to extract cell feature vectors that distinguish between instances of different biological cells and group together instances of the same cell. We introduce a new GNN block type that enables a mutual update of node and edge feature vectors, thus facilitating the underlying message passing process. The message passing concept, whose extent is determined by the number of GNN blocks, is of fundamental importance as it enables the `flow' of information between nodes and edges much beyond their neighb...
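The mutual node/edge update that such a GNN block performs can be illustrated without any learned weights. The following is a hypothetical sketch using fixed averaging in place of learned transformations (all function and variable names are illustrative, not from the paper's code): edges are refreshed from their endpoint nodes, then nodes aggregate their incident updated edges.

```python
# Toy GNN block: one round of mutual node/edge feature updates.
# nodes: {node_id: [float, ...]}; edges: {(src, dst): [float, ...]}.
# Real blocks would use learned weight matrices instead of plain averages.

def gnn_block(nodes, edges):
    # 1) edge update: mix each edge feature with both endpoint node features
    new_edges = {}
    for (u, v), e in edges.items():
        new_edges[(u, v)] = [
            (ei + ui + vi) / 3.0
            for ei, ui, vi in zip(e, nodes[u], nodes[v])
        ]
    # 2) node update: average each node feature with its incident edges
    new_nodes = {}
    for n, feat in nodes.items():
        incident = [e for (u, v), e in new_edges.items() if n in (u, v)]
        if not incident:
            new_nodes[n] = list(feat)
            continue
        mean_edge = [sum(vals) / len(vals) for vals in zip(*incident)]
        new_nodes[n] = [(fi + mi) / 2.0 for fi, mi in zip(feat, mean_edge)]
    return new_nodes, new_edges
```

Stacking several such blocks is what lets information propagate beyond immediate graph neighbours, which is the message-passing extent the abstract refers to.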
PACESS: Practical AI-based Cell Extraction and Spatial Statistics for large 3D biological images
bioRxiv (Cold Spring Harbor Laboratory), 2022
A central tenet of biology and medicine is that there is a functional meaning underlying the cellular organisation of tissues and organs. Recent advances in histopathology and microscopy have achieved detailed visualisation of an increasing number of cell types in situ. Efficient methodologies to extract data from 3D images and draw detailed statistical inferences are, however, still lacking. Here we present a pipeline that can identify the location and classification of millions of cells contained in large 3D biological images using object detection neural networks that have been trained on more readily annotated 2D data alone. To draw meaning from the resulting data, we introduce a series of statistical techniques that are tailored to work with spatial data, resulting in a 3D statistical map of the tissue from which multi-cellular relationships can be clearly understood. As illustrations of the power of the approach, we apply these techniques to bone marrow images from intravital microscopy (IVM) and clarified 3D thick sections. These examples demonstrate that precise, large-scale data extraction is feasible, and that statistical techniques that are specifically designed for spatial data can distinctly reveal coherent, useful biological information.
Background: Histology is a core technique for the examination of tissue structure at the cellular level. With advances in tissue processing, antibody staining and microscopy, the resulting images are ever more complex and can contain information on hundreds of thousands of cells over increasingly large areas 1-5. Methods for data extraction and analysis following acquisition are, however, less developed 1, 3, 6-8, with image analysis currently being a challenging bottleneck for the field.
The most widely used approaches involve segmenting cells from their background using intensity thresholding, akin to gating in flow cytometry, and then assessing their position against a basic null hypothesis that they are randomly located 5, 9, 10. While a threshold-based segmentation approach is effective for clearly defined, well-separated populations, it is not well suited to more complex images with substantial background fluorescence, overlapping cells, cells in close proximity, or cells that require multivariate assessment for their classification. In addition, pairwise hypothesis testing is restrictive, as it does not facilitate the direct examination of the relationships between cells, making it challenging to formulate definitive conclusions from these extensive, spatially dependent, high-dimensional data. Here we bridge that gap by introducing a modern data extraction and analysis pipeline that is tailored to be suitable for any such 3D data source, reducing the workflow burden and improving data interpretation. Deep neural networks are recognised as the gold standard for object classification and image segmentation in both 2D and 3D biological images 11-14. Object-detection deep neural networks have, however, only infrequently been applied in a 3D context, the primary barrier being the difficulty of creating a sufficiently large annotated 3D dataset for training 15, 16. Here we introduce a method that circumvents that difficulty, making the approach less burdensome. We use an augmented object detection deep neural network that is trained on 2D data alone, which can be rapidly annotated, and then applied to each image layer in the 3D data in turn. We then develop a method to combine the output from multiple image layers to identify each cell's location, size and class within the 3D space.
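The layer-combination step can be sketched as a simple stitching problem. This is a hypothetical illustration of the general idea (the paper's actual combination method is not reproduced here): a 2D detection box in slice z joins an open 3D object whose most recent box, in slice z-1, overlaps it sufficiently; otherwise it starts a new object.

```python
# Toy sketch: stitching per-slice 2D detection boxes into 3D objects.
# A box is (x1, y1, x2, y2); layers is a list (one per z) of lists of boxes.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def stitch_layers(layers, thr=0.3):
    """Returns a list of 3D objects, each a list of (z, box)."""
    objects, open_objs = [], []    # open_objs: objects ending in layer z-1
    for z, boxes in enumerate(layers):
        nxt = []
        for box in boxes:
            scored = [(iou(o[-1][1], box), o) for o in open_objs]
            best = max(scored, default=(0.0, None), key=lambda s: s[0])
            if best[0] >= thr:
                best[1].append((z, box))   # continue an existing object
                open_objs.remove(best[1])
                nxt.append(best[1])
            else:
                obj = [(z, box)]           # start a new 3D object
                objects.append(obj)
                nxt.append(obj)
        open_objs = nxt
    return objects
```

Each stitched run of boxes then yields a cell's 3D location (centroid over z), size (extent in x, y, z), and class (e.g. a majority vote over the per-slice labels).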
Advances in Intelligent Systems and Computing, 2017
Recent developments in live-cell microscopy imaging have led to the emergence of Single Cell Biology. This field has also been supported by the development of cell segmentation and tracking algorithms for data extraction. The validation of these algorithms requires benchmark databases, with manually labeled or artificially generated images, so that the ground truth is known. To generate realistic artificial images, we have developed a simulation platform capable of generating biologically inspired objects with various shapes and sizes, which are able to grow, divide, move, and form specific clusters. Using this platform, we compared four tracking algorithms: Simple Nearest-Neighbor (NN), NN with Morphology (NNm), and two DBSCAN-based methodologies. We show that Simple NN performs well on objects with small velocities, while the others perform better at higher velocities and when objects form clusters. This platform for benchmark image generation and image analysis algorithm testing is openly available at http://griduni.uninova.pt/Clustergen/ClusterGen_v1.0.zip.
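The Simple NN baseline compared above is straightforward to sketch. The following is a hypothetical minimal version (names and the gating parameter are illustrative): each tracked object is linked to the closest unclaimed detection in the next frame, provided it lies within a gating radius, which is exactly why it degrades at high velocities.

```python
# Toy nearest-neighbor tracker over 2D centroids.
# frames: list of lists of (x, y) centroids, one list per frame.
import math

def nn_tracks(frames, max_step=5.0):
    """Returns tracks, each a list of (frame_index, (x, y))."""
    tracks = [[(0, p)] for p in frames[0]]
    active = list(tracks)
    for t in range(1, len(frames)):
        remaining = list(frames[t])
        still_active = []
        for tr in active:
            _, last = tr[-1]
            if remaining:
                best = min(remaining, key=lambda p: math.dist(last, p))
                if math.dist(last, best) <= max_step:
                    tr.append((t, best))
                    remaining.remove(best)
                    still_active.append(tr)
                    continue
            # no detection within the gate: the track ends here
        for p in remaining:            # unmatched detections open new tracks
            tr = [(t, p)]
            tracks.append(tr)
            still_active.append(tr)
        active = still_active
    return tracks
```

When objects move further per frame than the gap between neighbours, the nearest detection is often the wrong one; that is the failure mode the morphology-aware and DBSCAN-based variants are designed to mitigate.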
Tracking of cells in a sequence of images using a low-dimension image representation
2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2008
We propose a new image analysis method to segment and track cells in a growing colony. By using an intermediate low-dimension image representation yielded by a reliable over-segmentation process, we combine the advantages of two-step methods (the possibility to check intermediate results) with the power of simultaneous segmentation and tracking algorithms, which are able to use temporal redundancy to resolve segmentation ambiguities. We improve and measure the tracking performance with a notion of decision risk derived from cell motion priors. Our algorithm makes it possible to extract the complete lineage of a growing colony over up to seven generations without requiring user interaction.
Proceedings of the 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics
In order to predict cell population behavior, it is important to understand the dynamic characteristics of individual cells. Individual induced pluripotent stem (iPS) cells in colonies have been difficult to track over long times, both because segmentation is challenging due to the close proximity of cells and because cell morphology at the time of cell division does not change dramatically in phase contrast images; image features do not provide sufficient discrimination for 2D neural network models of label-free images. However, these cells do not move significantly during division, and they display a distinct temporal pattern of morphologies. As a result, we can detect cell division with images overlaid in time. Using a combination of a 3D neural network applied over time-lapse data to find regions of cell division activity, followed by a 2D neural network for images in these selected regions to find individual dividing cells, we developed a robust detector of iPS cell division. We created an initial 3D neural network to find 3D image regions in (x,y,t) in which identified cell divisions occurred, then used semi-supervised training with additional stacks of images to create a more refined 3D model. These regions were then processed by our 2D neural network to find the location and time immediately before cells divide, when they contain two sets of chromatin, information needed to track the cells after division. False positives from the 3D inference results were identified and removed with the addition of the 2D model. We successfully identified 37 of the 38 cell division events in our manually annotated test image stack, and specified the time and (x,y) location of each cell just before division within an accuracy of 10 pixels.
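The "images overlaid in time" idea exploits the fact that dividing cells barely move: projecting a short window of frames onto one image concentrates the division's distinctive temporal morphology at a single (x, y) location. A minimal sketch of such an overlay, assuming frames are 2D intensity grids (this is an illustration of the principle, not the authors' pipeline):

```python
# Toy temporal overlay: pixelwise maximum over a sliding window of frames.
# frames: list of 2D intensity grids (lists of lists of numbers).

def overlay_window(frames, start, length):
    """Pixelwise max over frames[start : start + length]."""
    window = frames[start:start + length]
    h, w = len(window[0]), len(window[0][0])
    return [[max(f[y][x] for f in window) for x in range(w)]
            for y in range(h)]
```

Candidate (x, y, t) regions found in such overlays would then be passed to a per-frame 2D classifier to pinpoint the individual dividing cell, mirroring the 3D-then-2D cascade described above.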
Multi-feature contour evolution for automatic live cell segmentation in time lapse imagery
Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference, 2008
Cell boundary segmentation in live cell image sequences is the first step towards quantitative analysis of cell motion and behavior. Time-lapse microscopy imaging produces large volumes of image sequence collections, which require fast and robust automatic segmentation of cell boundaries so that further automated tools, such as cell tracking, can quantify and classify cell behavior. This paper presents a methodology that utilizes the temporal context of cell image sequences to accurately delineate the boundaries of non-homogeneous cells. A novel flux tensor-based detection of moving cells provides initial localization that is further refined by a multi-feature level set-based method using an efficient additive operator splitting scheme. The segmentation result is processed by a watershed-based algorithm to avoid merging boundaries of neighboring cells. By utilizing robust features, the level-set algorithm produces accurate segmentation for non-homogeneous cells ...
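The motion cue underlying flux tensor-based detection is, at its core, the temporal intensity derivative: pixels belonging to moving cells show large temporal change. The following toy sketch thresholds the squared central-difference temporal derivative; it illustrates only that cue, not the full flux tensor computation (which also integrates spatial derivatives over a local window).

```python
# Toy motion detector: squared temporal derivative, thresholded.
# prev, curr, nxt: consecutive 2D intensity grids; curr fixes the output shape.

def motion_mask(prev, curr, nxt, thr):
    """1 where the central-difference temporal derivative squared exceeds thr."""
    h, w = len(curr), len(curr[0])
    return [[1 if ((nxt[y][x] - prev[y][x]) / 2.0) ** 2 > thr else 0
             for x in range(w)] for y in range(h)]
```

In the pipeline above, such a mask would only seed the level-set refinement; the watershed step then separates touching cells that the mask lumps together.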
Tracking-Assisted Segmentation of Biological Cells
Cornell University - arXiv, 2019
U-Net and its variants have been demonstrated to work sufficiently well in biological cell tracking and segmentation. However, these methods still suffer in the presence of complex processes such as collision of cells, mitosis, and apoptosis. In this paper, we augment U-Net with Siamese matching-based tracking and propose to track individual nuclei over time. By modelling the behavioural pattern of the cells, we achieve improved segmentation and tracking performance through a re-segmentation procedure. Our preliminary investigations on the Fluo-N2DH-SIM+ and Fluo-N2DH-GOWT1 datasets demonstrate that absolute improvements of up to 3.8% and 3.4% can be obtained in segmentation and tracking accuracy, respectively.
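Siamese matching-based tracking associates nuclei across frames by comparing learned embedding vectors. A minimal sketch of the association step, assuming embeddings have already been produced by a Siamese network (function names, the similarity threshold, and the greedy strategy are all illustrative, not the paper's method):

```python
# Toy Siamese association: greedily link nuclei between consecutive frames
# by cosine similarity of their embedding vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_nuclei(prev_emb, next_emb, min_sim=0.5):
    """prev_emb, next_emb: {nucleus_id: [float, ...]}.
    Returns {prev_id: next_id}, highest-similarity pairs first."""
    pairs = sorted(
        ((cosine(e1, e2), p, n)
         for p, e1 in prev_emb.items() for n, e2 in next_emb.items()),
        reverse=True)
    matches, used = {}, set()
    for sim, p, n in pairs:
        if sim < min_sim:
            break                      # remaining pairs are too dissimilar
        if p not in matches and n not in used:
            matches[p] = n
            used.add(n)
    return matches
```

Nuclei left unmatched on either side are where the events the abstract mentions (mitosis, apoptosis, collisions) would be handled, and the resulting tracks feed the re-segmentation step.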
Algorithms
Background: Time-lapse microscopy imaging is a key approach in an increasing number of biological and biomedical studies for observing the dynamic behavior of cells over time, which helps quantify important data such as the number of cells and their sizes, shapes, and dynamic interactions across time. Label-free imaging is an essential strategy for such studies, as it ensures that native cell behavior remains uninfluenced by the recording process. Computer vision and machine/deep learning approaches have made significant progress in this area. Methods: In this review, we present an overview of methods, software, data, and evaluation metrics for the automatic analysis of label-free microscopy imaging. We aim to provide the interested reader with a unique source of information, with links for further detailed information. Results: We review the most recent methods for cell segmentation, event detection, and tracking. Moreover, we provide lists of publicly available software and datasets....