From Petri Dishes to Organ on Chip Platform: The Increasing Importance of Machine Learning and Image Analysis

The fusion of microfluidics and artificial intelligence: a novel alliance for medical advancements

Bioanalysis, 2024

This article explores the integration of microfluidics and artificial intelligence (AI), highlighting the significant advancements in medical technologies achieved through this novel alliance. Microfluidics, which involves the manipulation of fluids at a microscale, has revolutionized biological experiments by enabling precise sample handling and reducing experimental volumes. The fusion with AI, particularly machine learning (ML) and deep learning, enhances the efficiency and capabilities of microfluidic devices in diverse applications, including diagnostics, biomarker profiling, and image processing. AI algorithms facilitate high-throughput analysis, automate tasks, and improve image reconstruction, making microfluidic experiments more precise and faster. Additionally, AI-driven microrobots demonstrate promising applications in targeted drug delivery and precision medicine. The convergence of these technologies is anticipated to drive innovations in personalized healthcare and biomedical engineering, paving the way for advanced microfluidic devices and real-world applications.

Applications of machine learning for simulations of red blood cells in microfluidic devices

BMC Bioinformatics

Background: For the optimization of microfluidic devices for the analysis of blood samples, it is useful to simulate blood cells as elastic objects in a flow of blood plasma. In such numerical models, we primarily need to take into consideration the movement and behavior of the dominant component of the blood, the red blood cells. This can be done quite precisely in small channels and within short timeframes, but larger volumes or longer timescales require different approaches. Instead of simplifying the simulation, we use a neural network to predict the movement of the red blood cells. Results: The neural network learns from data produced by the numerical simulation, which needs to be run only once; alternatively, the data could come from video processing of a recording of a biological experiment. Afterwards, the network is able to predict the movement of the red blood cells, acting as a system of bases that gives an approximate cell velocity at each point of the simulation...
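To make the idea concrete, here is a minimal sketch (not the authors' code) of a network that learns a position-to-velocity mapping from data standing in for a single simulation run; the Poiseuille-like synthetic velocity profile and the two-hidden-layer architecture are illustrative assumptions.

```python
# Minimal sketch: learn a mapping from a cell's position in the channel to its
# velocity. Synthetic data stands in for one run of a numerical simulation.
import torch
import torch.nn as nn

torch.manual_seed(0)
pos = torch.rand(2048, 2)                                 # normalized channel coordinates (x, y)
vel = torch.stack([1.0 - (2 * pos[:, 1] - 1) ** 2,        # Poiseuille-like x-velocity profile
                   torch.zeros(2048)], dim=1)             # no lateral drift in this toy example

model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(pos), vel)
    loss.backward()
    opt.step()

# Predicted velocity at an arbitrary point of the channel.
print(model(torch.tensor([[0.5, 0.25]])).detach())
```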

TIMING 2.0: high-throughput single-cell profiling of dynamic cell-cell interactions by time-lapse imaging microscopy in nanowell grids

Bioinformatics, 2018

Motivation: Automated profiling of cell-cell interactions from high-throughput time-lapse imaging microscopy data of cells in nanowell grids (TIMING) has led to fundamental insights into cell-cell interactions in immunotherapy. This application note aims to enable widespread adoption of TIMING by (i) enabling the computations to occur on a desktop computer with a graphical processing unit instead of a server; (ii) enabling image acquisition and analysis to occur in the laboratory, avoiding network data transfers to/from a server; and (iii) providing a comprehensive graphical user interface. Results: On a desktop computer, TIMING 2.0 takes 5 s/block/image frame, four times faster than our previous method on the same computer, and twice as fast as our previous method (TIMING) running on a Dell PowerEdge server. The cell segmentation accuracy (f-number = 0.993) is superior to our previous method (f-number = 0.821). A graphical user interface provides the ability to inspect the video analysis results, make corrective edits efficiently (one-click editing of an entire nanowell video sequence in 5-10 s) and display a summary of the cell killing efficacy measurements. Availability and implementation: Open source Python software (GPL v3 license), instruction manual, sample data and sample results are included with the Supplement (https://github.com/RoysamLab/TIMING2).
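For readers unfamiliar with the f-number quoted above, the sketch below shows one way a pixel-wise F1 score (f-number) for cell segmentation can be computed; it uses toy synthetic masks and is not the TIMING 2.0 implementation.

```python
# Minimal sketch: pixel-wise f-number (F1 score) comparing a predicted
# segmentation mask against a ground-truth mask.
import numpy as np

def f_number(pred: np.ndarray, truth: np.ndarray) -> float:
    """F1 = 2PR / (P + R) over boolean masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(truth.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

# Toy example: two slightly offset disks standing in for a segmented cell.
yy, xx = np.mgrid[:64, :64]
truth = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2
pred = (xx - 34) ** 2 + (yy - 32) ** 2 < 15 ** 2
print(round(f_number(pred, truth), 3))
```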

A contact-imaging based microfluidic cytometer with machine-learning for single-frame super-resolution processing

PloS one, 2014

Lensless microfluidic imaging with super-resolution processing has become a promising solution for miniaturizing the conventional flow cytometer for point-of-care applications. Previous multi-frame super-resolution processing systems can improve resolution but limit the cell flow rate, and hence throughput, because multiple subpixel-shifted cell images must be captured. This paper introduces single-frame super-resolution processing with on-line machine learning for contact images of cells, and demonstrates a corresponding contact-imaging based microfluidic cytometer prototype for cell recognition and counting. Compared with a commercial flow cytometer, less than 8% error is observed for the absolute number of microbeads, and a coefficient of variation of 0.10 is observed for the cell ratio of mixed RBC and HepG2 cells in solution.
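As a rough illustration of the single-frame, learning-based idea (not the paper's pipeline), the sketch below treats super-resolution as a learned patch-to-patch regression: a ridge-regression mapping from low-resolution to high-resolution patches, fitted on synthetic data, stands in for the on-line machine-learning stage described above.

```python
# Minimal sketch: single-frame super-resolution as patch-wise ridge regression
# from low-resolution (4x4) patches to high-resolution (8x8) patches.
import numpy as np

rng = np.random.default_rng(0)

def downsample(img, k=2):
    """Average-pool by a factor k to mimic a low-resolution contact image."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# Synthetic "high-resolution" training patches (random intensities as a stand-in).
hi_patches = rng.random((500, 8, 8))
lo_patches = np.stack([downsample(p) for p in hi_patches])

X = lo_patches.reshape(len(lo_patches), -1)   # (N, 16) low-resolution inputs
Y = hi_patches.reshape(len(hi_patches), -1)   # (N, 64) high-resolution targets

# Ridge regression: W = (X^T X + lam*I)^-1 X^T Y
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Reconstruct a held-out high-resolution patch from its low-resolution version.
test_hi = rng.random((8, 8))
test_lo = downsample(test_hi).reshape(1, -1)
recon = (test_lo @ W).reshape(8, 8)
print("mean absolute reconstruction error:", np.abs(recon - test_hi).mean())
```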

Machine Learning for Lifespan Inference from Time-Lapse Microfluidic Images of Dividing Yeast Cells

2021

High-throughput microfluidics-based assays can potentially increase the speed and quality of yeast replicative lifespan measurements that are related to aging. One major challenge is to efficiently convert large volumes of time-lapse images into quantitative measurements of yeast cell lifespan. To address this challenge, we developed several deep learning methods to analyze large numbers of images collected from microfluidic experiments. First, we compared three deep learning architectures for classifying microfluidic time-lapse images of dividing yeast cells into categories that represent different stages of the yeast replicative aging process. Second, we evaluated convolutional neural networks for detecting cells in microfluidic images: YOLO and Mask R-CNN were trained on yeast microfluidic images and tested for object detection and feature extraction. The results indicate that YOLO had better performance in terms of object detection and accuracy. In contrast, the...
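A minimal sketch of the classification step is shown below; the four aging-stage classes, 64x64 grayscale input, and small CNN are illustrative assumptions, not the architectures compared in the paper.

```python
# Minimal sketch: a small CNN that assigns microfluidic time-lapse frames to
# replicative-aging stage categories.
import torch
import torch.nn as nn

NUM_STAGES = 4  # illustrative number of aging-stage classes

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, NUM_STAGES),
)

# One training step on a synthetic batch of grayscale frames.
frames = torch.rand(8, 1, 64, 64)
stages = torch.randint(0, NUM_STAGES, (8,))
loss = nn.CrossEntropyLoss()(model(frames), stages)
loss.backward()
print("predicted stages:", model(frames).argmax(dim=1).tolist())
```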

A neural network approach for real-time particle/cell characterization in microfluidic impedance cytometry

Analytical and Bioanalytical Chemistry, 2020

Microfluidic applications such as active particle sorting or selective enrichment require particle classification techniques that are capable of working in real time. In this paper, we explore the use of neural networks for fast label-free particle characterization during microfluidic impedance cytometry. A recurrent neural network is designed to process data from a novel impedance chip layout, enabling real-time multi-parametric analysis of the measured impedance data streams. As demonstrated with both synthetic and experimental datasets, the trained network is able to characterize with good accuracy the size, velocity and cross-sectional position of beads, red blood cells and yeasts, with a unitary prediction time of 0.4 ms. The proposed approach can be extended to other device designs and cell types for electrical-parameter extraction. This combination of microfluidic impedance cytometry and machine learning can serve as a stepping stone to real-time single-cell analysis and sorting.
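The sketch below illustrates how a recurrent network might map a windowed impedance stream to size, velocity, and cross-sectional position; the choice of an LSTM, four input channels (real and imaginary parts at two frequencies), and a 100-sample event window are assumptions, not the published design.

```python
# Minimal sketch: a recurrent model that maps an impedance event window to
# three per-event parameters (size, velocity, cross-sectional position).
import torch
import torch.nn as nn

class ImpedanceRNN(nn.Module):
    def __init__(self, n_channels=4, hidden=32, n_outputs=3):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                  # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)         # final hidden state summarizes the event
        return self.head(h_n[-1])          # (batch, 3): size, velocity, position

model = ImpedanceRNN()
events = torch.randn(16, 100, 4)           # 16 synthetic impedance event windows
print(model(events).shape)                 # torch.Size([16, 3])
```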