A Cervix Detection Driven Deep Learning Approach for Cow Heat Analysis from Endoscopic Images

Udderly accurate: A deep learning based modelling for determination of dairyness of Sahiwal cow using computer vision

The Indian Journal of Animal Sciences, 2024

Dairy farming is a crucial agricultural practice for food and nutritional security. Selection of the best milch animals has always been a challenging task for optimising production efficiency. Milk yield depends on a number of factors, including inherent linear traits: quantifiable physical characteristics associated with the production and reproduction capabilities of dairy animals. This article presents an approach for classifying Sahiwal cows into high, medium and low yielder categories from images of linear traits using deep learning and computer vision techniques. A dataset of 4110 images highlighting important linear traits such as udder size, shape and texture of different categories of Sahiwal cows was created for training, validation and testing of the model. Images were collected from the herd of Sahiwal cows maintained at the National Dairy Research Institute, Karnal. The dataset was pre-processed using image augmentation techniques to enhance the model's robustness. Different CNN architectures, namely InceptionV3, ResNet50 and GoogleNet, were trained and optimised. The InceptionV3 model demonstrated the best result among these models, with 85.64% testing accuracy in classifying the cows. The developed model can be used under field conditions to determine the dairyness of a cow in real time in place of human experts. Additionally, the model's interpretability is evaluated through feature visualisation, showcasing the importance of different udder features in milk yield prediction.
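As a rough illustration of the transfer-learning setup this abstract describes (the exact training configuration is not stated), a minimal Keras sketch of an InceptionV3 classifier for the three yielder categories could look like the following; the directory layout, augmentation choices and hyper-parameters are assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' code): InceptionV3 transfer learning for a
# 3-class yield classifier (high / medium / low), assuming images are organised
# in class-named sub-folders under data/train and data/val.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)          # InceptionV3's native input resolution
NUM_CLASSES = 3                # high, medium, low yielder

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)

# Simple augmentation, standing in for the pre-processing described above.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False         # freeze the backbone for the first training stage

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.inception_v3.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```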

Development of an Automated Body Temperature Detection Platform for Face Recognition in Cattle with YOLO V3-Tiny Deep Learning and Infrared Thermal Imaging

Applied Sciences

This study developed an automated temperature measurement and monitoring platform for dairy cattle. The platform used the YOLO V3-tiny (you only look once, YOLO) deep learning algorithm to identify and classify dairy cattle images. The system included a total of three layers of YOLO V3-tiny identification: (1) dairy cow body; (2) individual number (identity, ID); (3) thermal image of eye socket identification. We recorded each cow’s individual number and body temperature data after the three layers of identification, and carried out long-term body temperature tracking. The average prediction score of the recognition rate was 96%, and the accuracy was 90.0%. The thermal image of eye socket recognition rate was >99%. The area under the receiver operating characteristic curves (AUC) index of the prediction model was 0.813 (0.717–0.910). This showed that the model had excellent predictive ability. This system provides a rapid and convenient temperature measurement solution for ranche...
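To make the last stage of such a pipeline concrete, here is a hedged sketch of extracting the maximum eye-socket temperature from a radiometric frame and evaluating a fever prediction with AUC, as reported above; the box format, the 39.5 °C cut-off and the toy values are placeholders, not the authors' data.

```python
# Minimal sketch (assumptions, not the authors' pipeline): take the maximum
# temperature inside a detected eye-socket bounding box and evaluate a fever
# classifier with the AUC metric mentioned in the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score

def eye_socket_max_temp(temp_matrix: np.ndarray, box: tuple) -> float:
    """box = (x1, y1, x2, y2) in pixel coordinates of the thermal image."""
    x1, y1, x2, y2 = box
    return float(temp_matrix[y1:y2, x1:x2].max())

# Toy example: per-cow maximum eye-socket temperatures and true fever labels.
temps = np.array([38.6, 39.8, 38.9, 40.1, 39.2, 38.4])
labels = np.array([0, 1, 0, 1, 1, 0])        # 1 = febrile (ground truth)

auc = roc_auc_score(labels, temps)           # temperatures used as the score
preds = (temps >= 39.5).astype(int)          # hypothetical decision threshold
print(f"AUC = {auc:.3f}, flagged cows: {preds.tolist()}")
```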

Validation of a deep learning-based image analysis system to diagnose subclinical endometritis in dairy cows

PLOS ONE, 2022

The assessment of polymorphonuclear leukocyte (PMN) proportions (%) in endometrial samples is the hallmark of subclinical endometritis (SCE) diagnosis. Yet, a non-biased, automated diagnostic method for assessing PMN% in endometrial cytology slides has not been validated so far. We aimed to validate computer vision software based on deep learning to quantify the PMN% in endometrial cytology slides. Uterine cytobrush samples were collected from 116 postpartum Holstein cows. After sampling, each cytobrush was rolled onto three different slides. One slide was stained using Diff-Quick, while a second was stained using Naphthol (the gold standard for staining PMN). One single observer evaluated the slides twice on different days under light microscopy. The last slide was stained with a fluorescent dye, and the PMN% was assessed twice using a fluorescence microscope connected to a smartphone. Fluorescent images were analyzed via the Oculyze Monitoring Uterine Health (MUH) system,...
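For orientation, a small sketch of the quantities involved: computing PMN% from counts produced by an image-analysis step, flagging SCE above a threshold, and measuring agreement between two reads with Cohen's kappa. The 5% cut-off and the count values are illustrative assumptions, not taken from this study.

```python
# Minimal sketch under stated assumptions: PMN% from cell counts, SCE flagging
# above a hypothetical threshold, and agreement between two reads via kappa.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def pmn_percentage(pmn_count: int, total_cells: int) -> float:
    return 100.0 * pmn_count / total_cells

THRESHOLD = 5.0  # % PMN; an assumed cut-off for illustration only

# Two evaluations of the same 6 cows (e.g., microscopy read 1 vs. read 2).
pmn_read1 = np.array([2.0, 7.5, 1.0, 12.0, 4.5, 6.0])
pmn_read2 = np.array([2.5, 6.8, 0.5, 11.0, 5.5, 5.0])

sce1 = (pmn_read1 >= THRESHOLD).astype(int)
sce2 = (pmn_read2 >= THRESHOLD).astype(int)
print("kappa:", cohen_kappa_score(sce1, sce2))
```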

Performing and Evaluation of Deep Learning Models for Uterus Detection on Soft-tissue Cadavers in Laparoscopic Gynecology

IEEE Access

Computer Vision is one of the technological forces shaping modern surgical practice, including laparoscopic gynecology, where computer-aided object recognition can assist surgeons during live procedures and support offline surgical training. However, most previous work has been retrospective, focused on methodology from a computational viewpoint, and based on minimal datasets showing how Computer Vision can be used for laparoscopic surgery. The main purpose of this paper is not only to evaluate state-of-the-art object detection models for uterus detection, but also to emphasize clinical application through collaboration between surgeons and peopleware, which is important for the further development and adoption of this technology and for improved clinical outcomes in laparoscopic gynecology. Two experiment phases were conducted. Phase #1 applied 8 different deep learning models for uterus detection, tested on a dataset obtained from 40 public YouTube videos of laparoscopic gynecologic surgery. To prove the technology before performing on patients, and in view of the ethics of human experimentation, extensive testing on soft-tissue cadavers was used: in Phase #2 the best models from the first phase served a real-time streaming feed during 4 soft-tissue cadaver laparoscopic surgeries, arguably the best available setting because cadavers are the closest to live patients in shape and structure. Four models pre-trained on the COCO 2017 dataset from the TensorFlow Model Zoo (CenterNet, EfficientDet, SSD and Faster R-CNN), plus YOLOv4 on the Darknet framework and YOLOv4, YOLOv5 and YOLOv7 on PyTorch, were scrutinized. Inference speed (frames per second, FPS), F1-score and AP (Average Precision) were used as evaluation metrics. The results showed that the three YOLO models on PyTorch outperformed the others on all effectiveness metrics, with inference speeds suitable for real-time surgery. Lastly, a useful by-product of this work is the annotated dataset for uterus detection drawn from both the public videos and the live feed from the cadaver surgeries.
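As a rough sketch of the kind of real-time evaluation described above, the snippet below runs a YOLOv5 model on a video stream and measures frames per second. The paper's trained uterus-detection weights are not public, so generic pretrained yolov5s weights, the confidence threshold and the file name are stand-ins.

```python
# Minimal sketch (placeholder weights, not the paper's models): detection on a
# video stream with inference speed measured in FPS.
import time
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # placeholder weights
model.conf = 0.5                       # confidence threshold (assumption)

cap = cv2.VideoCapture("laparoscopy_sample.mp4")  # hypothetical file name
frames, start = 0, time.time()
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[:, :, ::-1])  # BGR -> RGB, as the API expects
    boxes = results.xyxy[0]             # (x1, y1, x2, y2, conf, class) per box
    frames += 1
cap.release()
print(f"~{frames / (time.time() - start):.1f} FPS on this hardware")
```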

Body condition estimation on cows from depth images using Convolutional Neural Networks

Computers and Electronics in Agriculture

BCS ("Body Condition Score") is a method used to estimate body fat reserves and accumulated energy balance of cows. BCS heavily influences milk production, reproduction, and health of cows. Therefore, it is important to monitor BCS to achieve better animal response, but this is a time-consuming and subjective task performed visually by expert scorers. Several studies have tried to automate BCS of dairy cows by applying image analysis and machine learning techniques. This work analyzes these studies and proposes a system based on Convolutional Neural Networks (CNNs) to improve overall automatic BCS estimation, whose use might be extended beyond dairy production. The developed system has achieved good estimation results in comparison with other systems in the area. Overall accuracy of BCS estimations within 0.25 units of difference from true values was 78%, while overall accuracy within 0.50 units was 94%. Similarly, weighted precision and recall, which took into account imbalance BCS distribution in the built dataset, show similar values considering those error ranges. 1. uses images as the only information source (without external data such as weight, age, or lactation stage of the cow), 2. uses low-cost hardware resources, 3. gets real-time estimations,

Refinement of Methodology for Better Estimation of Pregnancy Diagnosis in Macaca fascicularis by Deep Computational Analysis of The Thermal Images

Jurnal Veteriner, 2021

The use of thermal imaging has been documented in wild animals because it delivers real-time results with little or no restraint and no invasive procedures, which is significant for better animal well-being. This paper explores thermal imaging studies as part of a non-invasive approach to evaluating physiological function, focusing on refinement of the methods, followed by computational analysis of the images to validate them as predictive tools for pregnancy diagnosis. We refined the thermal imaging methods and applied deep learning-based computational analysis for pregnancy diagnosis of cynomolgus monkeys (Macaca fascicularis) at the breeding facility of the Primate Research Center, LPPM IPB University. Subjects had already been identified by ultrasound as pregnant at 80, 120 and 130 days. Thermal images along with the temperature data were obtained with a FLIR ONE camera in sedated animals in dorso-ventral recumb...

Deep Learning Based Egg Fertility Detection

Veterinary Sciences

This study investigates the implementation of deep learning (DL) approaches to the fertile egg-recognition problem, based on incubator images. We aimed to classify chicken eggs according to both segmentation and fertility status with a Mask R-CNN-based approach. In this manner, images can be handled by a single DL model to successfully perform detection, classification and segmentation of fertile and infertile eggs. Two different test processes were used in this study. In the first test application, a data set containing five fertile eggs was used. In the second, testing was carried out on a data set containing 18 fertile eggs. For evaluation, we used AP, one of the most important metrics for evaluating object detection and segmentation models in computer vision. When the results were examined, the optimum IoU threshold was determined to be 0.7. At an IoU of 0.7, it was observed that all fertile eggs in the incubator were de...
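Since the abstract hinges on an IoU threshold of 0.7, here is a small sketch of how mask IoU is computed and compared against that threshold; the binary masks are toy arrays, not outputs of the authors' Mask R-CNN model.

```python
# Minimal sketch (not the authors' evaluation code): IoU between a predicted and
# a ground-truth binary egg mask, checked against the 0.7 threshold cited above.
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union else 0.0

gt = np.zeros((64, 64), dtype=np.uint8)
gt[10:40, 10:40] = 1                     # ground-truth egg region
pred = np.zeros_like(gt)
pred[12:42, 12:42] = 1                   # slightly shifted prediction

iou = mask_iou(pred, gt)
print(f"IoU = {iou:.2f}, counted as detected: {iou >= 0.7}")
```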

Image analysis for individual identification and feeding behaviour monitoring of dairy cows based on Convolutional Neural Networks (CNN)

Biosystems Engineering, 2020

Keywords: dairy cow identification; feeding behaviour; deep learning; convolutional neural network. In precision livestock farming, individual identification and analysis of feeding behaviour have a great impact on optimising productivity and improving health monitoring. The sensors usually used to measure parameters from an individual dairy cow (RFID, accelerometer, etc.) are invasive, uncomfortable and stressful for the animals. To overcome these limitations, we developed a non-invasive system based entirely on image analysis. An image of the top of the dairy cow's head, captured on a dairy farm, was used as the Region of Interest (ROI), and different classifiers based on a Convolutional Neural Network (CNN) model were used to monitor feeding behaviour and perform individual identification of seventeen Holstein dairy cows. One CNN detects the dairy cow's presence in the feeder zone. A second CNN determines the dairy cow's position in front of the feeder, standing or feeding. A third CNN checks the availability of food in the feeder and, if present, recognises the food category. The last CNN is devoted to individual identification of the dairy cow. Furthermore, we also explore the contribution of a CNN coupled with a Support Vector Machine (SVM) and the combination of multiple CNNs in the individual identification process. For the evaluation step, we used a dataset composed of 7265 images of 17 Holstein dairy cows during feeding periods on a commercial farm. Results show that our method yields high scores in each step of the algorithm.
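A hedged sketch of the CNN-plus-SVM variant mentioned above: a generic ImageNet backbone (ResNet50 here, chosen for illustration, not necessarily the authors' network) extracts features from head-region crops and an SVM assigns the cow identity. The toy data, hyper-parameters and crop size are assumptions.

```python
# Minimal sketch of a CNN feature extractor feeding an SVM identity classifier,
# under the assumptions stated in the lead-in.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")  # 2048-d features

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: (N, 224, 224, 3) head-region crops."""
    x = tf.keras.applications.resnet50.preprocess_input(images.astype("float32"))
    return backbone.predict(x, verbose=0)

# Toy stand-ins for the 17-cow dataset described above.
train_imgs = np.random.rand(34, 224, 224, 3) * 255
train_ids = np.repeat(np.arange(17), 2)          # two crops per cow

clf = SVC(kernel="rbf", C=10.0)                  # hyper-parameters are guesses
clf.fit(extract_features(train_imgs), train_ids)
```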

A Review on Recent Progress in Thermal Imaging and Deep Learning Approaches for Breast Cancer Detection

IEEE Access

Developing a breast cancer screening method is very important to facilitate early breast cancer detection and treatment. Building a screening method using a medical imaging modality that does not cause tissue damage (non-invasive) and does not involve physical contact is challenging. Thermography, a non-invasive and non-contact cancer screening method, can detect tumors at an early stage, even under precancerous conditions, by observing the temperature distribution in both breasts. The thermograms obtained by thermography can be interpreted using deep learning models such as convolutional neural networks (CNNs). CNNs can automatically classify breast thermograms into categories such as normal and abnormal. Despite their demonstrated utility, CNNs have not been widely used in breast thermogram classification. In this study, we aimed to summarize current work and progress in breast cancer detection based on thermography and CNNs. We first discuss the potential of breast thermography for early breast cancer detection, providing an overview of available breast thermal datasets, including those that are publicly accessible. We also discuss the characteristics of breast thermograms and the differences between healthy and cancerous thermographic patterns. Breast thermogram classification using a CNN model is described step by step, including a simulation example illustrating feature learning. We cover most research related to the implementation of deep neural networks for breast thermogram classification and propose future research directions: developing representative datasets, feeding the segmented image, assigning a good kernel, and building a lightweight CNN model to improve CNN performance. Index Terms: breast cancer, convolutional neural network, deep learning, early detection, thermogram.
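To illustrate what a lightweight CNN of the kind this review calls for might look like, here is a minimal Keras sketch for binary thermogram classification (normal vs. abnormal); the layer sizes and the 128x128 single-channel input are assumptions, not taken from any specific reviewed paper.

```python
# Minimal sketch of a lightweight CNN for binary thermogram classification,
# under the assumptions stated in the lead-in.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),      # single-channel thermogram
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # P(abnormal)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```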

Detection and Classification of Pregnancy State Using Deep Learning Technique

Omdurman Islamic University Journal, 2021

This work aims to design and develop a model that detects and classifies pregnancy health status. Ultrasound is one of the most prevalent technologies in clinical imaging, as it enables a doctor to evaluate, analyze and treat diseases. Most complications of pregnancy lead to serious problems that restrict healthy growth, causing weakness or death. In this work, an image processing system was developed to recognize health status during pregnancy and classify it across all stages of fetal development. A deep learning technique was implemented: a CNN (ResNet50) image recognition model was applied to detect and classify fetal health status from ultrasound images. The proposed model provides an integrated solution for each pregnancy period, identifying all stages of fetal development, starting from the pre-pregnancy stage (where the suitability of the uterus for pregnancy, the size of the ovum, and its ability to form the fetus are assessed) and up to...
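For orientation, a hedged PyTorch sketch of the ResNet50 setup described above: an ImageNet-pretrained backbone with its final layer replaced for stage classification. The number of classes, the preprocessing and the learning rate are placeholders, since the abstract does not specify them.

```python
# Minimal sketch (assumptions as stated in the lead-in): ResNet50 with a
# replaced classification head for pregnancy-stage classification.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_STAGES = 4  # hypothetical number of pregnancy-stage classes

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_STAGES)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Training would iterate over a DataLoader of labelled ultrasound frames:
# for images, labels in loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward(); optimizer.step()
```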