Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs

Model selection for 24/7 pig position and posture detection by 2D camera imaging and deep learning

Computers and Electronics in Agriculture, 2021

Continuous monitoring of pig posture is important for better understanding animal behavior. Previous studies focused on day recordings and did not investigate how deep learning models could be applied over longer periods, including night recordings under near-infrared light from several pens. Therefore, the objective of this research was to study how a suitable deep learning model for continuous 24/7 pig posture detection could be achieved. We selected a deep learning model from over 150 different model configurations, covering experiments on 3 detection heads, 4 base networks, 5 transfer datasets and 12 data augmentations. For this purpose, we tested and validated our models using 4690 annotations of randomly drawn images from 24/7 video recordings covering 2 fattening periods from 10 pens. Our results indicate that pig position and posture were detected on the test set with 84% mAP@0.50 (49% mAP@[0.50:0.05:0.95]) for day recordings, while for night recordings 58% mAP@0.50 (29% mAP@[0.50:0.05:0.95]) was achieved. The main reason for the lower mAP on night recordings was degraded near-infrared image quality. Our work reports important findings concerning the applicability of deep learning models to night near-infrared recordings for posture detection. The dataset is publicly available for further research and industrial applications.
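
As a rough, hypothetical illustration (not the authors' evaluation code): the two reported metrics differ only in the IoU thresholds over which Average Precision is averaged. mAP@0.50 uses the single threshold 0.50, while mAP@[0.50:0.05:0.95] averages over ten thresholds. A minimal pure-Python sketch, where `ap_at` stands in for a full precision-recall AP computation:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def coco_map(ap_at):
    """mAP@[0.50:0.05:0.95]: average AP over ten IoU thresholds."""
    thresholds = [0.50 + 0.05 * i for i in range(10)]
    return sum(ap_at(t) for t in thresholds) / len(thresholds)
```

A detection counts as a true positive only if its IoU with a ground-truth box exceeds the threshold, which is why the stricter averaged metric (49%, 29%) sits well below the single-threshold one (84%, 58%).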

Automatically detecting pig position and posture by 2D camera imaging and deep learning

Computers and Electronics in Agriculture, 2020

Prior livestock research provides evidence for the importance of accurately detecting pig positions and postures for better understanding animal welfare. Position and posture detection can be accomplished by machine vision systems. However, current machine vision systems require rigid setups of fixed vertical lighting, vertical top-view camera perspectives or complex camera systems, which hinder their adoption in practice. Moreover, existing detection systems focus on specific pen contexts and may be difficult to apply in other livestock facilities. Our main contribution is twofold: First, we design a deep learning system for position and posture detection that only requires standard 2D camera imaging with no adaptations to the application setting. This deep learning system applies the state-of-the-art Faster R-CNN object detection pipeline and the state-of-the-art Neural Architecture Search (NAS) base network for feature extraction. Second, we provide a labelled open-access dataset with 7277 human-made annotations from 21 standard 2D cameras, covering 31 different one-hour video recordings and 18 different pens, to train and test the approach under realistic conditions. On unseen pens under similar experimental conditions, with sufficiently similar training images of pig fattening, the deep learning system detects pig position with an Average Precision (AP) of 87.4%, and pig position and posture with a mean Average Precision (mAP) of 80.2%. Under different and more difficult experimental conditions of pig rearing, with no or few similar images in the training set, an AP of over 67.7% was achieved for position detection; detecting position and posture together, however, achieved a mAP of only 44.8% to 58.8%. Furthermore, we demonstrate exemplary applications that can aid pen design by visualizing where pigs are lying and how their lying behavior changes throughout the day.
Finally, we contribute open data that can be used for further studies, replication, and pig position detection applications.
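
The lying-behaviour visualizations mentioned above can be approximated by binning the centres of "lying" detections into a coarse occupancy grid over the pen. A hypothetical sketch (the `posture` and `box` dictionary keys are illustrative, not the paper's data format):

```python
def lying_heatmap(detections, grid_w, grid_h, img_w, img_h):
    """Bin the centres of 'lying' detections into a grid_h x grid_w count grid."""
    grid = [[0] * grid_w for _ in range(grid_h)]
    for det in detections:
        if det["posture"] != "lying":  # hypothetical key, for illustration only
            continue
        x1, y1, x2, y2 = det["box"]
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        # Map the pixel-space centre into grid cells, clamping the edge case.
        gx = min(int(cx / img_w * grid_w), grid_w - 1)
        gy = min(int(cy / img_h * grid_h), grid_h - 1)
        grid[gy][gx] += 1
    return grid
```

Accumulating such grids per hour of video would give the kind of time-resolved lying maps the abstract describes for pen design.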

Classification of Sow Postures Using Convolutional Neural Network and Depth images

ASABE, 2024

The US swine industry reports an average preweaning mortality of approximately 16%, of which approximately 6% is attributed to piglets overlain by sows. Detecting postural transitions and estimating sows' time budgets for different postures provide valuable information for breeders and for the engineering design of farrowing facilities, with the eventual goal of reducing piglet death. Computer vision tools can help monitor changes in animal posture accurately and efficiently. To create a more robust system and eliminate varying lighting issues within a day, including daytime/nighttime differences, depth cameras offer an advantage over digital cameras. In this study, a computer vision system was used for continuous depth image acquisition in several farrowing crates. The images were captured by top-down-view Kinect v2 depth sensors in the crates at 10 frames per minute for 24 h. The captured depth images were converted into Jet colormap images. A total of 14277 images from six different sows across 18 different days were randomly selected and labeled into six posture categories (standing, kneeling, sitting, sternal lying, lying on the right and lying on the left). Convolutional Neural Network (CNN) architectures, namely ResNet-50 and Inception v3 with ImageNet pre-trained weights, were used for model training and tested on the posture images. The dataset was randomly split into training (75%) and validation (roughly 25%) sets. For testing, another dataset with 2885 images obtained from six different sows (from 12 different days) was labelled. Among the models evaluated on the test dataset, the Inception v3 model outperformed all others, achieving 95% accuracy in predicting sow postures. We found an F1 score between 0.90 and 1.00 for all postures except kneeling (F1 = 0.81), since kneeling is a transition posture. This preliminary result indicates the potential of transfer learning models for this specific task.
This result also indicates that depth images are suitable for identifying the postures of sows. The outcome of this study will enable the identification and generation of posture data at commercial farm scale, to study the behavioral differences of sows across farm facilities with different characteristics, health statuses, mortality rates, and overall production parameters.
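
Converting raw depth frames into Jet colormap images, as described above, amounts to normalizing each depth reading and passing it through the Jet colour ramp. A simplified pure-Python sketch using a common piecewise-linear approximation of MATLAB-style Jet (dark blue at one end, dark red at the other); a real pipeline would typically use OpenCV or Matplotlib for this step:

```python
def jet_rgb(v):
    """Approximate MATLAB-style Jet: v in [0, 1] -> (r, g, b) in [0, 1]."""
    clamp = lambda x: min(max(x, 0.0), 1.0)
    return (clamp(1.5 - abs(4 * v - 3)),   # red ramps up toward the far end
            clamp(1.5 - abs(4 * v - 2)),   # green peaks in the middle
            clamp(1.5 - abs(4 * v - 1)))   # blue dominates the near end

def depth_to_jet(depth, d_min, d_max):
    """Normalize raw depth readings and colour-map each pixel."""
    span = (d_max - d_min) or 1  # guard against a flat depth range
    return [[jet_rgb((d - d_min) / span) for d in row] for row in depth]
```

The point of the conversion is that a single-channel depth map becomes a three-channel image compatible with ImageNet-pretrained CNNs such as ResNet-50 and Inception v3.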

Evaluating 2D vs 3D Neural Network Efficacy in Sow Posture Detection With Synthetic Data

Austrian Association for Pattern Recognition (OAGM) Workshop, 2023

This paper presents a novel method for classifying postural behaviour in sows using synthetic data from Unreal Engine 5 and UnrealGT combined with a 3D neural network. Traditional 2D CNNs, like YOLO, are not designed to use depth information, and are sensitive to camera angles and lighting variations. These issues can compromise accuracy in diverse environments such as animal farms. Our approach overcomes these by using depth information and generating synthetic data with variation of scene parameters. We employed the Samsung Labs TR3D Network for object detection due to its proven capabilities in 3D object detection with the SUNRGB-D Dataset. Our findings highlight the benefits of synthetic data and the potential of 3D neural networks in complex environments, setting a direction for future research.

DigiPig: First Developments of an Automated Monitoring System for Body, Head and Tail Detection in Intensive Pig Farming

Agriculture

The goal of this study was to develop an automated monitoring system for the detection of pigs’ bodies, heads and tails. The aim of the first part of the study was to recognize individual pigs (in lying and standing positions) in groups, together with their body parts (head/ears and tail), using machine learning algorithms (a feature pyramid network). In the second part of the study, the goal was to improve the detection of tail posture (tail straight and curled) during activity (standing/moving around) using neural network analysis (YOLOv4). Our dataset (n = 583 images, 7579 pig postures) was annotated in Labelbox from 2D video recordings of groups (n = 12–15) of weaned pigs. The model recognized each individual pig’s body with a precision of 96% at the chosen intersection over union (IoU) threshold, thereby already achieving human-level precision, whilst the precision for tails was 77% and for heads 66%. The precision of pig detection in groups was the highest, while head and t...

StaticPigDet: Accuracy Improvement of Static Camera-Based Pig Monitoring Using Background and Facility Information

Sensors

The automatic detection of individual pigs can improve the overall management of pig farms. The accuracy of single-image object detection has improved significantly over the years with advancements in deep learning techniques. However, differences in pig sizes and complex structures within the pens of commercial pig farms, such as feeding facilities, present challenges to detection accuracy for pig monitoring. To implement such detection in practice, these differences should be analyzed from video recorded by a static camera. To accurately detect individual pigs that may differ in size or be occluded by complex structures, we present a deep-learning-based object detection method utilizing background and facility information generated from image sequences (i.e., video) recorded by a static camera, which contain relevant information. All images are first preprocessed to reduce differences in pig sizes. We then used the extracted background and facility information to create differ...

EnsemblePigDet: Ensemble Deep Learning for Accurate Pig Detection

Applied Sciences, 2021

Automated pig monitoring is important for smart pig farms; thus, several deep-learning-based pig monitoring techniques have been proposed recently. In applying automated pig monitoring techniques to real pig farms, however, practical issues such as detecting pigs in overexposed regions, caused by strong sunlight through a window, should be considered. Another practical issue in applying deep-learning-based techniques to a specific pig monitoring application is the annotation cost of pig data. In this study, we propose a method for managing these two practical issues. Using annotated data obtained from training images without overexposed regions, we first generated augmented data to reduce the effect of overexposure. Then, we trained YOLOv4 with both the annotated and augmented data and combined the test results of the two YOLOv4 models at the bounding-box level to further improve detection accuracy. We propose accuracy metrics for pig detection in a closed pig pen to evaluate the...
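
Combining the results of two detectors at the bounding-box level can be sketched as a greedy, confidence-sorted fusion of the pooled detections. This is an illustrative stand-in under that assumption, not the paper's exact ensemble rule:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def merge_detections(dets_a, dets_b, iou_thresh=0.5):
    """Pool both models' detections, keep the most confident, drop duplicates."""
    pooled = sorted(dets_a + dets_b, key=lambda d: d["score"], reverse=True)
    kept = []
    for det in pooled:
        # Suppress a detection that heavily overlaps one already kept.
        if all(iou(det["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(det)
    return kept
```

The benefit of such fusion is that a pig missed by one model (e.g. in an overexposed region) can still be recovered from the other model's output.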

Body condition estimation on cows from depth images using Convolutional Neural Networks

Computers and Electronics in Agriculture

BCS ("Body Condition Score") is a method used to estimate the body fat reserves and accumulated energy balance of cows. BCS heavily influences the milk production, reproduction, and health of cows. It is therefore important to monitor BCS to achieve better animal response, but this is a time-consuming and subjective task performed visually by expert scorers. Several studies have tried to automate BCS of dairy cows by applying image analysis and machine learning techniques. This work analyzes these studies and proposes a system based on Convolutional Neural Networks (CNNs) to improve overall automatic BCS estimation, whose use might be extended beyond dairy production. The developed system achieved good estimation results in comparison with other systems in the area. Overall accuracy of BCS estimations within 0.25 units of the true values was 78%, while overall accuracy within 0.50 units was 94%. Similarly, weighted precision and recall, which take into account the imbalanced BCS distribution in the built dataset, show similar values within those error ranges. In addition, the system: 1. uses images as the only information source (without external data such as weight, age, or lactation stage of the cow); 2. uses low-cost hardware resources; and 3. provides real-time estimations.
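
The "accuracy within 0.25/0.50 units" metric reported above can be written in a few lines. This is a hypothetical reimplementation of the metric as described, not the authors' code:

```python
def accuracy_within(preds, truths, tol):
    """Fraction of BCS estimates within +/- tol units of the true score."""
    hits = sum(abs(p - t) <= tol for p, t in zip(preds, truths))
    return hits / len(preds)
```

Because BCS is scored on a coarse ordinal scale, reporting accuracy at two tolerance levels (0.25 and 0.50) is more informative than exact-match accuracy alone.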

Practices and Applications of Convolutional Neural Network-Based Computer Vision Systems in Animal Farming: A Review

Sensors

Convolutional neural network (CNN)-based computer vision systems have been increasingly applied in animal farming to improve animal management, but current knowledge, practices, limitations, and solutions of these applications remain to be expanded and explored. The objective of this study is to systematically review applications of CNN-based computer vision systems in animal farming in terms of the five deep learning computer vision tasks: image classification, object detection, semantic/instance segmentation, pose estimation, and tracking. Cattle, sheep/goats, pigs, and poultry were the major farm animal species of concern. In this research, preparations for system development, including camera settings, inclusion of variations for data recordings, choices of graphics processing units, image preprocessing, and data labeling, were summarized. CNN architectures were reviewed based on the computer vision tasks in animal farming. Strategies of algorithm development included distribution ...

Image analysis for individual identification and feeding behaviour monitoring of dairy cows based on Convolutional Neural Networks (CNN)

Biosystems Engineering, 2020

Keywords: dairy cow identification; feeding behaviour; deep learning; convolutional neural network

In precision livestock farming, individual identification and analysis of feeding behaviour have a great impact on optimising productivity and improving health monitoring. The sensors usually used to measure parameters from an individual dairy cow (RFID, accelerometer, etc.) are invasive, uncomfortable and stressful for the animals. To overcome these limits, we have developed a non-invasive system based entirely on image analysis. An image of the top of the dairy cow's head, captured on a dairy cow farm, is used as a Region of Interest (ROI), and different classifiers based on a Convolutional Neural Network (CNN) model are used to monitor the feeding behaviour and perform individual identification of seventeen Holstein dairy cows. We use one CNN to detect the dairy cow's presence in the feeder zone. A second CNN determines the dairy cow's position in front of the feeder, standing or feeding. A third CNN checks the availability of food in the feeder and, if food is present, recognises its category. The last CNN is devoted to individual identification of the dairy cow. Furthermore, we also explore the contribution of a CNN coupled with a Support Vector Machine (SVM) and the combination of multiple CNNs in the individual identification process. For the evaluation step, we used a dataset composed of 7265 images of 17 Holstein dairy cows during feeding periods on a commercial farm. Results show that our method yields high scores at each step of our algorithm.