Automatically detecting pig position and posture by 2D camera imaging and deep learning

Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs

Sensors

Posture detection for assessing and monitoring the health and welfare of pigs has been of great interest to researchers from different disciplines. Existing studies applying machine vision techniques are mostly based on three-dimensional imaging systems, or on two-dimensional systems limited to monitoring under controlled conditions. Thus, the main goal of this study was to determine whether a two-dimensional imaging system, along with deep learning approaches, could be utilized to detect the standing and lying (belly and side) postures of pigs under commercial farm conditions. Three deep learning-based detectors, namely faster regions with convolutional neural network features (Faster R-CNN), the single shot multibox detector (SSD) and the region-based fully convolutional network (R-FCN), combined with Inception V2, Residual Network (ResNet) and Inception ResNet V2 feature extractors operating on RGB images, were proposed. Data from differ...

Model selection for 24/7 pig position and posture detection by 2D camera imaging and deep learning

Computers and Electronics in Agriculture, 2021

Continuous monitoring of pig posture is important for better understanding animal behavior. Previous studies focused on daytime recordings and did not investigate how deep learning models could be applied over longer periods that include night recordings under near-infrared light from several pens. Therefore, the objective of this research was to study how a suitable deep learning model for continuous 24/7 pig posture detection could be achieved. We selected a deep learning model from over 150 different model configurations, covering experiments on 3 detection heads, 4 base networks, 5 transfer datasets and 12 data augmentations. For this purpose, we tested and validated our models on 4690 annotations of randomly drawn images from 24/7 video recordings covering 2 fattening periods from 10 pens. Our results indicate that pig position and posture were detected on the test set with 84% mAP@0.50 (49% mAP@[0.50:0.05:0.95]) for day recordings and 58% mAP@0.50 (29% mAP@[0.50:0.05:0.95]) for night recordings. The main reason for the lower mAP during night recordings was degraded near-infrared image quality. Our work reports important findings concerning the applicability of deep learning models to near-infrared night recordings for posture detection. The dataset is publicly available for further research and industrial applications.
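The two metrics quoted above differ only in the IoU thresholds over which average precision is taken: mAP@0.50 uses the single threshold 0.50, while the COCO-style mAP@[0.50:0.05:0.95] averages AP over ten thresholds. A minimal sketch of that averaging, using made-up per-threshold AP values purely for illustration:

```python
import numpy as np

def map_at_thresholds(ap_per_threshold: dict) -> float:
    """Mean average precision over a set of IoU thresholds (COCO-style mAP)."""
    return float(np.mean(list(ap_per_threshold.values())))

# mAP@[0.50:0.05:0.95] averages AP over thresholds 0.50, 0.55, ..., 0.95.
thresholds = np.arange(0.50, 0.96, 0.05).round(2)
# Hypothetical per-threshold AP values (AP typically falls as the IoU
# threshold tightens); these numbers are illustrative only:
ap = {t: max(0.0, 0.9 - (t - 0.5)) for t in thresholds}

map_50 = ap[0.50]                 # single-threshold metric
map_50_95 = map_at_thresholds(ap) # averaged metric, always <= mAP@0.50 here
```

This is why the averaged metric in the abstract (49%, 29%) is much lower than the corresponding mAP@0.50 (84%, 58%): it also counts performance at strict thresholds like 0.95.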

StaticPigDet: Accuracy Improvement of Static Camera-Based Pig Monitoring Using Background and Facility Information

Sensors

The automatic detection of individual pigs can improve the overall management of pig farms. The accuracy of single-image object detection has improved significantly over the years with advancements in deep learning techniques. However, differences in pig sizes and complex structures within the pig pens of a commercial farm, such as feeding facilities, challenge the detection accuracy required for pig monitoring. To implement such detection in practice, these differences should be analyzed from video recorded by a static camera. To accurately detect individual pigs that may differ in size or be occluded by complex structures, we present a deep-learning-based object detection method utilizing background and facility information generated from image sequences (i.e., video) recorded by a static camera, which contain relevant information. All images are first preprocessed to reduce differences in pig sizes. We then used the extracted background and facility information to create differ...
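The excerpt does not state how the background is generated from the static-camera sequence; a common choice for a fixed camera is a per-pixel temporal median, sketched below (the median approach and function name are assumptions, not the paper's documented method):

```python
import numpy as np

def estimate_background(frames: np.ndarray) -> np.ndarray:
    """Per-pixel temporal median over a stack of frames (T, H, W).

    Moving pigs occupy any given pixel in only a few frames, so the
    median recovers the static pen background and fixed facilities.
    """
    return np.median(frames, axis=0).astype(frames.dtype)

# Toy example: a bright 'pig' (value 255) sweeps across a flat scene (value 30).
T, H, W = 5, 4, 4
frames = np.full((T, H, W), 30, dtype=np.uint8)
for t in range(T):
    frames[t, :, t % W] = 255        # moving foreground blob, one column per frame

background = estimate_background(frames)   # recovers the flat 30-valued scene
```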

Tracking Grow-Finish Pigs Across Large Pens Using Multiple Cameras

2021

Increasing demand for meat products combined with farm labour shortages has resulted in a need to develop new real-time solutions to monitor animals effectively. Significant progress has been made in continuously locating individual pigs using tracking-by-detection methods. However, these methods fail for oblong pens because a single fixed camera does not cover the entire floor at adequate resolution. We address this problem by using multiple cameras, placed such that the visual fields of adjacent cameras overlap and together span the entire floor. Avoiding breaks in tracking requires inter-camera handover when a pig crosses from one camera’s view into that of an adjacent camera. We identify the adjacent camera and the shared pig location on the floor at handover time using inter-view homography. Our experiments involve two grow-finish pens, housing 16–17 pigs each, and three RGB cameras. Our algorithm first detects pigs using a deep learning-based object detection model (Y...
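The inter-view handover described above amounts to mapping a pig's floor position through a planar homography between adjacent camera views. A minimal sketch; the matrix below is a made-up pure translation for illustration only (in practice the homography would be estimated offline from corresponding floor points, e.g. with OpenCV's findHomography):

```python
import numpy as np

def to_adjacent_view(H: np.ndarray, point: tuple) -> tuple:
    """Map a pixel in camera A's view to camera B's view via homography H."""
    x, y = point
    px, py, pw = H @ np.array([x, y, 1.0])
    return (px / pw, py / pw)     # divide out the homogeneous coordinate

# Hypothetical homography: a pure 100 px shift in x, as if camera B's view
# starts where camera A's overlap region begins.
H = np.array([[1.0, 0.0, -100.0],
              [0.0, 1.0,    0.0],
              [0.0, 0.0,    1.0]])

# A pig detected at (640, 360) in camera A maps to (540, 360) in camera B,
# so its track ID can be handed over without a break.
handover_point = to_adjacent_view(H, (640.0, 360.0))
```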

Robust individual pig tracking

International Journal of Electrical and Computer Engineering (IJECE), 2024

The locations of pigs in group housing enable activity monitoring and improve animal welfare. Vision-based methods for tracking individual pigs are noninvasive but have low tracking accuracy owing to long-term pig occlusion. In this study, we developed a vision-based method that accurately tracks individual pigs in group housing. We prepared and labeled datasets taken from an actual pig farm, trained a faster region-based convolutional neural network to recognize pigs' bodies and heads, and tracked individual pigs across video frames. To quantify tracking performance, we compared the proposed method with a global optimization (GO) method with a cost function and the simple online and real-time tracking (SORT) method on four additional test datasets that we prepared, labeled, and made publicly available. The predictive model detects pigs' bodies accurately, with F1-scores of 0.75 to 1.00, on the four test datasets. The proposed method achieves the highest multi-object tracking accuracy (MOTA) values of 0.75, 0.98, and 1.00 on three test datasets. On the remaining dataset, the proposed method has the second-highest MOTA of 0.73. The proposed tracking method is robust to long-term occlusion, outperforms competitive baselines on most datasets, and has practical utility in helping to track individual pigs accurately.
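The SORT baseline mentioned above associates new detections with existing tracks by Hungarian assignment on an IoU cost matrix. A simplified sketch of that association step (tracks are reduced to their last-seen boxes; SORT's Kalman prediction step is omitted):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def associate(tracks, detections, min_iou=0.3):
    """Hungarian assignment on negative IoU; drop weak matches."""
    cost = np.array([[-iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if -cost[r, c] >= min_iou]

tracks = [(0, 0, 10, 10), (50, 50, 60, 60)]     # last-seen track boxes
dets = [(51, 49, 61, 59), (1, 1, 11, 11)]       # current-frame detections
matches = associate(tracks, dets)               # (track_idx, det_idx) pairs
```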

Automatic Individual Pig Detection and Tracking in Pig Farms

Sensors

Individual pig detection and tracking is an important requirement in many video-based pig monitoring applications. However, it remains a challenging task in complex scenes owing to light fluctuation, the similar appearance of pigs, shape deformation, and occlusion. To tackle these problems, we propose a robust online multiple pig detection and tracking method that does not require manual marking or physical identification of the pigs and works under both daylight and infrared (nighttime) lighting conditions. Our method couples a CNN-based detector and a correlation filter-based tracker via a novel hierarchical data association algorithm. In our method, the detector achieves the best accuracy/speed trade-off by using features derived from multiple layers at different scales in a one-stage prediction network. We define a tag-box for each pig as the tracking target, from which features with a more local scope are extracted for learning, and the multiple object ...

DigiPig: First Developments of an Automated Monitoring System for Body, Head and Tail Detection in Intensive Pig Farming

Agriculture

The goal of this study was to develop an automated monitoring system for the detection of pigs’ bodies, heads and tails. The aim of the first part of the study was to recognize individual pigs (in lying and standing positions) in groups, together with their body parts (head/ears and tail), using machine learning algorithms (feature pyramid network). In the second part of the study, the goal was to improve the detection of tail posture (tail straight and curled) during activity (standing/moving around) using neural network analysis (YOLOv4). Our dataset (n = 583 images, 7579 pig postures) was annotated in Labelbox from 2D video recordings of groups (n = 12–15) of weaned pigs. The model recognized each individual pig’s body with a precision of 96% at the chosen intersection-over-union (IoU) threshold, whilst the precision for tails was 77% and for heads 66%, thereby already achieving human-level precision. The precision of pig detection in groups was the highest, while head and t...
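Precision at an IoU threshold, as quoted above, means a detection counts as correct only if it overlaps some unmatched ground-truth box with IoU at or above the threshold. A minimal sketch using greedy one-to-one matching (the paper's exact matching protocol is not given in the excerpt):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_at_iou(detections, ground_truth, thr=0.5):
    """Fraction of detections matching an unused ground-truth box at IoU >= thr."""
    unmatched = list(ground_truth)
    tp = 0
    for d in detections:
        best = max(unmatched, key=lambda g: iou(d, g), default=None)
        if best is not None and iou(d, best) >= thr:
            tp += 1
            unmatched.remove(best)     # each ground-truth box matched once
    return tp / len(detections) if detections else 0.0

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(1, 1, 11, 11), (20, 20, 30, 30), (100, 100, 110, 110)]
p = precision_at_iou(dets, gt)         # 2 true positives out of 3 detections
```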

EnsemblePigDet: Ensemble Deep Learning for Accurate Pig Detection

Applied Sciences, 2021

Automated pig monitoring is important for smart pig farms; thus, several deep-learning-based pig monitoring techniques have been proposed recently. In applying automated pig monitoring techniques to real pig farms, however, practical issues such as detecting pigs in overexposed regions, caused by strong sunlight through a window, should be considered. Another practical issue in applying deep-learning-based techniques to a specific pig monitoring application is the annotation cost of pig data. In this study, we propose a method for managing these two practical issues. Using annotated data obtained from training images without overexposed regions, we first generated augmented data to reduce the effect of overexposure. Then, we trained YOLOv4 with both the annotated and augmented data and combined the test results from the two YOLOv4 models at the bounding-box level to further improve detection accuracy. We propose accuracy metrics for pig detection in a closed pig pen to evaluate the...
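The excerpt does not specify the exact bounding-box-level combination rule; one common way to ensemble two detectors is to pool both models' boxes and scores and apply non-maximum suppression (NMS), sketched below under that assumption:

```python
def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2)."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:                      # keep a box only if it does not
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)               # heavily overlap a kept, higher-scoring box
    return keep

# Boxes from model A (annotated data) and model B (augmented data), pooled:
boxes_a, scores_a = [(0, 0, 10, 10)], [0.9]
boxes_b, scores_b = [(1, 0, 11, 10), (40, 40, 50, 50)], [0.8, 0.7]
boxes, scores = boxes_a + boxes_b, scores_a + scores_b
kept = [boxes[i] for i in nms(boxes, scores)]   # duplicate of model A's box suppressed
```

More elaborate schemes (e.g. averaging overlapping boxes) exist; this pooled-NMS version is only one plausible instantiation of a bounding-box-level ensemble.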

Automatic Behavior and Posture Detection of Sows in Loose Farrowing Pens Based on 2D-Video Images

Frontiers in Animal Science, 2021

The monitoring of farm animals and the automatic recognition of deviant behavior have recently become increasingly important in farm animal science research and in practical agriculture. The aim of this study was to develop an approach to automatically predict the behavior and posture of sows by using a 2D image-based deep neural network (DNN) for the detection and localization of relevant sow and pen features, followed by a hierarchical conditional statement based on human expert knowledge for behavior/posture classification. The automatic detection of sow body parts and pen equipment was trained using an object detection algorithm (YOLO V3). The algorithm achieved an Average Precision (AP) of 0.97 (straw rack), 0.97 (head), 0.95 (feeding trough), 0.86 (jute bag), 0.78 (tail), 0.75 (legs) and 0.66 (teats). The conditional statement, which classifies and automatically generates a posture or behavior of the sow taking into account context, temporal and geometric values of the detected...
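A hierarchical conditional statement of the kind described can be sketched as nested rules over the detected part boxes. Every rule, threshold and part name below is invented for illustration; the paper's actual expert-knowledge logic is not given in the excerpt:

```python
def classify_posture(parts: dict) -> str:
    """Toy hierarchical rules over detected part boxes (x1, y1, x2, y2).

    `parts` maps detected class names to their boxes; missing keys mean
    the part was not detected in this frame. All rules are illustrative.
    """
    head = parts.get("head")
    if head is None:
        return "unknown"                     # no sow reliably detected
    if parts.get("legs"):
        # Legs visible from a top-down view suggest lying on the side.
        return "lying_lateral"
    trough = parts.get("feeding_trough")
    head_cx = (head[0] + head[2]) / 2
    if trough and trough[0] <= head_cx <= trough[2]:
        return "feeding"                     # head centred over the trough
    return "standing_or_sternal"             # fallback for remaining cases

# Example frame: head detected over the feeding trough, no legs visible.
parts = {"head": (100, 50, 140, 90), "feeding_trough": (90, 0, 200, 40)}
label = classify_posture(parts)
```

The point of the hierarchy is that cheap, high-AP detections (head, trough) gate the interpretation of less reliable ones (legs, teats), mirroring how an expert reads the scene.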

Development of an Automatic Pig Detection and Tracking System Using Machine Learning

Georg-August-Universität Göttingen, 2020

Because the internal state of animals is conveyed through their behaviour, early signs of issues such as biting or fighting can be identified by monitoring changes in known behaviour. However, it is impractical in a commercial establishment for staff to continuously inspect farm animals to identify the behavioural alterations that warrant early intervention. To address this problem, a system has been designed for detecting pigs and tracking their movement in a pen environment based on video recordings. Furthermore, animal-specific interactions have been estimated from the 2D trajectories produced by the proposed multi-object tracker. The work has been divided into four phases: annotation, detection, tracking and interaction. Among these, the detection phase is crucial, as the remaining phases depend on it. In this work, a recall of 91% and a precision of 94% have been achieved during detection. The tracking of the pigs has been performed using a Kalman filter based multi-object tracker. The Kalman filter recursively estimates the states of a process using a collection of equations in such a manner that the mean squared error is minimized. Because of its ability to estimate past, present and future states to some extent, even when the characteristics of the system are not fully known, the Kalman filter is frequently applied to object tracking. The work concludes with the fourth phase, detecting the interactions among the pigs in the pen environment. Currently, head-to-head and head-to-tail interactions have been calculated from the trajectories and tabulated. This table can be used for behaviour analysis of the pigs, which opens up various possibilities for further research on improving their health and welfare. The system also shows great potential to change livestock monitoring on commercial farms.
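A Kalman filter tracker of the kind described can be sketched with a constant-velocity model on each pig's centroid. The state layout, noise magnitudes and class name below are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

class KalmanPoint2D:
    """Constant-velocity Kalman filter for one pig's 2D centroid.

    State: [x, y, vx, vy]. Noise magnitudes are illustrative only.
    """
    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                    # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],            # constant-velocity motion
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],             # we only measure position
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                    # process noise
        self.R = np.eye(2) * 1.0                     # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                            # predicted position

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = KalmanPoint2D(0.0, 0.0)
for t in range(1, 6):                # pig centroid moving +2 px/frame in x
    kf.predict()
    kf.update((2.0 * t, 0.0))
est = kf.predict()                   # after a few frames, velocity is learned
```

The predicted position lets the tracker bridge short detection gaps (e.g. brief occlusions) before re-associating the pig with a new detection.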