A New Acoustical Autonomous Method for Identifying Endangered Whale Calls: A Case Study of Blue Whale and Fin Whale
Related papers
6th Underwater Acoustics Conference and Exhibition
The application of deep learning to solving acoustic detection and identification challenges is a rapidly evolving subfield of underwater acoustics. Automatic signal identification can be used for many applications, such as enabling the compilation of large datasets from many sources, which can be used to better constrain source-specific characteristics and trends. Earlier analyses (Garibbo et al., 2020) identified the different contributions of wind, weather, shipping and earthquakes. The long-term acoustic measurements regularly include calls from fin whales, whose presence and vocal activity in the area vary with the seasons; their 20-Hz calls are sometimes mixed with other signals, such as earthquakes or shipping. We present here the application of deep learning to automatically identify these whale calls. Percentile analyses of the temporal variation of the frequency of calls, their Power Spectral Density (PSD), and Sound Pressure Level (SPL) are carried out to determine their respective contributions to the overall soundscape and to highlight relevant information about these whale populations. The deep learning approaches selected here can also be used for other types of animal vocalisations and for other short-term processes (e.g. passing ships, earthquakes of different types), assisting in their identification and in the statistical and temporal analyses of low-frequency soundscapes.
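The percentile analysis described above is commonly implemented by computing one PSD estimate per analysis window and then taking percentiles per frequency bin across time. A minimal sketch, assuming SciPy is available; the segment length and percentile choices here are illustrative, not those of the paper:

```python
import numpy as np
from scipy.signal import spectrogram

def soundscape_percentiles(x, fs, percentiles=(5, 50, 95), nperseg=4096):
    """Percentile PSD levels of a hydrophone record, per frequency bin.

    Returns the frequency axis and a dict mapping each percentile to its
    level curve in dB. Window length (nperseg) is a placeholder value.
    """
    # One power spectrum per analysis window.
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    Sxx_db = 10 * np.log10(Sxx + 1e-20)  # dB, guard against log(0)
    # Percentile level per frequency bin, taken across the time axis.
    levels = {p: np.percentile(Sxx_db, p, axis=1) for p in percentiles}
    return f, levels
```

The resulting 5th/50th/95th percentile curves summarise how often each frequency band is dominated by transient sources versus background noise.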
North Atlantic Right Whales Up-call Detection Using Multimodel Deep Learning
arXiv (Cornell University), 2020
A new method for North Atlantic Right Whale (NARW) up-call detection using Multimodel Deep Learning (MMDL) is presented in this paper. In this approach, signals from passive acoustic sensors are first converted to spectrogram and scalogram images, which are time-frequency representations of the signals. These images are in turn used to train an MMDL detector, consisting of Convolutional Neural Networks (CNNs) and Stacked Auto Encoders (SAEs). Our experimental studies revealed that CNNs work better with spectrograms and SAEs with scalograms. Therefore, in our experimental design, the CNNs are trained using spectrogram images, and the SAEs are trained using scalogram images. A fusion mechanism is used to fuse the results from the individual neural networks. In this paper, the results obtained from the MMDL detector are compared with those obtained from conventional machine learning algorithms trained with handcrafted features. It is shown that the performance of the MMDL detector is significantly better than that of representative conventional machine learning methods in terms of up-call detection rate, non-up-call detection rate, and false alarm rate.
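The two image types above come from different time-frequency transforms: the spectrogram from a fixed-window STFT, the scalogram from a multi-scale wavelet transform. A simplified sketch of producing both from one clip; the Ricker-wavelet CWT here is a minimal NumPy stand-in, not the transform settings used in the paper:

```python
import numpy as np
from scipy.signal import spectrogram

def clip_to_images(x, fs, widths=np.arange(1, 31)):
    """Return a spectrogram image and a scalogram image for one audio clip."""
    # Spectrogram: fixed-resolution time-frequency image (STFT power, dB).
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
    spec_img = 10 * np.log10(Sxx + 1e-12)

    # Scalogram: one row per wavelet scale (continuous wavelet transform).
    def ricker(points, a):
        # Ricker ("Mexican hat") wavelet sampled at `points` positions.
        tw = np.arange(points) - (points - 1) / 2.0
        return (1 - (tw / a) ** 2) * np.exp(-0.5 * (tw / a) ** 2)

    scal_img = np.array([
        np.convolve(x, ricker(min(10 * w, len(x)), w), mode="same")
        for w in widths
    ])
    return spec_img, scal_img
```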
ORCA-SPOT: An Automatic Killer Whale Sound Detection Toolkit Using Deep Learning
Scientific Reports, 2019
Large bioacoustic archives of wild animals are an important source for identifying reappearing communication patterns, which can then be related to recurring behavioral patterns to advance the current understanding of intra-specific communication in non-human animals. A main challenge remains that most large-scale bioacoustic archives contain only a small percentage of animal vocalizations and a large amount of environmental noise, which makes it extremely difficult to manually retrieve sufficient vocalizations for further analysis – particularly important for species with advanced social systems and complex vocalizations. In this study, deep neural networks were trained on 11,509 killer whale (Orcinus orca) signals and 34,848 noise segments. The resulting toolkit ORCA-SPOT was tested on a large-scale bioacoustic repository – the Orchive – comprising roughly 19,000 hours of killer whale underwater recordings. An automated segmentation of the entire Orchive recordings (about 2.2 years) to...
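Segmenting a multi-year archive for a detector like this typically means cutting the recording into fixed-length, overlapping windows that are scored independently. A minimal sketch; the 2 s window and 1 s hop are placeholder values, not ORCA-SPOT's actual parameters:

```python
import numpy as np

def sliding_windows(x, fs, win_s=2.0, hop_s=1.0):
    """Cut a long recording into fixed-length, overlapping windows.

    Each row of the returned array is one window, ready to be scored by a
    detector; trailing samples that do not fill a window are dropped.
    """
    win, hop = int(win_s * fs), int(hop_s * fs)
    starts = range(0, max(len(x) - win, 0) + 1, hop)
    return np.stack([x[s:s + win] for s in starts])
```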
Classification of marine acoustic signals using Wavelets & Neural Networks
2008
Classification of marine acoustic signals using wavelets & neural networks
2003
We describe a method to automatically classify Humpback whale (Megaptera novaeangliae) song that offers improvements over the matched-spectrogram techniques currently widely employed. Humpback song is a useful training example for a range of ocean acoustic transient detection and classification problems because it consists of units of varying length, frequency range and type, from nearly tonal to highly transient. With any recognition system it is vital that the data is first segmented into appropriate units. This is nontrivial and often implemented manually. We have developed a segmentation using wavelet packet decompositions that also produces a 'feature vector' with which to classify the data using a neural network. The next step is to select the network architecture, where there are many good alternatives, including a principal component front end coupled to a back-propagation network, and self-organising networks with Learning Vector Quantisation. Various architectures typically achieve 80% classification rates on a challenging variety of units. The approach has the added benefits of being shift invariant with respect to time, and somewhat tolerant of frequency and time stretching. Since the methods employed are not specific to whale song, the approach can be usefully applied to other types of marine transient signals with minimal modification.
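A wavelet packet decomposition splits every subband at each level, so depth d yields 2**d subbands whose energies can serve as a feature vector. A minimal Haar-wavelet sketch of that idea in NumPy; it is a generic stand-in, not the wavelet or feature set used in the paper:

```python
import numpy as np

def haar_wavelet_packet_energies(x, depth=3):
    """Energy feature vector from a Haar wavelet packet decomposition.

    Each level splits every subband with the Haar low/high-pass pair,
    yielding 2**depth subbands; their normalised energies form the
    feature vector fed to a classifier.
    """
    bands = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        next_bands = []
        for b in bands:
            if len(b) % 2:                         # pad to even length
                b = np.append(b, 0.0)
            lo = (b[0::2] + b[1::2]) / np.sqrt(2)  # Haar approximation
            hi = (b[0::2] - b[1::2]) / np.sqrt(2)  # Haar detail
            next_bands += [lo, hi]
        bands = next_bands
    energies = np.array([np.sum(b ** 2) for b in bands])
    return energies / (energies.sum() + 1e-12)     # normalise to unit sum
```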
Machine Learning and Knowledge Discovery in Databases, 2020
Research into automated systems for detecting and classifying marine mammals in acoustic recordings is expanding internationally due to the necessity to analyze large collections of data for conservation purposes. In this work, we present a Convolutional Neural Network that is capable of classifying the vocalizations of three species of whales, nonbiological sources of noise, and a fifth class pertaining to ambient noise. In this way, the classifier is capable of detecting the presence and absence of whale vocalizations in an acoustic recording. Through transfer learning, we show that the classifier is capable of learning high-level representations and can generalize to additional species. We also propose a novel representation of acoustic signals that builds upon the commonly used spectrogram representation by way of interpolating and stacking multiple spectrograms produced using different Short-time Fourier Transform (STFT) parameters. The proposed representation is particularly effective for the task of marine mammal species classification where the acoustic events we are attempting to classify are sensitive to the parameters of the STFT.
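The stacked representation described above can be sketched as follows: compute a spectrogram for each STFT window length, resample each onto a common grid, and stack them as channels of one image. This is a simplified illustration, with nearest-neighbour resampling standing in for the paper's interpolation and placeholder window lengths:

```python
import numpy as np
from scipy.signal import spectrogram

def stacked_spectrogram(x, fs, npersegs=(256, 512, 1024), out_shape=(128, 128)):
    """Stack spectrograms computed with several STFT window lengths.

    Each spectrogram is resampled onto a common (freq, time) grid by
    nearest-neighbour indexing and stacked as channels, like an RGB image.
    """
    channels = []
    for n in npersegs:
        f, t, Sxx = spectrogram(x, fs=fs, nperseg=n, noverlap=n // 2)
        Sxx_db = 10 * np.log10(Sxx + 1e-12)
        # Nearest-neighbour resample to the shared output grid.
        fi = np.linspace(0, Sxx_db.shape[0] - 1, out_shape[0]).round().astype(int)
        ti = np.linspace(0, Sxx_db.shape[1] - 1, out_shape[1]).round().astype(int)
        channels.append(Sxx_db[np.ix_(fi, ti)])
    return np.stack(channels, axis=-1)  # shape: (freq, time, channel)
```

Each channel trades time resolution against frequency resolution differently, which is why the stack helps when the target calls are sensitive to the STFT parameters.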
ArXiv, 2016
Overarching goals for this work aim to advance the state of the art for detection, classification and localization (DCL) in the field of bioacoustics. This goal is primarily achieved by building a generic framework for detection-classification (DC) using a fast, efficient and scalable architecture, demonstrating the capabilities of this system on a variety of low-frequency and mid-frequency cetacean sounds. Two primary goals are to develop transferable technologies for detection and classification in, one: the area of advanced algorithms, such as deep learning and other methods; and two: advanced systems, capable of real-time and archival processing. For each key area, we will focus on producing publications from this work and providing tools and software to the community where/when possible. Currently massive amounts of acoustic data are being collected by various institutions, corporations and national defense agencies. The long-term goal is to provide technical capability to an...
OCEANS 2016 MTS/IEEE Monterey, 2016
A novel approach has been developed for detecting and classifying foraging calls of two mysticete species in passive acoustic recordings. This automated detector/classifier applies a computer-vision-based pattern recognition technique to detect the foraging calls and remove ambient noise effects. The detected calls were then classified as blue whale D-calls [1] or fin whale 40-Hz calls [2] using a logistic regression classifier, a machine learning technique. The detector/classifier was trained on the 2015 Detection, Classification, Localization and Density Estimation (DCLDE 2015, Scripps Institution of Oceanography UCSD [3]) low-frequency annotated set of passive acoustic data, collected in the Southern California Bight, and its out-of-sample performance was estimated using a cross-validation technique. The DCLDE 2015 scoring tool was used to estimate the detector/classifier performance in a standardized way. The pattern recognition algorithm's out-of-sample performance was scored as 96.68% recall with 92.03% precision. The machine learning algorithm's out-of-sample prediction accuracy was 95.20%. The results indicate the potential of this detector/classifier for real-time passive acoustic marine mammal monitoring and bioacoustic signal processing.
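The second stage above is a binary logistic regression over features of each detected call. A self-contained NumPy sketch of that classifier (gradient descent on the cross-entropy loss); the feature choice, labels and training setup are illustrative, not the paper's:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a logistic regression by gradient descent.

    X: (n_calls, n_features) feature matrix; y: 0/1 labels
    (e.g. 0 = blue whale D-call, 1 = fin whale 40-Hz call).
    """
    X = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid probabilities
        w -= lr * X.T @ (p - y) / len(y)       # mean cross-entropy gradient
    return w

def predict(w, X):
    """Return hard 0/1 class predictions at the 0.5 threshold."""
    X = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```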
Detecting Endangered Baleen Whales within Acoustic Recordings using R-CNNs
2019
Research and development into automated systems that can detect the vocalizations of endangered species of whales within acoustic recordings is a difficult yet important task. Over the past several years, hundreds of deceased whales have washed ashore along the coasts of North America. In many cases the primary cause of death of these species has been directly linked to human activity including vessel collisions and entanglement in fishing gear. In this work, we introduce preliminary work towards developing an end-to-end detection system using a Region-based Convolutional Neural Network (R-CNN) trained on spectrogram representations of acoustic recordings and labelled bounding boxes around the vocalizations of three species of endangered baleen whales: blue, fin, and sei whales. In this way, the R-CNN can detect vocalizations in terms of both time and frequency against a background of ambient noise and other non-biological sources. The R-CNN can be used by stakeholders and policy ma...
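Detectors like the R-CNN above output boxes in both time and frequency, and such detections are usually scored against annotated boxes by intersection-over-union. A generic evaluation-utility sketch, not the paper's code; boxes are assumed to be (t0, f0, t1, f1) tuples:

```python
def tf_iou(a, b):
    """Intersection-over-union of two time-frequency bounding boxes.

    Each box is (t_start, f_low, t_end, f_high); returns a value in [0, 1].
    """
    t0, f0 = max(a[0], b[0]), max(a[1], b[1])   # overlap lower corner
    t1, f1 = min(a[2], b[2]), min(a[3], b[3])   # overlap upper corner
    inter = max(0.0, t1 - t0) * max(0.0, f1 - f0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

A detection is typically counted as a true positive when its IoU with an annotation exceeds a fixed threshold (0.5 is a common choice).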