Speaker Identification using Spectrograms of Varying Frame Sizes
Related papers
Speaker Recognition from Spectrogram Images
2021 IEEE International Conference on Smart Information Systems and Technologies (SIST), 2021
Speaker identification determines the owner of a voice among many candidates based on the uniqueness of each person's speaking style. In this paper, we combine a Convolutional Neural Network with a Recurrent Neural Network using Long Short-Term Memory models for speaker recognition, and we apply this deep learning architecture to our dataset of spectrogram images from 77 different non-native speakers reading the same texts in Turkish. Having all speakers read identical text eliminates variation in the spectrograms that would otherwise depend on vocabulary. Experiments show that the method is very effective, with satisfying performance and over 98% accuracy.
International Journal of Computer Applications, 2010
This paper presents several approaches to text-dependent speaker identification using various transformation techniques, namely the DCT, Walsh, and Haar transforms, applied to spectrograms. A set of spectrograms obtained from speech samples is used as the image database for the study. This image database is then subjected to the various transforms. Using Euclidean distance as the measure of similarity, the closest speaker match is obtained and declared to be the identified speaker. Each transform is applied to the spectrograms in two different ways: on the full image and on the Row Mean of the image. In both cases, the effect of retaining different numbers of coefficients of the transformed image is observed. A comparison of all three transformation techniques shows that the Walsh transform requires far fewer mathematical computations than the DCT on spectrograms, while the Haar transform drastically reduces the number of computations with an almost equal identification rate. Transformation techniques applied to the Row Mean give a better identification rate than the same techniques applied to the full image.
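The pipeline described above (spectrogram image → transform → Row Mean feature → Euclidean matching) can be sketched roughly as follows. This is not the paper's code: the function names, the choice of a 1-D DCT on the row mean, and the 32-coefficient cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def row_mean_dct_feature(spectrogram: np.ndarray, n_coeffs: int = 32) -> np.ndarray:
    """Row-Mean variant: average the spectrogram image along time,
    transform the resulting vector, and keep the first n_coeffs coefficients."""
    row_mean = spectrogram.mean(axis=1)       # one value per frequency row
    coeffs = dct(row_mean, norm="ortho")      # 1-D DCT of the row-mean vector
    return coeffs[:n_coeffs]

def identify(query_feat: np.ndarray, gallery: dict) -> str:
    """Return the enrolled speaker whose stored feature vector is closest
    to the query feature in Euclidean distance."""
    return min(gallery, key=lambda spk: np.linalg.norm(gallery[spk] - query_feat))
```

Matching against a gallery of enrolled feature vectors is then a single nearest-neighbour lookup, which is all the Euclidean-distance criterion in the abstract amounts to.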
2011
In this paper, we propose speaker identification using two transforms, namely the Haar Transform and Kekre's Transform. The speech signal spoken by a particular speaker is converted into a spectrogram using 25% and 50% overlap between consecutive sample vectors. The two transforms are applied to the spectrogram. The row mean of the transformed matrix forms the feature vector, which is used in both the training and matching phases. The results of the two transform techniques are compared. The Haar transform gives fairly good results, with a maximum accuracy of 69% for both 25% and 50% overlap. Kekre's Transform performs much better, with a maximum accuracy of 85.7% for 25% overlap and 88.5% for 50% overlap.
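The 25%/50% overlap between consecutive frames described above maps directly onto the `noverlap` parameter of standard short-time analysis. A minimal sketch; the sampling rate, frame length, and log scaling here are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import spectrogram

def make_spectrogram(signal: np.ndarray, fs: int = 8000,
                     frame_len: int = 256, overlap: float = 0.5) -> np.ndarray:
    """Spectrogram image with a given fractional overlap between frames.
    overlap=0.25 and overlap=0.5 correspond to the paper's two settings."""
    noverlap = int(frame_len * overlap)
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=frame_len, noverlap=noverlap)
    return 10 * np.log10(sxx + 1e-12)   # log-power "image", shape (freq, time)
```

Note that a higher overlap yields more frames (columns) for the same signal, so the 50%-overlap spectrograms carry more, partially redundant, temporal samples than the 25% ones.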
Speaker Identification Using 2-D DCT, Walsh and Haar on Full and Block Spectrogram
This paper presents several approaches to text-dependent speaker identification using the DCT, Walsh, and Haar transforms applied to spectrograms. Spectrograms obtained from speech samples are used as the image database for the study. This image database is then subjected to the various transforms. Using Euclidean distance as the measure of similarity, the closest speaker match is obtained and declared as the identified speaker. Each transform is applied to the spectrograms in two different ways: on the full image and on image blocks. In both cases, the effect of retaining different numbers of coefficients of the transformed image is observed. The Haar transform on the full image reduces the multiplications required by the DCT and Walsh transforms by a factor of 28, whereas applying the Haar transform on image blocks requires 18 times fewer mathematical computations than the DCT and Walsh transforms on image blocks. When applied to image blocks, the transforms yield better or equal identification rates with reduced computational complexity.
Dimension reduction of the modulation spectrogram for speaker verification
2008
A so-called modulation spectrogram is obtained from the conventional speech spectrogram by short-term spectral analysis along the temporal trajectories of the frequency bins. In its original definition, the modulation spectrogram is a high-dimensional representation, and it is not clear how to extract features from it. In this paper, we define a low-dimensional feature which captures the shape of the modulation spectra. The recognition accuracy of the modulation-spectrogram-based classifier is improved from our previous result of EER = 25.1% to EER = 17.4% on the NIST 2001 speaker recognition task.
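The construction in the first sentence can be sketched as a second spectral analysis applied along each frequency bin's time trajectory. The segment length, hop settings, and log compression below are assumptions for illustration, not the authors' exact configuration:

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.fft import rfft

def modulation_spectrogram(x: np.ndarray, fs: int = 8000,
                           nperseg: int = 256, mod_win: int = 32) -> np.ndarray:
    """Short-term spectral analysis along the temporal trajectory of each
    frequency bin of a conventional spectrogram."""
    _, _, sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    env = np.log(sxx + 1e-12)             # log-energy trajectories, (freq, time)
    n_seg = env.shape[1] // mod_win
    segs = env[:, : n_seg * mod_win].reshape(env.shape[0], n_seg, mod_win)
    # FFT along time within each segment:
    # result is (acoustic freq) x (segment) x (modulation freq)
    return np.abs(rfft(segs, axis=-1))
```

The three-dimensional output makes the abstract's point concrete: the raw representation is high-dimensional, which is why the paper's contribution is a low-dimensional shape feature derived from it.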
Robust Spectral Features for Automatic Speaker Recognition in Mismatch Condition
Procedia Computer Science, 2015
The widespread use of automatic speaker recognition technology in real-world applications demands robustness against various realistic conditions. In this paper, a robust spectral feature set, called NDSF (Normalized Dynamic Spectral Features), is proposed for automatic speaker recognition under mismatch conditions. Magnitude spectral subtraction is performed on the spectral features to compensate for additive noise. A spectral-domain modification is further performed using a time-difference approach followed by a Gaussianization non-linearity. Histogram normalization is applied to these dynamic spectral features to compensate for the effects of channel mismatch and some non-linear effects introduced by handset transducers. Feature extraction using the proposed features is carried out for a text-independent automatic speaker recognition (identification) system. The performance of the proposed feature set is compared with conventional cepstral features (mel-frequency cepstral coefficients and linear prediction cepstral coefficients) under acoustic mismatch conditions caused by the use of different sensors. Studies are performed on two databases: the multi-variability speaker recognition (MVSR) database developed by IIT-Guwahati and a multi-speaker continuous (Hindi) speech database (from the Department of Information Technology, Government of India). From the experimental analysis, it is observed that spectral-domain dynamic features enhance robustness by reducing additive noise and the channel effects caused by sensor mismatch. The proposed NDSF features are found to be more robust than cepstral features on both datasets.
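Of the processing steps above, magnitude spectral subtraction is the simplest to illustrate. A minimal sketch, assuming the noise spectrum is estimated from leading (presumed speech-free) frames and using an arbitrary spectral floor; both choices are illustrative, not from the paper:

```python
import numpy as np

def spectral_subtraction(mag: np.ndarray, noise_mag: np.ndarray,
                         floor: float = 0.02) -> np.ndarray:
    """Subtract a per-frequency noise magnitude estimate from every frame,
    flooring the result to a small fraction of the original magnitude
    so no value goes negative. mag has shape (freq, frames)."""
    cleaned = mag - noise_mag[:, None]
    return np.maximum(cleaned, floor * mag)

def leading_frame_noise(mag: np.ndarray, n_frames: int = 10) -> np.ndarray:
    """Crude noise estimate: mean magnitude over the first n_frames."""
    return mag[:, :n_frames].mean(axis=1)
```

The time-difference, Gaussianization, and histogram-normalization stages would then be applied downstream of this denoised magnitude spectrum.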
A further investigation on speech features for speaker characterization
6th International Conference on Spoken Language Processing (ICSLP 2000)
In this article, we investigate alternative speech features for speaker characterization. We study Line Spectrum Pairs features, Time-Frequency Principal Components, and Discriminant Components of the Spectrum. These alternative features are tested and compared on a speaker verification task, which consists of verifying a claimed identity from a speech segment. Systems are evaluated on a subset of the evaluation data of the NIST 1999 speaker recognition campaign. The new speech features are also compared to the classical cepstral coefficients, which remain, in our experiments, the best performing features.
Spectral features for automatic text-independent speaker recognition
… , Department of computer science, University of …, 2003
The front-end, or feature extractor, is the first component in an automatic speaker recognition system. Feature extraction transforms the raw speech signal into a compact but effective representation that is more stable and discriminative than the original signal. Since the front-end is the first component in the chain, the quality of the later components (speaker modeling and pattern matching) is strongly determined by the quality of the front-end. In other words, classification can be at most as accurate as the features. Several feature extraction methods have been proposed and successfully exploited in the speaker recognition task. However, almost exclusively, these methods are adopted directly from the speech recognition task. This is somewhat ironic, considering the opposite nature of the two tasks: in speech recognition, speaker variability is one of the major error sources, whereas in speaker recognition it is the very information we wish to extract. The mel-frequency cepstral coefficients (MFCC) are the most evident example of a feature set that is extensively used in speaker recognition but was originally developed for speech recognition purposes. When an MFCC front-end is used in a speaker recognition system, one makes the implicit assumption that the human hearing mechanism is the optimal speaker recognizer. However, this has not been confirmed, and in fact opposite results exist. Although several methods adopted from speech recognition have been shown to work well in practice, they are often used as "black boxes" with fixed parameters. It is not understood what kind of information the features capture from the speech signal. Understanding the features at some level requires experience from specific areas such as speech physiology, acoustic phonetics, digital signal processing, and statistical pattern recognition.
According to the author's general impression of the literature, it increasingly seems that, at best, we are currently guessing what the code in the signal is that carries our individuality. This thesis has two main purposes. On the one hand, we attempt to see feature extraction as a whole, starting from understanding the speech production process and what is known about speaker individuality, and then going i
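Since MFCCs recur throughout this discussion, a compact single-frame sketch may help. The filter count, cepstral order, and filterbank construction details here are common textbook defaults, not tied to any one of the papers above:

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame: np.ndarray, fs: int = 8000,
         n_filters: int = 26, n_ceps: int = 13) -> np.ndarray:
    """MFCCs for one windowed frame: power spectrum -> triangular mel
    filterbank -> log -> DCT, keeping the first n_ceps coefficients."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    n_bins = spec.size
    # Filter edges spaced evenly on the mel scale, mapped back to FFT bins
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2), n_filters + 2)
    bin_pts = np.floor(mel_to_hz(mel_pts) / (fs / 2) * (n_bins - 1)).astype(int)
    fbank = np.zeros(n_filters)
    for i in range(n_filters):
        lo, mid, hi = bin_pts[i], bin_pts[i + 1], bin_pts[i + 2]
        if hi > lo:
            weights = np.interp(np.arange(lo, hi), [lo, mid, hi], [0.0, 1.0, 0.0])
            fbank[i] = np.dot(spec[lo:hi], weights)
    return dct(np.log(fbank + 1e-10), norm="ortho")[:n_ceps]
```

The mel warping is exactly the perceptual assumption the thesis questions: the filter spacing models human hearing, which need not be optimal for telling speakers apart.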
A Survey of Speaker Recognition: Fundamental Theories, Recognition Methods and Opportunities
IEEE Access
Humans can identify a speaker by listening to their voice over the telephone or on any digital device. Building on this innate human competency, authentication technologies based on voice biometrics, such as automatic speaker recognition (ASR), have been introduced. An ASR system recognizes speakers by analyzing speech signals and characteristics extracted from speakers' voices. ASR has recently become an active research area as an essential aspect of voice biometrics. Specifically, this literature survey gives a concise introduction to ASR, provides an overview of the general architectures dealing with speaker recognition technologies, and reviews past, present, and future research trends in this area. This paper briefly describes all the main aspects of ASR, such as speaker identification, verification, and diarization. Further, the performance of current speaker recognition systems is investigated, along with their limitations and possible ways of improvement. Finally, a few unsolved challenges of speaker recognition are presented at the close of this survey.
INDEX TERMS: Automatic speaker recognition, feature extraction, recognition techniques, performance measures, challenges.
Speaker Identification using Frequency Distribution in the Transform Domain
2012
In this paper, we propose speaker identification using the frequency distribution obtained from various transforms: the DFT (Discrete Fourier Transform), DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), Hartley, Walsh, Haar, and Kekre transforms. The speech signal spoken by a particular speaker is converted into the frequency domain by applying each transform technique. The distribution in the transform domain is used to extract the feature vectors in the training and matching phases. The results obtained using all seven transform techniques are analyzed and compared. The DFT, DCT, DST, and Hartley transforms give comparatively similar results (above 96%), while the results obtained using the Haar and Kekre transforms are very poor. The best results are obtained using the DFT (97.19% for a feature vector of size 40).
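One plausible reading of a size-40 "frequency distribution" feature is sketched below; the banding and normalisation choices are assumptions, since the abstract does not spell out the exact procedure:

```python
import numpy as np

def dft_distribution_feature(signal: np.ndarray, n_bins: int = 40) -> np.ndarray:
    """Feature vector from the distribution of DFT magnitudes: split the
    spectrum into n_bins equal bands and use each band's share of the
    total magnitude (normalised to sum to 1) as the feature."""
    mag = np.abs(np.fft.rfft(signal))
    bands = np.array_split(mag, n_bins)
    energy = np.array([b.sum() for b in bands])
    return energy / energy.sum()
```

Swapping `np.fft.rfft` for a DCT, DST, or Hartley transform would give the corresponding variants compared in the paper; matching would again use a nearest-neighbour rule over these vectors.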