Babu Anto | Kannur University

Papers by Babu Anto

Speaker Independent Emotion Recognition Using Functional Link Network Classifier

This paper deals with a novel approach to detecting emotions from Malayalam speech. We used Discrete Wavelet Transforms (DWT) for feature extraction and a Functional Link Network (FLN) classifier for recognizing different emotions. From this experiment, the machine can recognize four different emotions, namely neutral, happy, sad and anger, with an overall recognition accuracy of 63.75%.
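The abstract names DWT feature extraction but gives no implementation detail, so the snippet below is only a minimal sketch of how such wavelet-based features are commonly computed with PyWavelets; the 'db4' wavelet, the 4-level decomposition and the per-band energy/spread statistics are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of DWT-based feature extraction for one utterance.
# Assumptions (not from the paper): PyWavelets, a 'db4' wavelet,
# a 4-level decomposition, and energy/std statistics per sub-band.
import numpy as np
import pywt
from scipy.io import wavfile

def dwt_features(wav_path, wavelet="db4", level=4):
    sr, signal = wavfile.read(wav_path)            # load the utterance
    signal = signal.astype(np.float64)
    if signal.ndim > 1:                            # collapse stereo to mono
        signal = signal.mean(axis=1)
    signal /= (np.max(np.abs(signal)) + 1e-12)     # amplitude normalisation
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:                            # approximation + detail bands
        feats.append(np.sum(band ** 2))            # sub-band energy
        feats.append(np.std(band))                 # sub-band spread
    return np.array(feats)

# Example: features = dwt_features("utterance_001.wav")
```

Each utterance then maps to a fixed-length vector that the FLN, or any of the other classifiers in the papers below, can consume.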


Automatic Emotion Classification of Malayalam Speech Using Artificial Neural Networks

This paper deals with a novel approach to Automatic Emotion Classification from human utterances. The Discrete Wavelet Transform (DWT) is used for feature extraction from the speech signals, and Malayalam (one of the south Indian languages) is used for the experiment. We have used an elicited dataset of 500 utterances recorded from 10 male and 8 female speakers. Using an Artificial Neural Network we have classified the four emotional classes, namely neutral, happy, sad and anger. A classification accuracy of 70% is obtained from this work.
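The paper states that an Artificial Neural Network classifies the four emotions but not its layout; a plausible minimal setup with scikit-learn's MLPClassifier, trained on wavelet feature vectors like those above, is sketched here. The single 32-unit hidden layer and the feature-scaling step are assumptions.

```python
# Sketch: training a small neural network on per-utterance feature vectors.
# Assumptions (not from the paper): scikit-learn's MLPClassifier with one
# 32-unit hidden layer, and a dwt_features() helper as in the earlier sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

EMOTIONS = ["neutral", "happy", "sad", "anger"]

def train_emotion_ann(feature_matrix, labels):
    """feature_matrix: (n_utterances, n_features); labels: emotion strings."""
    model = make_pipeline(
        StandardScaler(),                       # scale each feature dimension
        MLPClassifier(hidden_layer_sizes=(32,),
                      max_iter=2000, random_state=0),
    )
    model.fit(feature_matrix, labels)
    return model

# Example:
# X = np.vstack([dwt_features(p) for p in wav_paths])
# model = train_emotion_ann(X, y)
# print(model.predict(X[:3]))
```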


Automatic Stress Detection from Speech by Using Support Vector Machines and Discrete Wavelet Transforms

Automatic Speech Recognition (ASR) and Automatic Emotion Recognition (AER) from speech are pivotal areas in affective computing. Automatic detection of stress from speech simply means making machines able to recognize the stress expressed in speech. We have used the Discrete Wavelet Transform (DWT) technique for feature extraction and Support Vector Machines (SVM) for the training and testing of the machine. We have created and used a speaker- and gender-independent dataset consisting of 450 utterances of Malayalam (one of the south Indian languages) for this experiment, with a dataset proportion of 80:20 for training and testing of the SVM, respectively. We have obtained an overall stress detection accuracy of 89.95% from this experiment.
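The 80:20 split and the SVM are stated in the abstract; the rest of the sketch below (scikit-learn's SVC, an RBF kernel, stratified splitting, the dwt_features() helper) is an assumed, minimal reconstruction of that kind of pipeline.

```python
# Sketch: SVM stress detection with an 80:20 train/test proportion.
# Assumptions (not from the paper): scikit-learn SVC with an RBF kernel,
# binary "stressed"/"neutral" labels, dwt_features() from the first sketch.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def stress_svm(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)   # 80:20 proportion
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    return clf, acc

# Example: clf, acc = stress_svm(X, y); print(f"accuracy: {acc:.2%}")
```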


Speaker Independent Automatic Emotion Recognition from Speech: A Comparison of MFCCs and Discrete Wavelet Transforms

Automatic Emotion Recognition (AER) from speech is one of the research domains of greatest interest to the scientific world. AER simply means making a machine able to recognize different emotions from speech. We have created and analyzed an elicited database consisting of 700 utterances under four different emotional classes, namely neutral, happy, sad and anger. Malayalam (one of the south Indian languages) was used to conduct the experiment, with a database proportion of 80:20 for training and testing. We have analyzed the recorded emotional speech corpus using both Discrete Wavelet Transforms (DWTs) and Mel Frequency Cepstral Coefficients (MFCCs) and obtained overall recognition accuracies of 68.5% and 55%, respectively. An Artificial Neural Network was used for classification and recognition.
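For the MFCC side of the comparison, a minimal extraction routine using librosa is sketched below; the 13-coefficient setting and the mean/std pooling over frames are assumptions rather than the parameters actually used in the paper.

```python
# Sketch: MFCC feature extraction for one utterance, for comparison with DWT.
# Assumptions (not from the paper): librosa, 13 cepstral coefficients,
# and mean/std pooling of the frame-wise coefficients.
import numpy as np
import librosa

def mfcc_features(wav_path, n_mfcc=13):
    signal, sr = librosa.load(wav_path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    # Pool over time so every utterance yields a fixed-length vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# The resulting vectors can be fed to the same ANN used for the DWT features,
# so the two front ends can be compared on an identical train/test split.
```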


Text-Dependent Speaker Recognition Using Emotional Features and Neural Networks

This paper deals with a novel feature extraction method for text-dependent speaker recognition. Four female speakers were used to create a text-dependent database for Malayalam (one of the south Indian languages). The Discrete Wavelet Transform was used for feature extraction and an artificial neural network was used for machine intelligence. In this work we used emotional features for speaker recognition, with a Multi-Layer Perceptron architecture for the machine learning. An overall recognition accuracy of 84.37% has been achieved from this experiment.


Speech Emotion Recognition for Single User Interfaces

In this paper a new approach to auditory emotion recognition is presented. Within this work we focused on a speaker-dependent emotion recognition framework. Malayalam (one of the south Indian languages) is used to create the database, which was recorded with the help of a single twenty-one-year-old female speaker. The Discrete Wavelet Transform is used for emotional feature generation and an Artificial Neural Network for machine learning. Four classes were categorized, namely neutral, happy, sad and anger. An overall recognition accuracy of 89% has been achieved from this experiment.


Emotion Recognition From Malayalam Words Using Artificial Neural Networks

This paper deals with a novel feature extraction method based on Linear Predictive Coefficients (LPC) and Mel Frequency Cepstral Coefficients (MFCC) for emotion recognition from speech. Classification and recognition of the features is done using an Artificial Neural Network. Malayalam (one of the south Indian languages) words were used for the experiment. One hundred and twenty samples were collected, categorized, labeled and stored in a database. By analyzing the results of the experiment, the system can understand the different emotions. A recognition accuracy of 79% is achieved from the experiment.
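The abstract pairs LPC with MFCC features; one simple way to compute LPC coefficients per utterance with librosa is sketched below. The model order of 12 and the idea of concatenating LPC with the MFCC vector are assumptions for illustration.

```python
# Sketch: Linear Predictive Coefficients for one utterance, combined with MFCCs.
# Assumptions (not from the paper): librosa, LPC order 12, and the
# mfcc_features() helper from the earlier sketch.
import numpy as np
import librosa

def lpc_features(wav_path, order=12):
    signal, sr = librosa.load(wav_path, sr=None, mono=True)
    a = librosa.lpc(signal, order=order)   # coefficients [1, a1, ..., a_order]
    return a[1:]                           # drop the leading 1

def combined_features(wav_path):
    # Concatenate the LPC and MFCC descriptors into one feature vector.
    return np.concatenate([lpc_features(wav_path), mfcc_features(wav_path)])
```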


AUTOMATIC SPEECH EMOTION RECOGNITION: A COMPARISON OF DIFFERENT NEURAL NETWORK ALGORITHMS

This paper narrates comparison studies of three different Artificial Neural Network algorithms for automatic emotion recognition from speech. We have compared the MLP, FLN and PLN architectures for this study and obtained overall recognition accuracies of 70%, 63.75% and 60%, respectively, for recognizing four different emotions, namely neutral, happy, sad and anger. We have used the Discrete Wavelet Transform (DWT) technique for feature extraction. We have created and analyzed a database consisting of 580 utterances for this work by using Malayalam (one of the south Indian languages).
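The FLN in this comparison is not described beyond its name. In the common Pao-style formulation, a functional link network expands each input with fixed nonlinear functions and trains only a single linear output layer; the sketch below follows that reading, and the sin/cos expansion plus a logistic-regression output layer are assumptions about the architecture rather than the authors' exact design.

```python
# Sketch of a Pao-style functional link network: a fixed nonlinear expansion
# of the inputs followed by a single trainable linear (softmax) layer.
# Assumptions (not from the paper): a sin/cos expansion and scikit-learn's
# LogisticRegression standing in for the trainable output layer.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def functional_link_expand(X):
    """Augment each feature x with sin(pi*x) and cos(pi*x) terms."""
    return np.hstack([X, np.sin(np.pi * X), np.cos(np.pi * X)])

def train_fln(X, y):
    scaler = StandardScaler().fit(X)
    Z = functional_link_expand(scaler.transform(X))
    clf = LogisticRegression(max_iter=2000).fit(Z, y)   # single linear layer
    return scaler, clf

def predict_fln(scaler, clf, X):
    return clf.predict(functional_link_expand(scaler.transform(X)))
```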


Automatic Emotion Recognition from Spoken Words by Using Wavelet Transforms and Multi Layer Perceptrons

Automatic Emotion Recognition (AER) from speech has attained greater significance in recent years. Exactly identifying the emotional content of speech will help us build better Human Computer Interfaces (HCIs). An elicited database of 580 utterances from Malayalam (one of the south Indian languages) is used for the experiment. The Discrete Wavelet Transform (DWT) is performed for parametric representation of the speech signals, and a Multi-Layer Perceptron (MLP) architecture is used for training and testing of the neural network. From our work, the machine can understand the four different emotional categories, namely neutral, happy, sad and anger, with an overall recognition accuracy of 65%.


Gender Dependent Automatic Emotion Recognition from Female Speech by Using Daubechies Wavelets and Artificial Neural Networks

We have created and analyzed an elicited emotional database consisting of 340 emotional speech samples under four different emotions, namely neutral, happy, sad and anger. Malayalam (one of the south Indian languages) was used for the experiment. The Daubechies-8 wavelet was used for feature extraction and an artificial neural network was used for pattern recognition. An overall recognition accuracy of 72.055% was obtained from this experiment.


Automatic Stress Detection from Speech by Using Discrete Wavelet Transforms

This paper deals with automatic recognition of stress from spoken words in the Malayalam language. Automatic stress recognition from speech is one of the most interesting areas in speech- and emotion-related studies, and it finds applications mostly in affective computing. Stress detection from speech means using a machine, with the help of machine learning algorithms, to understand the stress level in human speech. We have created and evaluated an elicited-mode database consisting of a total of four hundred isolated spoken words. The Discrete Wavelet Transform (DWT) is used for feature extraction and an Artificial Neural Network (ANN) is used for the training and testing phases of the machine learning. We have obtained an overall recognition accuracy of 85% from this experiment.


Age Recognition of Children by Using Neural Networks

Automatic recognition of the age of speakers is done in this experiment by using a novel hybrid approach of Discrete Wavelet Transforms (DWT) and Artificial Neural Networks (ANN). Malayalam (one of the south Indian languages) was used for the experiment. A speech database consisting of 200 speech samples was created and analyzed in this work, using school children under the age of 12 years. An overall age recognition accuracy of 75% has been achieved from this experiment.


Speaker and Text Dependent Automatic Emotion Recognition from Female Speech by using Artificial Neural Networks

World Congress on Nature & Biologically Inspired Computing, 2009

We have created and analyzed an elicited emotional database consisting of 340 emotional speech samples under four different emotions, namely neutral, happy, sad and anger. Malayalam (one of the south Indian languages) was used for the experiment. The Daubechies-8 wavelet was used for feature extraction and an artificial neural network was used for pattern recognition. An overall recognition accuracy of 72.055% was obtained from this experiment.


Spoken Digit Compression: A Comparative Study Between Discrete Wavelet Transforms And Linear Predictive Coding

International Journal of Computer Applications, 2010


Speaker Independent Automatic Emotion Recognition from Speech: A Comparison of MFCCs and Discrete Wavelet Transforms

2009 International Conference on Advances in Recent Technologies in Communication and Computing, 2009

Automatic Emotion Recognition (AER) from speech is one of the research domains of greatest interest to the scientific world. AER simply means making a machine able to recognize different emotions from speech. We have created and analyzed an elicited database consisting of 700 utterances under four different emotional classes, namely neutral, happy, sad and anger. Malayalam (one of the south Indian languages) was used to conduct the experiment.

