An Enhanced Deep Convolutional Neural Network for Classifying Indian Classical Dance Forms

Indian Classical Dance Action Identification and Classification with Convolutional Neural Networks

Advances in Multimedia, 2018

Extracting and recognizing complex human movements from unconstrained online and offline video sequences is a challenging task in computer vision. This paper proposes the classification of Indian classical dance actions using a powerful artificial intelligence tool: the convolutional neural network (CNN). In this work, human action recognition is performed on Indian classical dance videos from both offline (controlled recordings) and online (live performances, YouTube) sources. The offline data are created with ten different subjects performing 200 familiar dance mudras/poses from different Indian classical dance forms against various background environments. The online dance data are collected from YouTube for ten different subjects. In both cases, each dance pose occupies 60 frames of a video. CNN training is performed with 8 different sample sets, each consisting of multiple subjects. The remaining 2 sample sets are used for testing the trained CNN. Differe...
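The subject-wise evaluation protocol described above (8 subject sample sets train the CNN, the remaining 2 test it, with each pose spanning 60 frames) can be sketched in a few lines. This is a minimal illustration of the split logic, not the authors' code; all names and constants are assumptions taken from the abstract.

```python
# Sketch of a subject-wise train/test split: 10 subject sample sets,
# 8 used for CNN training and 2 held out for testing (per the abstract).
from itertools import combinations

NUM_SUBJECTS = 10
TRAIN_SUBJECTS = 8
FRAMES_PER_POSE = 60  # each dance pose occupies 60 frames of a video

def subject_splits(num_subjects=NUM_SUBJECTS, train_size=TRAIN_SUBJECTS):
    """Yield every (train, test) partition of the subject sample sets."""
    subjects = list(range(num_subjects))
    for train in combinations(subjects, train_size):
        test = [s for s in subjects if s not in train]
        yield list(train), test

# One concrete split: subjects 0-7 train the CNN, subjects 8-9 test it.
train, test = next(subject_splits())
print(train, test)
```

Splitting by subject rather than by frame avoids leaking near-identical frames of the same performer into both the training and test sets.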

Classification Of Asmyukta Mudras In Indian Classical Dance Using Handcrafted And Pre-Trained Features With Machine Learning And Deep Learning Methods

International Journal of Computer Science and Information Security (IJCSIS), Vol. 22, No. 4, July-August, 2024

To attain prominent outcomes in the classification of mudras in Indian classical dance, a more sophisticated classification model needs to be developed with the aid of Artificial Intelligence (AI), which can make these cultural forms more engaging and more widely accepted globally. We propose an automated hybrid classification model for Asmyukta Mudras that extracts prominent features from the dataset. The collected dataset initially underwent preprocessing, followed by UNET-based segmentation to extract the features of the mudras. We employed two methods for feature extraction: handcrafted features (Hu Moments) to preserve the details of the dance gestures, and pre-trained models (VGG16, VGG19, and a Dense Neural Network (DNN)) to automatically extract relevant features from the images. An Extra Trees classifier was utilized to reduce the dimensionality of the features extracted by these methods and improve performance. A one-dimensional hybrid feature vector was created by efficiently fusing these two types of features. This feature vector was then applied to three different combinations of classification models, namely model1, model2, and model3. Comparing the performance of these three models, the maximum classification accuracy was obtained with the fused hybrid feature vector and the VGG19 model (95%). The other two models also performed well, with VGG16 reaching 93% and the DNN 90%. These proposed models achieved better classification accuracy than existing models in the literature. The results emphasize the value of fusion-based hybrid feature classification models, which effectively improve the classification efficacy of traditional dance mudras.
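The fusion step described above (handcrafted and pre-trained features each reduced with an Extra Trees-based selector, then concatenated into one 1-D hybrid vector) can be sketched as follows. This is an assumed pipeline, not the authors' implementation: the feature matrices here are random stand-ins for the Hu Moments and VGG feature vectors, and the class count is a placeholder.

```python
# Sketch of fusing handcrafted and pre-trained features into one hybrid
# vector, with Extra Trees-based dimensionality reduction (assumed setup).
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 5, size=n)       # placeholder: 5 mudra classes
hu = rng.normal(size=(n, 7))         # stand-in for 7 Hu moments per image
cnn = rng.normal(size=(n, 512))      # stand-in for VGG16/VGG19 features

def reduce_features(X, y):
    """Keep only features the Extra Trees model ranks as important."""
    selector = SelectFromModel(
        ExtraTreesClassifier(n_estimators=50, random_state=0))
    return selector.fit_transform(X, y)

# Fuse the two reduced feature sets into one 1-D vector per sample.
hybrid = np.hstack([reduce_features(hu, y), reduce_features(cnn, y)])
print(hybrid.shape)
```

The fused matrix (one row per image) would then be fed to the downstream classifiers (model1, model2, model3 in the paper's terminology).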

An Automatic Detection of Fundamental Postures in Vietnamese Traditional Dances

Preserving and promoting intangible cultural heritage is an essential problem of interest; the world's cultural heritage has been accumulated and respected throughout the development of human society. Toward the preservation of traditional dances, this paper presents a significant step in our research sequence to build an intelligent storage repository that helps manage large-scale heterogeneous digital content efficiently, particularly in the dance domain. We concentrate on classifying the fundamental movements of Vietnamese Traditional Dances (VTDs), which is the foundation for automatically detecting the motions of a dancer's body parts. We also propose a framework that classifies basic movements by coupling a sequential aggregation of Deep-CNN architectures (to extract the features) with a Support Vector Machine (to classify the movements). In this study, we automatically detect and extract the primary movements of VTDs, then store the extracted concepts in an ontology that serves reasoning, query answering, and searching of dance videos.
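The coupled Deep-CNN + SVM design above can be sketched schematically: a CNN backbone produces a feature vector per clip, and an SVM performs the final movement classification. In this minimal sketch the CNN features are simulated with random vectors (the backbone itself is out of scope here); the feature dimension and class count are assumptions, not values from the paper.

```python
# Sketch of the CNN-features -> SVM classification coupling (assumed
# dimensions; random vectors stand in for real Deep-CNN features).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 128))      # stand-in Deep-CNN feature vectors
y = rng.integers(0, 4, size=300)     # placeholder: 4 basic movement classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # SVM classifies the movements
acc = clf.score(X_te, y_te)
print(acc)
```

With real features the accuracy would reflect how separable the movement classes are; here it only demonstrates the wiring of the two stages.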

Posture and sequence recognition for Bharatanatyam dance performances using machine learning approach

ArXiv, 2019

Understanding the underlying semantics of performing arts like dance is a challenging task. Dance is multimedia in nature and spans time as well as space. Capturing and analyzing the multimedia content of dance is useful for preserving cultural heritage, building video recommendation systems, and assisting learners through tutoring systems. To develop an application for dance, three aspects of dance analysis need to be addressed: 1) segmentation of the dance video to find the representative action elements, 2) matching or recognition of the detected action elements, and 3) recognition of the dance sequences formed by combining a number of action elements under certain rules. This paper attempts to solve these three fundamental problems of dance analysis for understanding the underlying semantics of dance forms. Our focus is on an Indian Classical Dance (ICD) form known as Bharatanatyam. As dance is driven by music, we use music as well as motion information for key postur...
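The three-stage analysis the abstract lists can be sketched as a simple pipeline. The logic below is placeholder-level (fixed-length segmentation, majority-vote recognition, dictionary lookup of sequences); the paper's actual methods use music as well as motion cues, and all labels here are hypothetical.

```python
# Schematic of the three dance-analysis stages: 1) segment the video
# into action elements, 2) recognize each element, 3) recognize the
# sequence the elements form under combination rules (placeholder logic).
def segment(frames, window=60):
    """Split a frame stream into fixed-length action-element windows
    (a stand-in for the paper's music-driven segmentation)."""
    return [frames[i:i + window] for i in range(0, len(frames), window)]

def recognize_element(clip):
    """Placeholder posture recognizer: the majority frame label."""
    return max(set(clip), key=clip.count)

def recognize_sequence(elements, known_sequences):
    """Match the recognized elements against known dance sequences."""
    return known_sequences.get(tuple(elements), "unknown")

# Toy input: per-frame posture labels for two 60-frame action elements.
frames = ["natta"] * 60 + ["katti"] * 60
elements = [recognize_element(clip) for clip in segment(frames)]
result = recognize_sequence(elements, {("natta", "katti"): "adavu-1"})
print(result)  # -> adavu-1
```

The separation of stages mirrors the paper's framing: each stage can be improved independently (better segmentation, better posture models, richer sequence grammars) without changing the overall pipeline.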