Two-Phase Multimodal Neural Network for App Categorization using APK Resources
2020 IEEE 14th International Conference on Semantic Computing (ICSC), 2020
Abstract
Following an exponential increase in the number of applications created every year, there are currently over 2.5 million apps in the Google Play Store. Consequently, there has been a sharp rise in the number of apps downloaded by users onto their devices. However, limited research has been done on the navigability, grouping, and searching of applications on these devices. Current methods of app classification require manual labelling or information extraction from the app store. Such methods are not only resource-intensive and time-consuming but also not scalable. To overcome these issues, the authors propose a novel architecture for classifying applications into categories using only the information available in their application packages (APKs), thereby removing any external dependency and making the entire process run completely on-device. A multimodal deep learning approach is followed in a two-phase training scheme: neural models are first trained independently on distinct sets of information extracted from the APKs, and the learned weights are then assimilated and fine-tuned to incorporate the combined knowledge. Our experiments show significant improvement in the evaluation metrics for app classification and clustering over the set benchmarks. The proposed architecture enables a fully on-device solution for app categorization.
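The two-phase scheme described in the abstract can be sketched roughly as follows. This is a hypothetical illustration in PyTorch, not the paper's actual implementation: the modality names (text and image features from the APK), layer sizes, and category count are all assumptions made for the example.

```python
# Hypothetical sketch of a two-phase multimodal training scheme.
# Phase 1: each modality encoder is trained independently with its own
# classification head. Phase 2: the learned encoders are reused, their
# features fused, and a joint head fine-tuned on the combined representation.
import torch
import torch.nn as nn

NUM_CATEGORIES = 10  # assumed number of app categories


class ModalityEncoder(nn.Module):
    """Encoder for one set of APK-derived features (e.g. strings or icons)."""

    def __init__(self, in_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(hidden_dim, NUM_CATEGORIES)  # phase-1 classifier

    def forward(self, x):
        return self.head(self.net(x))


# Phase 1: independent per-modality models (training loops omitted).
text_enc = ModalityEncoder(in_dim=16)   # e.g. textual APK resources
image_enc = ModalityEncoder(in_dim=8)   # e.g. icon/image features


class FusedClassifier(nn.Module):
    """Phase 2: concatenate the pretrained encoders' features and fine-tune."""

    def __init__(self, enc_a: ModalityEncoder, enc_b: ModalityEncoder,
                 hidden_dim: int = 32):
        super().__init__()
        self.enc_a = enc_a.net  # drop the phase-1 heads,
        self.enc_b = enc_b.net  # keep only the learned feature extractors
        self.fusion = nn.Linear(2 * hidden_dim, NUM_CATEGORIES)

    def forward(self, xa, xb):
        return self.fusion(torch.cat([self.enc_a(xa), self.enc_b(xb)], dim=-1))


model = FusedClassifier(text_enc, image_enc)
logits = model(torch.randn(4, 16), torch.randn(4, 8))  # batch of 4 apps
print(logits.shape)  # one score per category for each app
```

In a sketch like this, phase 2 would continue with an ordinary fine-tuning loop over `model.parameters()`, so both the fusion head and the pretrained encoders adapt to the combined signal.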