Robust Training of Social Media Image Classification Models for Rapid Disaster Response

Social Media Images Classification Models for Real-time Disaster Response

2021

Images shared on social media help crisis managers gain situational awareness and assess incurred damages, among other response tasks. Because the volume and velocity of such content are high, real-time image classification has become an urgent need for a faster response. Recent advances in computer vision and deep neural networks have enabled the development of models for real-time image classification for a number of tasks, including detecting crisis incidents, filtering irrelevant images, classifying images into specific humanitarian categories, and assessing the severity of the damage. To develop robust real-time models, it is necessary to understand the capability of publicly available pretrained models for these tasks, which is under-explored in the current state of the art of crisis informatics. In this study, we address such limitations. We investigate ten different architectures for four different tasks using the largest publicly available...

Robust Training of Social Media Image Classification Models

IEEE Transactions on Computational Social Systems

Images shared on social media help crisis managers gain situational awareness and assess incurred damages, among other response tasks. As the volume and velocity of such content are typically high, real-time image classification has become an urgent need for a faster disaster response. Recent advances in computer vision and deep neural networks have enabled the development of models for real-time image classification for a number of tasks, including detecting crisis incidents, filtering irrelevant images, classifying images into specific humanitarian categories, and assessing the severity of the damage. To develop robust real-time models, it is necessary to understand the capability of the publicly available pre-trained models for these tasks, which remains under-explored in the crisis informatics literature. In this study, we address such limitations by investigating ten different network architectures for four different tasks using the largest publicly available datasets for these tasks. We also explore various data augmentation strategies, semi-supervised techniques, and a multitask learning setup. In our extensive experiments, we achieve promising results.
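
As a rough illustration of the approach the abstract describes (fine-tuning a publicly available pre-trained backbone with standard data augmentation), the following sketch uses torchvision's ResNet-50; the class count, augmentation choices, and the synthetic FakeData stand-in are assumptions for the example, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # placeholder, e.g. four humanitarian categories

train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.2, 0.2, 0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# FakeData stands in for a real labelled crisis-image folder
# (datasets.ImageFolder("path/to/train", transform=train_tfms) in practice).
train_ds = datasets.FakeData(size=64, image_size=(3, 256, 256),
                             num_classes=NUM_CLASSES, transform=train_tfms)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new task-specific head

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Swapping the backbone (e.g., EfficientNet or DenseNet from torchvision) is a one-line change, which is what makes comparing many pretrained architectures on the same task straightforward.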

MEDIC: A Multi-Task Learning Dataset for Disaster Image Classification

ArXiv, 2021

Recent research in disaster informatics demonstrates a practical and important use case of artificial intelligence: saving human lives and reducing suffering during natural disasters based on social media content (text and images). While notable progress has been made using text, research on exploiting images remains relatively under-explored. To advance the image-based approach, we propose MEDIC, the largest social media image classification dataset for humanitarian response, consisting of 71,198 images and addressing four different tasks in a multitask learning setup. This is the first dataset of its kind for social media image, disaster response, and multi-task learning research. An important property of this dataset is its high potential to contribute to research on multi-task learning, which has recently received much interest from the machine learning community and has shown remarkable results in terms of memory, inference speed, performance, and generalization capability. Therefore,...
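
A minimal sketch of the multitask setup the abstract refers to: one shared image encoder with one classification head per task, trained with a summed per-task loss. The task names, class counts, and the ResNet-18 backbone here are illustrative assumptions, not the dataset's exact label sets or the authors' model.

```python
import torch
import torch.nn as nn
from torchvision import models

TASKS = {  # hypothetical task -> number of classes
    "disaster_types": 7,
    "informativeness": 2,
    "humanitarian": 4,
    "damage_severity": 3,
}

class MultiTaskNet(nn.Module):
    def __init__(self, tasks):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # shared feature extractor
        self.backbone = backbone
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, n_cls) for name, n_cls in tasks.items()}
        )

    def forward(self, x):
        feats = self.backbone(x)
        return {name: head(feats) for name, head in self.heads.items()}

model = MultiTaskNet(TASKS)
logits = model(torch.randn(2, 3, 224, 224))
# Training would sum per-task cross-entropy losses over the available labels
# (dummy all-zero labels shown here).
loss = sum(nn.functional.cross_entropy(logits[t], torch.zeros(2, dtype=torch.long))
           for t in TASKS)
```

Sharing the backbone across tasks is what yields the memory and inference-speed benefits mentioned in the abstract: a single forward pass serves all four predictions.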

Beyond Deep Learning: A Two-Stage Approach to Classifying Disaster Events and Needs

2024 International Conference on Information and Communication Technologies for Disaster Management (ICT-DM)

Social media's real-time nature has transformed it into a critical tool for disaster response; this study explores the use of tweets for classifying disaster types and identifying humanitarian needs in the aftermath of various disaster events. We compare traditional machine learning models such as Random Forest and Support Vector Machines with a deep learning technique, BERT. While BERT demonstrates promising results, a key finding lies in the performance of the voting classifier ensemble, a combination of traditional models. This ensemble achieves accuracy comparable to BERT and even surpasses it. Furthermore, the ensemble boasts exceptional training and inference speeds, making it ideal for real-time applications in disaster response scenarios. Our work investigates the continued value of traditional machine learning methods. By "dusting off" these models, we can achieve competitive performance while maintaining computational efficiency. Ultimately, this study empowers humanitarian organizations to leverage the power of text classification for extracting crucial insights from social media data, leading to more effective and targeted responses in times of crisis.
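
To make the "voting ensemble of traditional models" idea concrete, here is a small scikit-learn sketch combining Random Forest and a linear SVM over TF-IDF features. The toy tweets, labels, and hyper-parameters are placeholders, not the paper's setup.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-ins for labelled tweets.
tweets = [
    "flood waters rising near the bridge",
    "thousands evacuated after the wildfire",
    "concert tickets on sale now",
    "great weather for the beach today",
]
labels = ["disaster", "disaster", "not_disaster", "not_disaster"]

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(kernel="linear", random_state=0)),
    ],
    voting="hard",  # majority vote over the individual predictions
)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), ensemble)
clf.fit(tweets, labels)
print(clf.predict(["houses damaged after the earthquake"]))
```

Compared with fine-tuning BERT, this pipeline trains in seconds on CPU, which is the speed advantage the abstract highlights for real-time use.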

Analysis of Social Media Data using Multimodal Deep Learning for Disaster Response

2020

Multimedia content in social media platforms provides significant information during disaster events. The types of information shared include reports of injured or deceased people, infrastructure damage, and missing or found people, among others. Although many studies have shown the usefulness of both text and image content for disaster response purposes, past research has mostly focused on analyzing only the text modality. In this paper, we propose to use both text and image modalities of social media data to learn a joint representation using state-of-the-art deep learning techniques. Specifically, we utilize convolutional neural networks to define a multimodal deep learning architecture with a modality-agnostic shared representation. Extensive experiments on real-world disaster datasets show that the proposed multimodal architecture yields better performance than models trained using a single modality (e.g., either text or image).
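
The sketch below illustrates one common way to build such a joint text-image representation: separate encoders projected into a shared space and fused by concatenation before a common classifier. The vocabulary size, dimensions, ResNet-18 image encoder, and bag-of-words text encoder are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=256, num_classes=4):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()
        self.image_encoder = cnn                                  # 512-d image features
        self.text_encoder = nn.EmbeddingBag(vocab_size, embed_dim)  # averaged token embeddings
        self.image_proj = nn.Linear(512, hidden)
        self.text_proj = nn.Linear(embed_dim, hidden)
        self.classifier = nn.Sequential(                          # shared representation -> label
            nn.ReLU(),
            nn.Linear(2 * hidden, num_classes),
        )

    def forward(self, image, token_ids):
        img = self.image_proj(self.image_encoder(image))
        txt = self.text_proj(self.text_encoder(token_ids))
        return self.classifier(torch.cat([img, txt], dim=1))

model = MultimodalClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 20000, (2, 30)))
```

The fusion layer is what makes the representation "modality-agnostic" in spirit: downstream layers see a single joint vector rather than text or image features separately.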

Using a combination of human insights and 'deep learning' for real-time disaster communication

Progress in Disaster Science, 2019

Using social media during natural disasters has become commonplace globally. In the U.S., public social media platforms are often a go-to because people believe that the 9-1-1 system becomes overloaded during emergencies and that first responders will see their posts. While social media requests may help save lives, these posts are difficult to find because there is more noise on public social media than clear signals of who needs help. This study compares human-coded images posted during 2017's Hurricane Harvey to machine-learned 'deep learning' classification methods. Our framework uses the VGG-16 convolutional neural network for feature extraction and multilayer perceptron classifiers to classify the urgency and time period of a given image. Our qualitative results show that unique disaster experiences are not always captured by machine-learned methods. These methods work together to parse through the high levels of non-relevant content on social media to find relevant content and requests.
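
A minimal sketch of the described pipeline, assuming frozen VGG-16 features fed to a multilayer perceptron classifier (here scikit-learn's MLPClassifier). The random images and urgency labels are placeholders standing in for preprocessed, human-coded Hurricane Harvey images.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.neural_network import MLPClassifier

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = vgg.classifier[:-1]   # drop the final ImageNet layer -> 4096-d features
vgg.eval()

@torch.no_grad()
def extract_features(batch):           # batch: (N, 3, 224, 224) preprocessed images
    return vgg(batch).numpy()

# Toy stand-ins for preprocessed images and binary urgency labels.
images = torch.randn(8, 3, 224, 224)
urgency = np.array([0, 1, 0, 1, 1, 0, 0, 1])

clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300, random_state=0)
clf.fit(extract_features(images), urgency)
```

A second MLP trained on the same features would handle the time-period label, keeping the expensive CNN pass shared across classifiers.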

HumAID: Human-Annotated Disaster Incidents Data from Twitter with Deep Learning Benchmarks

2021

Social networks are widely used for information consumption and dissemination, especially during time-critical events such as natural disasters. Despite its large volume, social media content is often too noisy for direct use in any application. Therefore, it is important to filter, categorize, and concisely summarize the available content to facilitate effective consumption and decision-making. To address such issues, automatic classification systems have been developed using supervised modeling approaches, thanks to earlier efforts on creating labeled datasets. However, existing datasets are limited in different aspects (e.g., size, presence of duplicates) and are less suitable for supporting more advanced, data-hungry deep learning models. In this paper, we present a new large-scale dataset with ∼77K human-labeled tweets, sampled from a pool of ∼24 million tweets across 19 disaster events that happened between 2016 and 2019. Moreover, we propose a data collection and sam...
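
This is not the paper's pipeline, only an illustration of the duplicate problem the abstract points to: retweets and near-copies can be collapsed by normalising tweet text before sampling candidates for annotation. The normalisation rules are assumptions for the example.

```python
import re

def normalise(text: str) -> str:
    text = text.lower()
    text = re.sub(r"https?://\S+", "", text)   # drop URLs
    text = re.sub(r"[@#]\w+", "", text)        # drop mentions and hashtags
    return re.sub(r"\s+", " ", text).strip()

def deduplicate(tweets):
    seen, unique = set(), []
    for t in tweets:
        key = normalise(t)
        if key and key not in seen:
            seen.add(key)
            unique.append(t)
    return unique

tweets = [
    "Flooding reported downtown https://t.co/abc #HurricaneMatthew",
    "flooding reported downtown https://t.co/xyz",
    "Shelter open at the high school",
]
print(deduplicate(tweets))  # keeps 2 of the 3 tweets
```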

DisasterNet: Evaluating the Performance of Transfer Learning to Classify Hurricane-Related Images Posted on Twitter

Proceedings of the 53rd Hawaii International Conference on System Sciences

Social media platforms are increasingly used during disasters. In the U.S., victims consider these platforms to be reliable news sources and they believe first responders will see what they publicly post [1,2]. While having ways to request help during disasters might save lives, this information is difficult to find because non-relevant content on social media completely overshadows content reflective of who needs help. To resolve this issue, we develop a framework for classifying hurricane-related images that have been human-annotated. Our transfer learning framework classifies each image using the VGG-16 convolutional neural network and multi-layer perceptron classifiers according to the urgency, relevance, and time period, in addition to the presence of damage and relief motifs [3]. We find not only that our framework functions as an accurate method for hurricane-related image classification, but also that real-time classification of social media images using a small training set is possible.
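
A sketch of the transfer-learning setup described above, assuming a small labelled set: the VGG-16 convolutional base stays frozen and only a new multi-layer perceptron head is trained per label dimension (urgency shown; the head sizes, class count, and toy batch are placeholders).

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False               # keep ImageNet convolutional filters fixed

model.classifier = nn.Sequential(         # new MLP head for one task (e.g. urgency)
    nn.Linear(512 * 7 * 7, 512),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, 2),
)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Toy batch standing in for a small annotated hurricane-image set.
images, labels = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the small head is updated, training converges quickly even with limited annotations, which is what makes the "small training set" claim plausible in practice.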

A disaster classification application using convolutional neural network by performing data augmentation

Indonesian Journal of Electrical Engineering and Computer Science

Natural disasters are catastrophic events that wreak havoc on human life. They occur at the most unpredictable times and are beyond human control. Their aftermath is devastating, ranging from loss of life to the relocation of large groups of the population. With developments in the domains of computer vision (CV) and image processing, machine learning and deep learning models can ingest images and make predictions. Deep learning methods are robust and provide strong results even on image data. The detection of natural disasters without human intervention requires the help of deep learning techniques. This project employs a multi-layered convolutional neural network (CNN) architecture to classify images of natural disasters such as earthquakes, floods, cyclones, and wildfires.
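
A toy sketch in the spirit of this setup: a small multi-layer CNN trained on augmented images of the four disaster types. The layer sizes, augmentation choices, and the synthetic FakeData stand-in (which would be an ImageFolder over a real dataset in practice) are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

train_ds = datasets.FakeData(size=64, image_size=(3, 128, 128),
                             num_classes=4, transform=augment)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

cnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
    nn.Linear(256, 4),                          # earthquake, flood, cyclone, wildfire
)

optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
for images, labels in train_dl:
    optimizer.zero_grad()
    loss = criterion(cnn(images), labels)
    loss.backward()
    optimizer.step()
```

The random flips and rotations enlarge the effective training set, which is the main benefit data augmentation brings to a CNN trained from scratch on a modest disaster-image collection.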