Bangla Image Caption Generation through CNN-Transformer based Encoder-Decoder Network

Bornon: Bengali Image Captioning with Transformer-based Deep learning approach

ArXiv, 2021

Image captioning with an Encoder-Decoder approach, where a CNN serves as the encoder and a sequence generator such as an RNN serves as the decoder, has proven to be very effective. However, this method has the drawback that the sequence must be processed in order. To overcome this drawback, some researchers have utilized the Transformer model to generate captions from images using English datasets, but none of them generated Bengali captions with a Transformer. We therefore utilized three different Bengali datasets to generate Bengali captions from images using the Transformer model. Additionally, we compared the performance of the Transformer-based model with a visual attention-based Encoder-Decoder approach. Finally, we compared the results of the Transformer-based model with those of other models that employed different Bengali image captioning datasets.
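
The recipe this abstract describes, a pretrained CNN feeding a Transformer that generates the caption, can be sketched compactly in PyTorch. The sketch below is illustrative only: it assumes ResNet-50 grid features, and the vocabulary size, layer counts, and dimensions are placeholders rather than values from the paper.

```python
# Minimal sketch of a CNN + Transformer captioner (not the paper's exact model).
# Assumptions: ResNet-50 grid features, illustrative hyperparameters and vocab size.
import torch
import torch.nn as nn
import torchvision.models as models

class TransformerCaptioner(nn.Module):
    def __init__(self, vocab_size=8000, d_model=256, nhead=8, num_layers=3, max_len=40):
        super().__init__()
        # Pretrained CNN backbone; the pooling/classifier layers are dropped so the
        # 7x7 spatial grid of 2048-d features is kept as a sequence of 49 "tokens".
        backbone = models.resnet50(weights="DEFAULT")
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Linear(2048, d_model)

        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, images, captions):
        # images: (B, 3, 224, 224); captions: (B, T) token ids, used with teacher forcing
        feats = self.cnn(images)                          # (B, 2048, 7, 7)
        memory = self.proj(feats.flatten(2).transpose(1, 2))   # (B, 49, d_model)

        pos = torch.arange(captions.size(1), device=captions.device)
        tgt = self.tok_emb(captions) + self.pos_emb(pos)

        # Causal mask so each position only attends to earlier caption tokens.
        T = captions.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf"), device=captions.device), diagonal=1)
        dec = self.transformer(memory, tgt, tgt_mask=mask)
        return self.out(dec)                              # (B, T, vocab_size)

model = TransformerCaptioner()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 8000, (2, 20)))
print(logits.shape)   # torch.Size([2, 20, 8000])
```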

Automatic Bangla Image Captioning Based on Transformer Model in Deep Learning

International Journal of Advanced Computer Science and Applications (IJACSA), 2023

Image captioning has become a crucial aspect of contemporary artificial intelligence because it combines two central areas of the field: Computer Vision and Natural Language Processing. Bangla currently stands as the seventh most widely spoken language globally, and image captioning in Bangla has therefore gained recognition as a significant research direction. Many established datasets exist in English, but there is no standard dataset in Bangla. For our research, we used the BAN-Cap dataset, which contains 8091 images with 40455 sentences. Many effective Encoder-Decoder and visual attention approaches are used for image captioning, where a CNN serves as the encoder and an RNN as the decoder. In this study, we instead propose a Transformer-based image captioning model with different pretrained image feature extractors, namely ResNet50, InceptionV3, and VGG16, on the BAN-Cap dataset; we evaluate its efficiency and accuracy with metrics such as BLEU, METEOR, ROUGE, and CIDEr, and we identify the drawbacks of other models.
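
Comparing ResNet50, InceptionV3, and VGG16 amounts to swapping the frozen feature extractor placed in front of the captioning model. A hedged torchvision sketch of that swap is below; the pooling-to-vector step and feature dimensions are standard values for these backbones, not details taken from the paper.

```python
# Sketch: a fixed-length image feature vector from interchangeable pretrained
# backbones (ResNet50 / InceptionV3 / VGG16). Dimensions are the standard ones.
import torch
import torch.nn as nn
import torchvision.models as models

def build_extractor(name):
    """Return (frozen backbone, feature dim); the backbone maps images to (B, feat_dim)."""
    if name == "resnet50":
        m = models.resnet50(weights="DEFAULT")
        m.fc = nn.Identity()                       # drop the ImageNet classifier head
        feat_dim = 2048
    elif name == "inception_v3":
        # In eval() mode only the main branch is returned; inputs must be 299x299.
        m = models.inception_v3(weights="DEFAULT")
        m.fc = nn.Identity()
        feat_dim = 2048
    elif name == "vgg16":
        m = models.vgg16(weights="DEFAULT")
        m.classifier = nn.Sequential(*list(m.classifier.children())[:-1])  # keep 4096-d fc7
        feat_dim = 4096
    else:
        raise ValueError(name)
    m.eval()
    for p in m.parameters():
        p.requires_grad = False                    # backbone stays frozen; only the captioner trains
    return m, feat_dim

extractor, dim = build_extractor("resnet50")
with torch.no_grad():
    feats = extractor(torch.randn(4, 3, 224, 224))
print(feats.shape, dim)                            # torch.Size([4, 2048]) 2048
```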

Image Captioning in Nepali Using CNN and Transformer Decoder

Journal of Engineering and Sciences, 2023

Image captioning has attracted huge attention from deep learning researchers. The approach combines image- and text-based deep learning techniques to create written descriptions of images automatically. There has been limited research on image captioning in the Nepali language, with most studies focusing on English datasets; consequently, there are no publicly available datasets in Nepali. Most previous works are based on the RNN-CNN approach, which produces inferior results compared to image captioning with the Transformer model. Similarly, using the BLEU score as the only evaluation metric cannot justify the quality of the produced captions. To address this gap, in this research work the well-known "Flickr8k" English dataset is translated into Nepali and then manually corrected to ensure accurate translations. The conventional Transformer comprises encoder and decoder modules, both of which contain a multi-head attention mechanism; this makes the model complex and computationally expensive. Hence, we propose a novel approach in which the encoder module of the Transformer is removed entirely and only the decoder part is used, in conjunction with a CNN that acts as a feature extractor. The image features are extracted with MobileNetV3 Large, while the Transformer decoder processes these feature vectors together with the input text sequence to generate appropriate captions. The system's effectiveness is measured using metrics such as the BLEU and METEOR scores to judge the quality and precision of the generated captions.
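
The architectural change described here, dropping the Transformer encoder and letting the decoder cross-attend directly to CNN features, can be sketched as follows. This is one plausible reading under stated assumptions (MobileNetV3 Large grid features used directly as the decoder's memory, illustrative sizes), not the authors' code.

```python
# Sketch: Transformer decoder only, cross-attending to MobileNetV3 Large features.
# Hyperparameters and the feature-to-memory reshaping are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class DecoderOnlyCaptioner(nn.Module):
    def __init__(self, vocab_size=6000, d_model=256, nhead=8, num_layers=2, max_len=40):
        super().__init__()
        backbone = models.mobilenet_v3_large(weights="DEFAULT")
        self.cnn = backbone.features                     # (B, 960, 7, 7) spatial grid
        self.proj = nn.Linear(960, d_model)              # project each grid cell to d_model

        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, images, captions):
        feats = self.cnn(images)                                   # (B, 960, 7, 7)
        memory = self.proj(feats.flatten(2).transpose(1, 2))       # (B, 49, d_model); no encoder self-attention
        pos = torch.arange(captions.size(1), device=captions.device)
        tgt = self.tok_emb(captions) + self.pos_emb(pos)
        T = captions.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf"), device=captions.device), diagonal=1)
        return self.out(self.decoder(tgt, memory, tgt_mask=mask))  # (B, T, vocab_size)

model = DecoderOnlyCaptioner()
print(model(torch.randn(2, 3, 224, 224), torch.randint(0, 6000, (2, 15))).shape)
```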

CapNet: An Encoder-Decoder based Neural Network Model for Automatic Bangla Image Caption Generation

International Journal of Advanced Computer Science and Applications

Automatic caption generation from images has become an active research topic in the fields of Computer Vision (CV) and Natural Language Processing (NLP). Machine-generated image captions play a vital role for visually impaired people: converting the caption to speech gives them a better understanding of their surroundings. Though a significant amount of research has been conducted on automatic caption generation in other languages, far too little effort has been devoted to Bangla image caption generation. In this paper, we propose an encoder-decoder based model which takes an image as input and generates the corresponding Bangla caption as output. The encoder network consists of a pretrained image feature extractor called ResNet-50, while the decoder network consists of Bidirectional LSTMs for caption generation. The model has been trained and evaluated on a Bangla image captioning dataset named BanglaLekhaImageCaptions. The proposed model achieved a training accuracy of 91% and BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 0.81, 0.67, 0.57, and 0.51, respectively. Moreover, a comparative study of different pretrained feature extractors such as VGG-16 and Xception is presented. Finally, the proposed model has been deployed on an embedded device to analyse inference time and power consumption.
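
Pairing a global ResNet-50 feature with a Bidirectional LSTM over the partial caption is commonly done as a "merge" model that predicts the next word. The sketch below is one plausible reading of the abstract with illustrative sizes, not the authors' exact network.

```python
# Sketch: merge-style next-word predictor combining a ResNet-50 image vector
# with a Bidirectional LSTM over the partial caption. Sizes are illustrative.
import torch
import torch.nn as nn
import torchvision.models as models

class MergeCaptioner(nn.Module):
    def __init__(self, vocab_size=7000, embed_dim=256, hidden=256):
        super().__init__()
        cnn = models.resnet50(weights="DEFAULT")
        cnn.fc = nn.Identity()                    # 2048-d global image descriptor
        self.cnn = cnn
        self.img_fc = nn.Linear(2048, hidden)

        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden + 2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, vocab_size),        # scores for the next word
        )

    def forward(self, images, partial_captions):
        img = torch.relu(self.img_fc(self.cnn(images)))           # (B, hidden)
        _, (h, _) = self.bilstm(self.embed(partial_captions))     # h: (2, B, hidden)
        text = torch.cat([h[0], h[1]], dim=1)                     # forward + backward states
        return self.classifier(torch.cat([img, text], dim=1))     # (B, vocab_size)

model = MergeCaptioner()
scores = model(torch.randn(2, 3, 224, 224), torch.randint(0, 7000, (2, 12)))
print(scores.shape)   # torch.Size([2, 7000])
```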

Image to Bengali Caption Generation Using Deep CNN and Bidirectional Gated Recurrent Unit

2020 23rd International Conference on Computer and Information Technology (ICCIT), 2020

There has been little research on the linguistic characteristics of the Bengali language. Bengali is spoken by about 193 million people globally and is one of the top ten most spoken languages worldwide. In this paper, a CNN and Bidirectional GRU architecture is proposed for producing a natural-language caption from an image in the Bengali language. Bangladeshi people may use this study to understand one another better, break language barriers, and increase their cultural understanding. This study would also immensely help blind people in their daily lives. The encoder-decoder approach was used in this paper for captioning. We used a pre-trained deep CNN, InceptionV3, as the image encoder to interpret, identify, and annotate the dataset's images, and a Bidirectional GRU architecture as the decoder to produce captions. In order to deliver the finest and most subtle Bengali captions from our model, argmax search and beam search are included. We proposed a new dataset named BNATURE that contain...
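
Argmax (greedy) search and beam search differ only in how many partial captions are kept at each step. The framework-agnostic sketch below illustrates that difference; `next_word_scores` is a hypothetical stand-in for the decoder's per-step log-probabilities, not a function from the paper, and the toy scorer at the end exists only to make the script runnable.

```python
# Sketch: greedy (argmax) vs. beam search over a per-step scoring function.
# `next_word_scores(prefix)` is a hypothetical callable returning
# {token: log-probability} for the next word given the caption prefix.
import heapq

START, END = "<start>", "<end>"

def greedy_decode(next_word_scores, max_len=20):
    caption = [START]
    for _ in range(max_len):
        scores = next_word_scores(caption)
        word = max(scores, key=scores.get)        # argmax over the vocabulary
        caption.append(word)
        if word == END:
            break
    return caption

def beam_search_decode(next_word_scores, beam_width=3, max_len=20):
    beams = [(0.0, [START])]                      # (cumulative log-prob, prefix)
    for _ in range(max_len):
        candidates = []
        for logp, prefix in beams:
            if prefix[-1] == END:                 # finished captions carry over unchanged
                candidates.append((logp, prefix))
                continue
            for word, wlogp in next_word_scores(prefix).items():
                candidates.append((logp + wlogp, prefix + [word]))
        # Keep only the `beam_width` highest-scoring partial captions.
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
        if all(p[-1] == END for _, p in beams):
            break
    return max(beams, key=lambda c: c[0])[1]

# Toy scorer (illustrative only): prefers ending the caption after three words.
def toy_scores(prefix):
    return {"ছবি": -1.0, END: -0.5 if len(prefix) > 3 else -5.0}

print(greedy_decode(toy_scores))
print(beam_search_decode(toy_scores))
```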

TextMage: The Automated Bangla Caption Generator Based On Deep Learning

Neural Networks and Deep Learning have seen an upsurge of research in the past decade due to their improved results. Generating text from a given image is a crucial task that requires combining both computer vision and natural language processing in order to understand an image and represent it in a natural language. However, existing works have all been done in a particular linguistic domain and on the same set of data, which causes the developed systems to perform poorly on images that belong to other locales' geographical contexts. TextMage is a system that is capable of understanding visual scenes belonging to the Bangladeshi geographical context and uses this knowledge to describe what it understands in Bengali. We have trained a model on our previously developed and published dataset named BanglaLekhaImageCaptions. This dataset contains 9,154 images along with two annotations for each image. In order to assess performance, the prop...

EfficientNet-Transformer for image captioning in Bahasa

VII INTERNATIONAL CONFERENCE “SAFETY PROBLEMS OF CIVIL ENGINEERING CRITICAL INFRASTRUCTURES” (SPCECI2021)

Image captioning, which provides a visual understanding of images, enables semantic-based information retrieval. Research on image captioning consistently aims to produce better descriptions of images, yet only a little work exists on image captioning in Bahasa Indonesia. The existing studies use sequence-to-sequence (seq2seq) models with attention mechanisms. These models give good results, but they have crucial drawbacks: the seq2seq model performs poorly on long sentences, while the attention mechanism consumes a lot of resources because it relies on a Recurrent Neural Network (RNN). Inspired by the success of the Transformer architecture in machine translation, this research focuses on developing an image captioning model for Bahasa using the Transformer architecture. The Transformer accelerates the learning process since it uses only attention mechanisms, without relying on an RNN. Moreover, we use EfficientNet, one of the state-of-the-art architectures, to extract image features. We use the MS COCO 2014 dataset with captions translated into Bahasa. We ran several experiments with hyperparameter tuning and selected the best model, which gives BLEU-{1,2,3,4} scores of {77.42, 67.11, 60.52, 50.46}. Both the evaluation scores and the inference results show that the EfficientNet-Transformer model generates very good captions.
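
BLEU-1 through BLEU-4, as reported above, are corpus BLEU computed with n-gram orders 1 to 4. A small NLTK-based sketch of that computation follows; the two Indonesian captions are made-up examples, and a real evaluation would loop over the whole test split.

```python
# Sketch: BLEU-1..4 for generated captions with NLTK. The captions are made-up
# examples; replace them with the test-set references and model outputs.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [
    [["seorang", "anak", "bermain", "bola", "di", "pantai"]],   # list of reference captions per image
]
hypotheses = [
    ["anak", "bermain", "bola", "di", "pantai"],                # model output for the same image
]

smooth = SmoothingFunction().method1   # avoids zero scores when an n-gram order has no match
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))                  # uniform weights over 1..n-grams
    score = corpus_bleu(references, hypotheses, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.4f}")
```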

Oboyob: A sequential-semantic Bengali image captioning engine

Journal of Intelligent & Fuzzy Systems, 2019

Understanding the context of an input image and generating a textual description from it is an active and challenging research topic in computer vision and natural language processing. In the case of the Bengali language, however, the problem is still largely unexplored. In this paper, we present a standard approach to Bengali image caption generation through subsampling of a machine-translated dataset. We then apply several pre-processing techniques with state-of-the-art CNN-LSTM architecture-based models. The experiments are conducted on the standard Flickr-8K dataset, with several modifications applied to adapt it to the Bengali language. The subsampled training-caption dataset is built for both Bengali and English, and 16 distinct models are developed over the entire training process. The trained models for both languages are analysed with respect to several caption evaluation metrics. Further, we establish a baseline performance in Bengali image captioning, highlighting the limitations of current word embedding approaches compared to internal local embeddings.

Attention Based Image Caption Generation (ABICG) using Encoder-Decoder Architecture

2023 5th International Conference on Smart Systems and Inventive Technology (ICSSIT), 2023

Image captioning is used to develop sentences describing the scenes captured in an image. The applications of image captioning are vast, although it is a tedious task for a machine to learn what a human is capable of: the model must be built so that, when it reads a scene, it recognizes it and reproduces to-the-point captions or descriptions. The generated descriptions must be semantically and syntactically accurate. The availability of Artificial Intelligence (AI) and Machine Learning algorithms, viz. Natural Language Processing (NLP) and Deep Learning (DL), makes the task easier. In the proposed paper, the Bahdanau attention mechanism is used together with an Encoder-Decoder architecture to generate captions for an image. A pre-trained Convolutional Neural Network (CNN), the InceptionV3 architecture, is used to extract image features, and a Recurrent Neural Network (RNN), the Gated Recurrent Unit (GRU) architecture, is used to generate captions. The model is trained on the Flickr8k dataset and improves accuracy by around 10% over the present state of the art.
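
Bahdanau attention is additive: the decoder's hidden state and each encoder feature are projected by small linear layers, summed, squashed with tanh, and scored to produce per-region weights. A minimal PyTorch sketch of that scoring step is below; the tensor sizes (an 8x8 InceptionV3 grid of 2048-d features) are typical values, not figures from the paper.

```python
# Sketch: Bahdanau (additive) attention over CNN grid features, as used between
# a CNN encoder and a GRU decoder. Sizes are illustrative.
import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, attn_dim=256):
        super().__init__()
        self.W_feat = nn.Linear(feat_dim, attn_dim)     # projects each image region
        self.W_hid = nn.Linear(hidden_dim, attn_dim)    # projects the decoder state
        self.v = nn.Linear(attn_dim, 1)                 # scalar score per region

    def forward(self, features, hidden):
        # features: (B, R, feat_dim) grid regions; hidden: (B, hidden_dim) decoder state
        scores = self.v(torch.tanh(self.W_feat(features) + self.W_hid(hidden).unsqueeze(1)))
        weights = torch.softmax(scores, dim=1)          # (B, R, 1), sums to 1 over regions
        context = (weights * features).sum(dim=1)       # (B, feat_dim) weighted image summary
        return context, weights.squeeze(-1)

attn = BahdanauAttention()
ctx, w = attn(torch.randn(2, 64, 2048), torch.randn(2, 512))
print(ctx.shape, w.shape)   # torch.Size([2, 2048]) torch.Size([2, 64])
```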

Automated Image Captioning -Model Based on CNN -GRU Architecture

IRJET, 2022

When presented with an image, humans can easily identify the objects in it and the relationships among them. Image captioning is the corresponding AI task: an image is given as input, and the objects and the relationships between them are generated as a caption. The model is based on an encoder-decoder architecture trained on the Flickr30k dataset. This paper describes in depth the deep learning techniques used for caption generation with convolutional neural networks and recurrent neural networks. The results are then translated into Hindi. The model is built with a focus on helping visually impaired people through voice-assistant functionality.
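
The last two steps mentioned here, translating the generated caption into Hindi and speaking it aloud for a visually impaired user, can be chained as in the sketch below. The caption string is a placeholder, and the Helsinki-NLP/opus-mt-en-hi checkpoint and gTTS are one possible choice of tools, not necessarily the ones used in the paper.

```python
# Sketch: translate a generated caption to Hindi and synthesize speech.
# The caption is a placeholder; the translation model and gTTS are assumed
# tool choices, not necessarily those used in the paper.
from transformers import pipeline
from gtts import gTTS

caption = "a dog is running on the beach"          # output of the CNN-GRU captioner

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")
hindi_caption = translator(caption)[0]["translation_text"]
print(hindi_caption)

tts = gTTS(text=hindi_caption, lang="hi")          # Hindi text-to-speech
tts.save("caption_hi.mp3")                         # played back by the voice assistant
```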