Daniel Neagu - Academia.edu

Daniel Neagu

Papers by Daniel Neagu

Investigating the relationship between the distribution of local semantic concepts and local keypoints for image annotation

2014 14th UK Workshop on Computational Intelligence, Sep 1, 2014

Integration strategies for toxicity data from an empirical perspective

2014 14th UK Workshop on Computational Intelligence, Sep 1, 2014

B. Reusch (Ed.): Fuzzy Days 2001, LNCS 2206, pp. 152--161, 2001

Spatial pyramid local keypoints quantization for bag of visual patches image representation

2010 10th International Conference on Intelligent Systems Design and Applications, 2010

The bag of visual patches (BOP) image representation has been a major research topic in the computer vision literature for scene and object recognition tasks. Building visual vocabularies from local feature vectors extracted automatically from images has a direct effect on producing discriminative visual patches. Local image features carry important information about their locations in the image, which is ignored during the quantization process used to build visual vocabularies. In this paper, we propose the Spatial Pyramid Vocabulary Model (SPVM) to build visual vocabularies from local image features at each pyramid level. We show, with experiments on a multi-class classification task using 700 natural scene images, that the spatial pyramid vocabulary model is suitable and discriminative for bag-of-visual-patches semantic image representation compared to the universal vocabulary model (UVM).
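The pipeline the abstract describes — quantizing local descriptors against a visual vocabulary and histogramming them per spatial-pyramid cell — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names, the Euclidean nearest-word assignment, and the cell layout are assumptions.

```python
import math

def nearest_word(desc, codebook):
    """Index of the codebook (visual word) vector closest to desc.

    Euclidean distance is assumed here; the paper may use a
    different assignment rule.
    """
    best, best_d = 0, float("inf")
    for i, word in enumerate(codebook):
        d = math.dist(desc, word)
        if d < best_d:
            best, best_d = i, d
    return best

def pyramid_histogram(keypoints, codebook, img_w, img_h, levels=2):
    """Concatenate visual-word histograms over a spatial pyramid.

    keypoints: list of (x, y, descriptor) tuples.
    Level l splits the image into 2**l x 2**l cells; each cell gets
    its own histogram over the vocabulary, so keypoint locations are
    no longer discarded during quantization.
    """
    k = len(codebook)
    hist = []
    for level in range(levels + 1):
        cells = 2 ** level
        cell_hists = [[0] * k for _ in range(cells * cells)]
        for x, y, desc in keypoints:
            cx = min(int(x / img_w * cells), cells - 1)
            cy = min(int(y / img_h * cells), cells - 1)
            cell_hists[cy * cells + cx][nearest_word(desc, codebook)] += 1
        for h in cell_hists:
            hist.extend(h)
    return hist
```

With `levels=1` and a vocabulary of size k, the output length is k (whole image) plus 4k (the 2x2 cells), so the representation grows with the pyramid depth.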

Fusing integrated visual vocabularies-based bag of visual words and weighted colour moments on spatial pyramid layout for natural scene image classification

The bag of visual words (BOW) model is an efficient image representation technique for image categorisation and annotation tasks. Building good visual vocabularies from automatically extracted image feature vectors produces discriminative visual words, which can improve the accuracy of image categorisation tasks. Most approaches that use the BOW model to categorise images ignore useful information that can be obtained from image classes when building visual vocabularies. Moreover, most BOW models use intensity features extracted from local regions and disregard colour information, which is an important characteristic of any natural scene image. In this paper we show that integrating visual vocabularies generated from each image category improves the BOW image representation and the accuracy of natural scene image classification. We use a keypoint density-based weighting method to combine the BOW representation with image colour information on a spatial pyramid layout. In addition, we show that visual vocabularies generated from the training images of one scene image dataset can plausibly represent another scene image dataset in the same domain, reducing the time and effort needed to build new visual vocabularies. The proposed approach is evaluated over three well-known scene classification datasets with 6, 8 and 15 scene categories respectively, using 10-fold cross-validation. The experimental results, using support vector machines with a histogram intersection kernel, show that the proposed approach outperforms baseline methods such as Gist features, rgbSIFT features and different configurations of the BOW model.
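The histogram intersection kernel the abstract pairs with SVMs, and the idea of fusing a BOW channel with a colour channel, can be sketched as below. The fusion weight and function names are illustrative assumptions; the paper's keypoint density-based weighting is more elaborate than a single fixed weight.

```python
def histogram_intersection(h1, h2):
    """K(h1, h2) = sum_i min(h1[i], h2[i]).

    A valid Mercer kernel for non-negative histograms, so it can be
    plugged into an SVM as a custom kernel.
    """
    return sum(min(a, b) for a, b in zip(h1, h2))

def fused_kernel(bow1, bow2, colour1, colour2, w=0.5):
    """Convex combination of per-channel intersection kernels.

    w is a hypothetical fixed weight standing in for the paper's
    keypoint density-based weighting of the colour-moment channel.
    """
    return (w * histogram_intersection(bow1, bow2)
            + (1 - w) * histogram_intersection(colour1, colour2))
```

A sum of valid kernels with non-negative weights is itself a valid kernel, which is why this kind of channel fusion can be done at the kernel level rather than by concatenating feature vectors.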
