The Assignment of Tags to Images in Internet: Language Skill Evaluation for Tag Recommendation
Related papers
"Image tagging on the Internet is becoming a crucial aspect of the search activity of many users all over the world, as online content evolves from being mainly text-based to being multimedia-based (text, images, sound, …). In this paper we present a study carried out for native and non-native English-language taggers, with the objective of providing user support depending on the detected language skills and characteristics of the user. To do this, we analyze the differences between how users tag objectively (using what we call ‘see’ type tags) and subjectively (using what we call ‘evoke’ type tags). We study the data using bivariate correlation, visual inspection and rule induction. We find that the objective/subjective factors are discriminative for native/non-native users and can be used to create a data model. This information can be used to help and support the user during the tagging process."
Image Tagging in the Spanish Language in Internet-A User Study and Data Analysis
Latin American Web Congress (LA-WEB), 2009
Authors: David F. Nettleton, Mari-Carmen Marcos, Bartolomé Mesa-Lao. In LA-WEB '09 Proc. 2009 Latin American Web Congress, pp. 120-127. IEEE Computer Society Washington, DC, USA. Users who tag images on the Internet in the Spanish language can come from any of the Spanish-speaking countries of the world: different countries with different cultures and with variations in vocabulary and forms of expression, which can influence their choice of tags while tagging images. They can also have different levels of tagging skill, in semantic terms (diversity of vocabulary) and syntactic terms (errors incurred while defining the tags). In this paper we present a study carried out for natives of different Spanish-speaking countries, with the objective of providing user support depending on the detected language skills and characteristics of the user. Using the syntactic and semantic factors we can profile users in terms of their skill level and other characteristics, and then use these profiles to offer customized support for image tagging in the Spanish language.
The Effect of Automatic Tagging Using Multilingual Metadata for Image Retrieval
Advances in Intelligent Systems and Computing, 2018
One way to acquire multimedia content on the Web is through tags that describe the content. However, different words may be tagged to the same or similar subjects, because tagging is performed in an essentially arbitrary manner by humans. In addition, tags are provided in the specific language of the content. This paper proposes a method of automatic tagging that compensates for insufficiently described content by using multiple languages. Our experimental evaluations showed that users could obtain more appropriate images with this method.
Social tagging provides valuable and crucial information for large-scale web image retrieval. It is ontology-free and easy to obtain; however, irrelevant tags frequently appear, and users typically will not tag all semantic objects in the image, which is also called semantic loss. To avoid noises and compensate for the semantic loss, tag recommendation is proposed in literature. However, current recommendation simply ranks the related tags based on the single modality of tag co-occurrence on the whole dataset, which ignores other modalities, such as visual correlation. This paper proposes a multi-modality recommendation based on both tag and visual correlation, and formulates the tag recommendation as a learning problem. Each modality is used to generate a ranking feature, and Rankboost algorithm is applied to learn an optimal combination of these ranking features from different modalities. Experiments on Flickr data demonstrate the effectiveness of this learning-based multi-modality recommendation strategy.
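The multi-modality recommendation described above turns each modality into a ranking feature and learns how to combine them with RankBoost. As a minimal sketch of the combination step only (the RankBoost learning is replaced here by fixed illustrative weights, and all names are hypothetical, not the authors' implementation):

```python
# Hypothetical sketch: ranking candidate tags by combining two
# "modalities" (tag co-occurrence and visual correlation) into one
# score. In the paper, the weights are learned by RankBoost; here
# they are fixed purely for illustration.
def recommend_tags(candidate_tags, cooccur_score, visual_score, weights=(0.6, 0.4)):
    """Rank candidate tags by a weighted sum of per-modality scores.

    cooccur_score, visual_score: dicts mapping tag -> score in [0, 1].
    Returns tags sorted from most to least recommended.
    """
    w_co, w_vis = weights
    combined = {
        t: w_co * cooccur_score.get(t, 0.0) + w_vis * visual_score.get(t, 0.0)
        for t in candidate_tags
    }
    return sorted(combined, key=combined.get, reverse=True)
```

The point of the learning step in the paper is precisely to replace the hand-picked `weights` with a combination optimized over training rankings.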
Comparing the Language Used in Flickr, General Web Pages, Yahoo Images and Wikipedia
Words can be associated with images in different ways. Google and Yahoo use text found around a photo on a web page; Flickr image uploaders add their own tags. How do the annotations differ when they are extracted from text and when they are manually created? How do these language populations compare to written text? Here we continue our exploration of the differences between these languages.
Journal of the Association for Information Science and Technology, 2014
Crowdsourcing has emerged as a way to harvest social wisdom from thousands of volunteers performing series of tasks online. However, little research has been devoted to exploring the impact of factors such as the content of a resource or the design of the crowdsourcing interface on user tagging behavior. While images' titles and descriptions are frequently available in digital image libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper offers insight to curators of digital image libraries who face this dilemma by examining how descriptions influence users' tagging behavior and how this relates to (a) the nature of the tags, (b) the emergent folksonomy, and (c) the findability of the images in the tagging system. We compared two different methods for collecting image tags from Amazon's Mechanical Turk crowdworkers: with and without image descriptions.
2010
Exploiting the cumulative behavior of users is a common technique used to improve many popular online services. We build a tag spell checker using a graph-based model. In particular, we present a novel technique based on the graph of tags associated with objects made available by online sites such as Flickr and YouTube. We show the effectiveness of our approach through experiments on real-world data, achieving a precision of up to 93% with a recall (i.e., the proportion of errors detected) of up to 100%.
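The abstract above exploits cumulative user behavior: a rare tag that closely resembles a far more frequent one is likely a misspelling. As a minimal sketch under that assumption (the real system works on the tag co-occurrence graph of sites like Flickr; the thresholds and function names here are hypothetical):

```python
from difflib import SequenceMatcher

# Hypothetical sketch of a spell checker in the spirit of the
# graph-based approach above: a tag is flagged when a very similar
# tag is dramatically more frequent in the collection.
def suggest_correction(tag, tag_counts, min_ratio=0.8, min_freq_gain=10):
    """Return a likely correction for `tag`, or None if none is confident.

    tag_counts: dict mapping tag -> observed frequency.
    A candidate must be at least `min_ratio` similar (difflib ratio)
    and at least `min_freq_gain` times more frequent than `tag`.
    """
    freq = tag_counts.get(tag, 0)
    best = None
    for other, count in tag_counts.items():
        if other == tag or count < freq * min_freq_gain:
            continue  # not enough frequency evidence
        if SequenceMatcher(None, tag, other).ratio() >= min_ratio:
            if best is None or count > tag_counts[best]:
                best = other
    return best
```

For example, with counts `{"sunset": 5000, "sunser": 3}`, the rare "sunser" would be mapped to "sunset", while "sunset" itself is left alone.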
The accuracy and value of machine-generated image tags
Proceedings of the ACM International Conference on Image and Video Retrieval - CIVR '10, 2010
Automated image tagging is a problem of great interest, due to the proliferation of photo sharing services. Researchers have achieved considerable advances in understanding motivations and usage of tags, recognizing relevant tags from image content, and leveraging community input to recommend more tags. In this work we address several important issues in building an end-to-end image tagging application, including tagging vocabulary design, taxonomy-based tag refinement, classifier score calibration for effective tag ranking, and selection of valuable tags, rather than just accurate ones. We surveyed users to quantify tag utility and error tolerance, and use this data both in calibrating scores from automatic classifiers and in taxonomy-based tag expansion. We also compute the relative importance among tags based on user input and statistics from Flickr. We present an end-to-end system evaluated on thousands of user-contributed photos using 60 popular tags. We can issue four tags per image with over 80% accuracy, up from 50% baseline performance, and we confirm through a comparative user study that value-ranked tags are preferable to accuracy-ranked tags.
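Score calibration matters here because raw scores from independently trained per-tag classifiers are not on a common scale, so they cannot be ranked against each other directly. A minimal sketch of sigmoid (Platt-style) calibration, which is one standard way to do this; the parameter values and function names are illustrative assumptions, not the paper's actual method:

```python
import math

# Hypothetical sketch: map raw per-tag classifier scores through a
# fitted sigmoid so they become comparable pseudo-probabilities,
# then rank tags by the calibrated values.
def calibrate(score, a=-4.0, b=2.0):
    """Platt-style sigmoid calibration: 1 / (1 + exp(a*score + b)).

    The (a, b) parameters would normally be fitted per classifier
    on held-out data; the defaults here are illustrative only.
    """
    return 1.0 / (1.0 + math.exp(a * score + b))

def rank_tags(raw_scores, params):
    """raw_scores: dict tag -> raw score; params: dict tag -> (a, b)."""
    calibrated = {t: calibrate(s, *params.get(t, (-4.0, 2.0)))
                  for t, s in raw_scores.items()}
    return sorted(calibrated, key=calibrated.get, reverse=True)
```

Because each classifier gets its own fitted (a, b), a score of 0.6 from a strict classifier and 0.6 from a lenient one can map to very different calibrated values, which is what makes cross-tag ranking meaningful.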
This study aims to consider the value of user-assigned image tags by comparing the facets that are represented in image tags with those that are present in image queries, to see if there is a similarity in the way that users describe and search for images. A sample dataset was created by downloading a selection of images and associated tags from Flickr, the online photo-sharing web site. The tags were categorised using image facets from Shatford’s matrix, which has been widely used in previous research into image indexing and retrieval. The facets present in the image tags were then compared with the results of previous research into image queries. The results reveal that there are broad similarities between the facets present in image tags and queries, with people and objects being the most common facet, followed by location. However, the results also show that there are differences in the level of specificity between tags and queries, with image tags containing more generic terms and image queries consisting of more specific terms. The study concludes that users do describe and search for images using similar image facets, but that measures to close the gap between specific queries and generic tags would improve the value of user tags in indexing image collections. Research into tagging has tended to focus on textual resources, with less research into non-textual documents. In particular, little research has been undertaken into how user tags compare to the terms used in search queries, particularly in the context of digital images.