Syntharch: Interactive Image Search with Attribute-Conditioned Synthesis

FISH: a practical system for fast interactive image search in huge databases

2008

The problem of search and retrieval of images using relevance feedback has attracted tremendous attention in recent years from the research community. A real-world-deployable interactive image retrieval system must (1) be accurate, (2) require minimal user interaction, (3) be efficient, (4) be scalable to large collections (millions) of images, and (5) support multi-user sessions. For good accuracy, we need effective methods for learning the relevance of image features based on user feedback, both within a user session and across sessions. Efficiency and scalability require a good index structure for retrieving results. The index structure must allow the relevance of image features to continually change with fresh queries and user feedback. The state-of-the-art methods available today each address only a subset of these issues. In this paper, we build a complete system, FISH (Fast Image Search in Huge databases). In FISH, we integrate selected techniques available in the literature, while adding a few of our own. We perform extensive experiments on real datasets to demonstrate the accuracy, efficiency and scalability of FISH. Our results show that the system can easily scale to millions of images while maintaining interactive response time.

Interactive visual information retrieval

2000

The need to retrieve visual information from large image and video collections is shared by many application domains. This paper describes the main features of our image search engine, Quicklook. Quicklook allows the user to query image and video databases with the aid of example images or a user-made sketch, and to progressively refine the system's response by indicating the relevance or non-relevance of the retrieved items.

Visual information retrieval using synthesized imagery

2007

In this project (VIRSI) we investigate the promising content-based retrieval paradigm known as interactive search or relevance feedback, and aim to extend it through the use of synthetic imagery. In relevance feedback methods, the user is a key factor in the search process, providing positive and negative feedback on the results, which the system uses to iteratively improve the set of candidate results. In our approach we closely integrate the generation of synthetic imagery in the relevance feedback process through a new fundamental paradigm: Artificial Imagination (AIm).

Image Retrieval with Interactive Query Description and Database Revision

International Journal of Engineering Research and Technology (IJERT), 2014

https://www.ijert.org/image-retrieval-with-interactive-query-description-and-database-revision
https://www.ijert.org/research/image-retrieval-with-interactive-query-description-and-database-revision-IJERTV3IS031148.pdf

Users desire systems that understand human intentions and behave accordingly. Content-based image retrieval (CBIR) systems are used to retrieve the most similar images from a large group of images. Efficient image retrieval is challenging when the user is very specific about the image content and the database contains a wide variety of images. An interactive image retrieval system that intelligently gives satisfactory results is proposed here. Novel methods of feature levels and Database Revision (DR) are used for fast and precise retrieval. To obtain fast results from large databases, retrieval is organized into a number of levels based on various content features, and the database is revised accordingly by discarding images that do not qualify within a cutoff. With Query Description (QD), users can continually describe the query image by selecting relevant images from the results; further retrieval proceeds from these selections, and the database is revised accordingly. This allows users to control the direction of the search and obtain satisfactory results.

An image retrieval system with automatic query modification

IEEE Transactions on Multimedia, 2002

Most interactive "query-by-example" based image retrieval systems utilize relevance feedback from the user for bridging the gap between the user's implied concept and the low-level image representation in the database. However, traditional relevance feedback usage in the context of content-based image retrieval (CBIR) may not be very efficient due to a significant overhead in database search and image download time in client-server environments. In this paper, we propose a CBIR system that efficiently addresses the inherent subjectivity in user perception during a retrieval session by employing a novel idea of intra-query modification and learning. The proposed system generates an object-level view of the query image using a new color segmentation technique. Color, shape and spatial features of individual segments are used for image representation and retrieval.

Interactive content-based image retrieval using relevance feedback

2002

Database search engines are generally used in a one-shot fashion in which a user provides query information to the system and, in return, the system provides a number of database instances to the user. A relevance feedback system allows the user to indicate to the system which of these instances are desirable, or relevant, and which are not. Based on this feedback, the system modifies its retrieval mechanism in an attempt to return a more desirable instance set to the user.
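The feedback loop described above is commonly realized with Rocchio-style query refinement: the query's feature vector is moved toward the centroid of the images the user marked relevant and away from the centroid of the non-relevant ones. A minimal sketch, assuming vectors are plain Python lists; the weights alpha, beta and gamma are illustrative defaults, not values from the paper:

```python
def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.25):
    """Move the query vector toward relevant feature vectors
    and away from non-relevant ones (Rocchio-style update)."""
    dim = len(query)

    def centroid(vectors):
        # Mean of the given feature vectors; zero vector if none were marked.
        if not vectors:
            return [0.0] * dim
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    rel_c = centroid(relevant)
    non_c = centroid(nonrelevant)
    return [alpha * query[i] + beta * rel_c[i] - gamma * non_c[i]
            for i in range(dim)]
```

Each feedback round replaces the query vector with the updated one, so the retrieval mechanism re-ranks the database against a query that better reflects the user's intent.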

Interactive image retrieval using text and image content

Current image retrieval systems are successful in retrieving images using keyword-based approaches. However, they are incapable of retrieving images that are context-sensitive or annotated inappropriately. Content-Based Image Retrieval (CBIR) aims at developing techniques that support effective searching and browsing of large image repositories, based on automatically derived image features. Current CBIR systems suffer from the semantic gap. Though user feedback is suggested as a remedy to this problem, it often leads to distraction in the search. To overcome these disadvantages, we propose a novel interactive image retrieval system integrating text and image content to enhance retrieval accuracy. We also propose a novel refining-search algorithm to narrow down the search further from the retrieved images. The experimental results demonstrate the performance of the proposed system.

Filter image browsing: Interactive image retrieval by using database overviews

2001

Human-computer interaction is a decisive factor in effective content-based access to large image repositories. In current image retrieval systems the user refines the query by selecting example images from a relevance ranking. Since the top-ranked images are all similar, user feedback often results in a rearrangement of the presented images only. For better incorporation of user interaction in the retrieval process, we have developed the Filter Image Browsing method. It also uses feedback through image selection; however, it is based on differences between images rather than similarities. Filter Image Browsing presents overviews of relevant parts of the database to users. Through interaction, users then zoom in on parts of the image collection. By repeatedly limiting the information space, the user quickly ends up with a small set of relevant images. The method can easily be extended to the retrieval of multimedia objects. For evaluation of the Filter Image Browsing retrieval concept, a user simulation is applied to a pictorial database containing 10,000 images acquired from the World Wide Web by a search robot. The simulation incorporates uncertainty in the definition of the information need by users. Results show Filter Image Browsing outperforms plain interactive similarity ranking in the effort required from the user. The method also produces predictable results for retrieval sessions, so that the user quickly knows whether a successful session is possible at all. Furthermore, the simulations show the overview techniques are suited to applications such as hand-held devices, where screen space is limited.
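The repeated narrowing described above can be sketched as a cluster-and-select loop: group the remaining images by some feature, show one group per overview cell, and keep only the group the user picks. A toy illustration, where the grouping keys and the tiny "database" are invented for the example and stand in for the paper's overview computation:

```python
from collections import defaultdict

def overview(images, key):
    """Group the remaining images by a feature; each group is one
    part of the collection shown in the overview."""
    groups = defaultdict(list)
    for img in images:
        groups[key(img)].append(img)
    return groups

def zoom_in(images, key, chosen_value):
    """Keep only the images in the group the user selected,
    shrinking the information space for the next round."""
    return overview(images, key)[chosen_value]

# Toy session: filter by dominant colour, then by scene type.
db = [{"id": 1, "colour": "red", "scene": "indoor"},
      {"id": 2, "colour": "red", "scene": "outdoor"},
      {"id": 3, "colour": "blue", "scene": "outdoor"}]
step1 = zoom_in(db, lambda i: i["colour"], "red")        # two images remain
step2 = zoom_in(step1, lambda i: i["scene"], "outdoor")  # one image remains
```

Each round discards whole groups at once, which is why the approach converges faster than re-ranking a list of near-duplicates.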

MirBot: A Multimodal Interactive Image Retrieval System

Lecture Notes in Computer Science, 2013

This study presents a multimodal interactive image retrieval system for smartphones (MirBot). The application is designed as a collaborative game in which users categorize photographs according to the WordNet hierarchy. After taking a picture, the region of interest of the target can be selected, and the image information is sent with a set of metadata to a server in order to classify the object. The user can validate the category proposed by the system to improve future queries. The result is a labeled database with a structure similar to ImageNet, but with contents selected by the users, fully marked with regions of interest, and with novel metadata that can be useful for constraining the search space in future work. The MirBot app is freely available on the Apple App Store.