Parallel Computing in Face Image Retrieval
Related papers
A Parallel Implementation of Content-Based Image Retrieval: Final Project Report
2008
Content-based image retrieval (CBIR) has many applications but remains computationally intensive, mainly because an image database of practical size is large. Our project examines existing CBIR implementations and improves on them using a parallel computing approach. Both the offline feature-extraction stage and the online query stage are parallelized and implemented on the Beowulf cluster at the University of Cincinnati; optimization and evaluation are also performed.
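As a rough illustration of the offline stage such a system parallelizes, here is a minimal Python sketch (not the project's actual code): feature extraction is distributed over worker processes with `multiprocessing`. The histogram feature, toy image format, and worker count are arbitrary choices for the example.

```python
from multiprocessing import Pool

# Toy "image": a flat list of pixel intensities in 0..255.
def extract_histogram(image, bins=8):
    """Offline feature extraction: a normalized grayscale histogram."""
    hist = [0] * bins
    for px in image:
        hist[px * bins // 256] += 1
    total = len(image)
    return [h / total for h in hist]

def extract_all(images, workers=4):
    """Parallel offline stage: the image database is mapped over a process pool."""
    with Pool(workers) as pool:
        return pool.map(extract_histogram, images)

if __name__ == "__main__":
    db = [[(i * 37 + j) % 256 for j in range(1024)] for i in range(100)]
    features = extract_all(db)
    print(len(features), len(features[0]))
```

Because the extraction of each image is independent, the offline stage is embarrassingly parallel; the online query stage, by contrast, typically needs the full feature set and is harder to distribute.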
Parallelizing Image Processing Algorithms for Face Recognition on Multicore Platforms
Journal, 2022
A good face detection system should be able to identify faces under varying degrees of illumination and orientation, and to respond to all plausible variations in the image. The appearance of a face depends on the relative pose between the camera and the face, so features such as the nose or one eye may be hidden; it is also directly influenced by a person's facial expression and may be partially occluded by surrounding objects. A necessary condition for reliable face classification is separating the face from the background, yet faces can appear against complex backgrounds and in different positions, and a recognition system can mistake regions of the background for faces. This paper addresses several such face recognition problems, including segmenting, extracting, and identifying facial features against the background.
In this research we develop a computational model for facial image retrieval that integrates content-based image retrieval (CBIR) techniques and face recognition techniques (FERET) with verbal descriptions of the semantic features of the facial image. Eigenfaces are applied to extract the characteristic feature images of the human faces. One hundred participants chose, ranked, and annotated the semantic features of the human face according to how well each feature differentiates between faces. During retrieval, the system uses the specific semantic features of the face the user is looking for to narrow down the search space. Eigenfaces are then projected onto the narrowed search space to identify and retrieve the faces most similar to the query face from the database; Euclidean distance is used for classification. The database consists of 1500 local facial images of one hundred and fifty participants from the University of Malaya (UM), Kuala Lumpur, and some of their friends and families outside UM. The proposed facial image retrieval system is evaluated through several experiments, with precision and recall used to evaluate the results. The results are encouraging compared to typical facial image retrieval techniques.
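The eigenfaces-plus-Euclidean-distance pipeline the abstract describes can be sketched briefly. This is a generic illustration, not the authors' implementation: face images are flattened to vectors, the top principal components are taken via SVD, and retrieval returns the gallery face nearest to the query in the projected space. The function names and the choice of `k` are assumptions for the example.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_samples, n_pixels) array. Returns the mean face and top-k components."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # SVD of the centred data; rows of vt are the principal axes (eigenfaces).
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, components):
    """Coordinates of one face in eigenface space."""
    return components @ (face - mean)

def retrieve(query, gallery, mean, components):
    """Index of the gallery face closest to the query by Euclidean distance."""
    q = project(query, mean, components)
    coords = np.array([project(g, mean, components) for g in gallery])
    dists = np.linalg.norm(coords - q, axis=1)
    return int(np.argmin(dists))
```

Narrowing the search space by semantic attributes, as the paper does, simply means restricting `gallery` to the faces whose annotations match the user's description before calling `retrieve`.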
FaceMatch: Real-World Face Image Retrieval
Communications in Computer and Information Science, 2017
We researched and developed a practical methodology for face image retrieval (FIR) based on an optimally weighted ensemble of image descriptors. We describe a single-image-per-person (SIPP) face image retrieval system for real-world applications, including large photo-collection search, person location in disaster scenarios, and semi-automatic image data annotation. Our system provides efficient means for face detection, matching, and annotation; works with unconstrained digital photos of variable quality; requires no time-consuming training; and still shows a commercial performance level at its sub-tasks. It benefits the public by providing practical FIR technology, annotated image data, and web services to a real-world family reunification system.
DiFace: A Face-based Video Retrieval System with Distributed Computing
With the prevalence of video surveillance and the extraordinary number of online video resources, demand for effective and efficient content-based video analysis tools has grown significantly in recent years. The human face has always been one of the most important points of interest in automatic video analysis. In this paper we design a face-based video retrieval system and analyze the three key issues in constructing such systems: frame extraction based on face detection, key-frame selection based on face tracking, and relevant-video retrieval using PCA-based face matching. To cope with the huge number of videos, we implemented a prototype system, DiFace, on the Hadoop distributed computing framework. We populated the system with a baseline dataset of TED-talk fragments provided by the 2014 Chinese national big data contest. Experimental results showed the effectiveness of the system architecture and of the techniques employed.
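The retrieval step in such a distributed design follows the classic map/reduce pattern that Hadoop executes at scale. The in-process Python sketch below is hypothetical (DiFace's actual jobs are not described in this detail): the map phase emits a query-to-keyframe distance per video, and the reduce phase keeps the best-matching key frame for each video and ranks the videos.

```python
from collections import defaultdict

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def map_phase(query, keyframes):
    """keyframes: iterable of (video_id, feature_vector). Emits (video_id, distance)."""
    for video_id, feat in keyframes:
        yield video_id, euclidean(query, feat)

def reduce_phase(pairs):
    """Group by video_id, keep the minimum distance (best key frame), rank ascending."""
    best = defaultdict(lambda: float("inf"))
    for video_id, dist in pairs:
        best[video_id] = min(best[video_id], dist)
    return sorted(best.items(), key=lambda kv: kv[1])
```

On Hadoop, the map phase runs on the nodes holding the key-frame features and only the per-video minima are shuffled to the reducers, which is what makes the huge-video-count case tractable.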
Facefetch: An Efficient and Scalable Face Retrieval System That uses Your Visual Memory
2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM), 2019
Often we wish to envision a person we have met but cannot recall exactly what they looked like beyond a slight impression of the face. Yugo Sato et al. introduced a face retrieval system for this problem that uses visual input from the user to retrieve the target face. The major drawback of their approach is that it is slow and applicable only to small databases such as the Chicago Face Database. In this paper, we introduce a robust and scalable face retrieval system capable of retrieving the envisioned face from a large-scale database. Furthermore, instead of asking for information specific to the target, our system asks the user to select common face attributes they remember the target face had; it uses these to filter out irrelevant faces, speeding up the search. The system then asks the user to select several images that resemble the envisioned face, and on the basis of this selection it automatically reduces the "semantic gap" between the human description and the computer-based description of the target image. In user studies on a large-scale database, our framework beat the state-of-the-art results on this task, retrieving the envisioned face in approximately half the number of search iterations and one third of the overall search time, thereby placing much less burden on the user.
Face-based image retrieval-one step toward object-based image retrieval
2005
In this paper we propose a method to retrieve images based on the persons shown. Given a query image of a group of people, the method aims to retrieve those images in which the same persons are depicted. We show experimentally that this aim is achieved for rather simple tasks and that improvements over baseline methods are possible for harder tasks.
An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images. Most traditional methods of image retrieval add metadata such as captions, keywords, or descriptions to the images so that retrieval can be performed over the annotation words. Content-based face image retrieval is an existing technology for retrieving images from large databases, but the retrieval results are often unsatisfactory. The proposed method reduces errors and achieves better gains in face retrieval by combining two orthogonal methods: attribute-enhanced sparse coding and attribute-embedded inverted indexing. These methods improve face retrieval in the offline stage, where the features of each image are extracted and then compared with the images present in the database. INDEX TERMS: Face feature, Content-based image retrieval
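To make the inverted-indexing idea concrete, here is a hypothetical Python sketch (the paper's actual codeword and attribute definitions are not reproduced here): each face is indexed under its visual words, and a binary attribute signature is embedded in each posting so that faces whose attributes disagree with the query can be skipped without a full feature comparison.

```python
from collections import defaultdict

def build_index(faces):
    """faces: {face_id: (visual_words, attribute_bits)} -> word -> postings."""
    index = defaultdict(list)
    for face_id, (words, attrs) in faces.items():
        for w in set(words):
            index[w].append((face_id, attrs))  # attributes embedded in the posting
    return index

def query(index, words, attrs, max_flips=0):
    """Candidate face ids sharing a visual word whose attributes (nearly) match."""
    hits = set()
    for w in set(words):
        for face_id, a in index.get(w, []):
            # Hamming distance on the attribute bits; small distances tolerated.
            if bin(a ^ attrs).count("1") <= max_flips:
                hits.add(face_id)
    return hits
```

The `max_flips` tolerance is an illustrative knob: with it set to zero only exact attribute matches survive, while larger values trade precision for recall when attribute detection is noisy.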
2013
The significant increase in the collections of digital images and videos that must be managed has created a need for efficient methods of archiving and retrieving these images. Facial images have gained importance among digital images because of their use in many aspects of life, such as airports, law enforcement, security systems, and automated surveillance. The basic content-based image retrieval (CBIR) system used for general image retrieval is not effective with facial images, especially when the query takes the form of a user description. Current CBIR is based on low-level features such as color, texture, shape, and eigenfaces, so it cannot capture the semantic aspects of a facial image. Humans by nature use semantic (high-level) descriptions of what they are looking for, and they normally encounter difficulties with descriptions based on low-level features. This is be...
Shared-memory parallelization for content-based image retrieval
2006
In this paper we show how modern shared-memory parallelization techniques can achieve nearly linear speedup in content-based image retrieval. Using OpenMP, only a few changes to the source code are needed to exploit the capabilities of current multi-core/multiprocessor systems. These techniques allow computationally expensive methods to be used in interactive retrieval scenarios, which has not been possible so far. The same ideas were also applied to a clustering algorithm, where substantial performance improvements were observed as well.
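The paper's technique is an OpenMP parallel-for in C/C++; as a loose Python analog (an assumption for illustration, not the authors' code), the query-to-database distance loop below is split into chunks processed by threads over a shared NumPy feature matrix. NumPy releases the GIL inside `norm`, so the threads can genuinely overlap, mirroring the shared-memory pattern.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def distances_parallel(query, features, workers=4):
    """Euclidean distance from `query` to every row of the shared `features` matrix."""
    chunks = np.array_split(features, workers)  # one contiguous slice per thread
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda c: np.linalg.norm(c - query, axis=1), chunks)
    return np.concatenate(list(parts))
```

As in the OpenMP case, the loop body is unchanged; only the iteration space is partitioned, which is why so few source changes are needed to parallelize such retrieval kernels.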