Demonstration of an Open Source Framework for Qualitative Evaluation of CBIR Systems
2018, Proceedings of the 26th ACM international conference on Multimedia
Evaluating image retrieval systems quantitatively, for example by computing measures like mean average precision, allows for objective comparison against a ground truth. However, in cases where ground truth is not available, the only alternative is to collect feedback from a user. Thus, qualitative assessments become important to better understand how the system works. Visualizing the results can be, in some scenarios, the only way to evaluate the results obtained and also the only opportunity to identify that a system is failing. This necessitates developing a User Interface (UI) for a Content-Based Image Retrieval (CBIR) system that allows visualization of results and improvement via capturing user relevance feedback. A well-designed UI facilitates understanding of the performance of the system, both in cases where it works well and, perhaps more importantly, in those which highlight the need for improvement. Our open-source system implements three components that enable researchers to quickly add these capabilities to their retrieval engine. We present: a web-based user interface to visualize retrieval results and collect user annotations; a server that simplifies connection with any underlying CBIR system; and a server that manages the search engine data. The software itself is described in a separate submission to the ACM MM Open Source Software Competition.
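For context on the quantitative path the abstract contrasts with its qualitative approach, mean average precision can be sketched as below. This is a minimal illustrative Python sketch, not code from the described framework; the function names and toy data are assumptions for the example.

```python
def average_precision(ranked, relevant):
    """AP for one query: mean of the precision values at each relevant hit."""
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank  # precision at this rank
    return score / len(relevant)

def mean_average_precision(runs):
    """mAP: average of per-query AP over (ranked_list, relevant_set) pairs."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Toy example: two queries with known ground-truth relevant sets.
runs = [
    (["a", "b", "c", "d"], {"a", "c"}),  # AP = (1/1 + 2/3) / 2 = 5/6
    (["x", "y", "z"], {"y"}),            # AP = 1/2
]
print(mean_average_precision(runs))  # → 2/3 ≈ 0.667
```

When no such relevant sets exist, this computation is impossible, which is exactly the situation where the paper's user-feedback UI becomes the evaluation mechanism.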